Q: 20
The processes and methods that allow human users to understand and trust the outputs produced by
AI are important in addressing which key regulatory concern?
Options
A. Interpretable AI
B. Trustworthy AI
C. Explainable AI
D. Responsible AI
Discussion
C. That’s the definition regulators use for transparency and understanding in AI, not just general responsibility or trust.
C tbh. B feels like a trap here, since explainable is the usual regulatory buzzword.
I think C, since regulatory questions usually want "explainable" not just trustworthy. B looks tempting but isn't the compliance term. Disagree?
B
Seen similar phrasing on some practice exams, so C. Official guides usually highlight explainable AI as the compliance go-to.
A isn't right here; C is the compliance keyword regulators expect. Interpretable and trustworthy are related but not what the legal frameworks call out. Pretty sure C fits best for regulatory concerns about understanding AI outputs.
D imo. Responsible AI also covers trust and user understanding here, not just explainability.
Probably A. Interpretable AI is what helps people understand outputs, right?
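Side note for anyone who wants to see what "explainable AI" means beyond the buzzword: here's a minimal Python sketch using scikit-learn's permutation importance, one common way to let a human user see which inputs drive a model's outputs. The synthetic dataset and the model choice are illustrative assumptions, not from any exam guide.

```python
# Minimal sketch of an explainability technique: permutation importance.
# Shuffling one feature at a time and measuring the accuracy drop shows
# how much the model relies on that feature, giving a human reviewer an
# auditable view of the model's behavior. Data and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 5 features, only 2 of which are actually informative.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature 10 times and average the resulting score drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```

Running it prints one score per feature, and the informative features score visibly higher. That kind of human-readable account of why a model produced an output is what the explainability concern is about.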