Q: 11
Which of the following should be the PRIMARY consideration for an organization concerned about
liabilities associated with unforeseen behavior from agentic AI systems?
Options
Discussion
Option C is correct. D looks tempting, but an accountability model directly addresses liability concerns for agentic AI, not just risk appetite.
Option C has come up on similar exam topics, and it makes sense: an accountability model directly addresses how liability is assigned for AI decisions.
I don't think D is right here. The question specifically asks for the primary factor for liability, so C (accountability model) fits best. D is about general risk tolerance, but it doesn't assign responsibility if something goes wrong. Anyone else agree?
D, not C.
It's C, though I've seen similar scenarios in practice tests where D looked possible. It might help to check the official guide on governance for agentic AI systems if you're still split.
C. I saw a similar question show up in some exam reports, and the answer was the accountability model.
C. Liability always ties back to how you assign accountability for decisions in these systems.
Wouldn't accountability (C) always come before setting risk thresholds (D) if the question is about legal liability, not just risk management?
C/D? C (accountability model) makes more sense for "liabilities," since legal frameworks always come back to who is responsible if an agentic AI goes off-script. D is tempting because risk thresholds matter, but for legal and ethical exposure, clearly defined accountability is the real foundation. Pretty sure it's C, but happy to hear counterpoints if you think D is stronger for liabilities specifically.
Totally see why people pick D, but liability's main issue is pinning down who's responsible when AI goes rogue, and that's exactly what an accountability model (C) does. Not 100% sure because the wording is a bit vague, but C lines up best with how governance handles this. Disagree?