Q: 11
Which of the following should be the PRIMARY consideration for an organization concerned about
liabilities associated with unforeseen behavior from agentic AI systems?
Options
Discussion
Option C is correct. D looks tempting but accountability model directly addresses liability concerns for agentic AI, not just risk appetite.
Option C seen on similar exam topics, makes sense since accountability model directly addresses how liability is assigned for AI decisions.
I don’t think D is right here. The question specifically wants the main factor for liability, so C (accountability model) fits best. D is more general risk tolerance, but doesn’t assign responsibility if something goes wrong. Anyone else agree?
D. not C
Its C, though I've seen similar scenarios in practice tests where D looked possible. Might help to check the official guide on governance for agentic AI systems if you're still split.
C saw a similar question show up in some exam reports and the answer was accountability model.
C , liability always ties back to how you assign accountability for decisions in these systems.
Wouldn't accountability (C) always come before setting risk thresholds (D) if the question is about legal liability, not just risk management?
C/D? C (accountability model) makes more sense for "liabilities" since legal frameworks always go back to who is responsible if an agentic AI goes off-script. D is tempting because risk thresholds matter, but if you're talking legal and ethical exposure, clearly defined accountability is the real foundation. Pretty sure it's C, but happy to hear counterpoints if you think D is stronger for liabilities specifically.
Totally see why people pick D, but liability's main issue is pinning down who’s responsible when AI goes rogue. That’s what an accountability model (C) does. Not 100% sure because wording is a bit vague, but C lines up best with how governance handles this stuff. Disagree?
Be respectful. No spam.
Q: 12
Which of the following is the MOST effective way to mitigate the risk of deepfake attacks?
Options
Discussion
Option C, not D. Provenance checks beat LLM detection since D just spots fakes but C actually blocks them.
C had this exact scenario in a practice set and provenance checks were correct there too.
C imo, provenance checks go deeper than just detection and actually prevent spread. D is tempting if you only think about identifying fakes but that's not full prevention. Saw this logic on similar exam questions.
B
Honestly D seems like the way to go here. LLMs are improving at spotting fake content, so automating detection feels pretty effective.
C or D, but I see C showing up as best practice in the ISACA official guide and practice tests. Labs that focus on media provenance checks really help drill this too. Anyone else think the exam likes C for this kind of scenario?
C vs D? C looks more solid to me since verifying provenance stops deepfakes at the source, not just after the fact. D is flashy but LLMs aren't foolproof for detection. Open if anyone thinks I'm missing something.
C is what I'd pick. Validating the data's origin directly addresses authenticity, which really stops deepfakes before they even enter the system. Saw similar logic in a practice set, but open if someone thinks otherwise.
Not A, C. Had a similar question in an exam report and provenance checks came up as best mitigation.
C tbh. Provenance checks (C) actually cut off deepfakes before they spread since you’re validating the original source. D sounds good, but LLMs can miss stuff and are only detection-not prevention. Seen similar in practice sets and C always lines up as most effective.
Be respectful. No spam.
Q: 13
In the context of generative AI, which of the following would be the MOST likely goal of penetration
testing during a red-teaming exercise?
Options
Discussion
Makes sense to go with A. Red teaming pen tests for generative AI are really about those unexpected outputs using adversarial inputs.
Option A similar topic came up on an official practice test. Exam guide focuses on adversarial inputs for pen testing.
Nah, I don't think it's B. The real point of pen testing with generative AI is to see if adversarial prompts can get unexpected outputs, so A fits best here. B is more model robustness, not red-teaming. Pretty sure that's what exam guides say.
Feels like B, seen this phrased similarly in some practice sets and official guide reviews.
Seen similar in practice sets, pretty sure it matches A. Official guide has good breakdowns on red-teaming AI too.
A tbh, saw a similar theme in some exam reports. Pen testing for gen AI usually targets unexpected outputs with crafted prompts, fits red team objectives.
Probably B, but does the question mean "most likely" from a security or reliability perspective? That could flip my pick.
B tbh
A gets my pick. In red-teaming generative AI, you're usually crafting adversarial prompts to force the model into unexpected or unsafe outputs, not just stressing it or scrambling everything. Pretty sure that's what pen testers focus on for this context, but correct me if you see it differently.
Option B this time. Stress testing the model’s decision-making gets mentioned in some practice tests as a way to see how it handles edge cases, so I think it fits for pen testing sometimes. Definitely check the official guide though if you’re not sure.
Be respectful. No spam.
Q: 14
A financial institution plans to deploy an AI system to provide credit risk assessments for loan
applications. Which of the following should be given the HIGHEST priority in the system’s design to
ensure ethical decision-making and prevent bias?
Options
Discussion
Option C D looks tempting since numbers feel fair, but human-in-the-loop (C) actually catches bias AI can miss. Similar practice questions went for C over D for the ethical bit. Open if anyone thinks D is better here.
Option C, encountered exactly similar question in my exam and it's the right pick.
I don't see how D could be right here. Human-in-the-loop (C) is the key safeguard for ethics and bias, while restricting only to financial metrics misses some real world nuances. C
Maybe C. Encountered exactly similar question in my exam and human-in-the-loop (C) was the best fit for ethics, but I could see D being tempting if it focused only on bias. Anyone else remember this one?
I don’t think it’s C. D is more about cutting out bias by sticking to just the numbers so it feels like the safer bet ethically. But I could be missing something since human review also helps. Thoughts?
Its D
C or D. If we're talking strict prevention of bias, restricting to just objective financial metrics (D) would lower the chance of human subjectivity creeping in. I get that C covers more ethics but D feels closest if it's about measurable fairness. Agree?
C is the one I'd pick here. Human review (C) means a person can spot bias an AI might miss, which covers the ethical aspect way better than just sticking to numbers like D. Pretty sure that's what they're after given "ethical" is called out.
C , saw something like this on a practice set. Human expert making the final call lets you catch bias and explain decisions. That lines up with AAISM's focus on ethical controls over just relying on objective metrics. Anyone think D could still work?
encountered exactly similar question in my exam. in practice, picked C for the ethical and bias piece.
Be respectful. No spam.
Q: 15
An organization recently introduced a generative AI chatbot that can interact with users and answer
their queries. Which of the following would BEST mitigate hallucination risk identified by the risk
team?
Options
Discussion
D . Fine-tuning is always what ISACA likes for AI hallucination cases. Saw this on similar practice exams and official guide too.
Option D fine-tuning is what actually cuts down hallucination in these AI chatbot cases.
D
Model testing sounds like it'd catch most obvious errors before rollout. A
Saw this pop up in recent exam reports. Anyone else see D picked for hallucination risk mitigation?
A or B both seem reasonable. Model testing (A) should catch hallucinations before users see them, and larger training sets (B) can help with model accuracy overall. Pretty sure one of these would work. Disagree?
Its D fine-tuning directly targets hallucinations for orgs using generative AI.
Just to be clear, is the question asking for the BEST way to mitigate hallucination risk after deployment, or during model development? If they're focused on initial deployment, I'd probably lean D, but if it's about ongoing risk management or general controls, option A might fit better.
D is the move here. Fine-tuning adapts the AI to your domain, which official guides and practice tests always point to as best for reducing hallucinations. If you want more, check the ISACA official study guide or sample exam questions. Pretty sure this matches exam logic, but open if you see it differently.
Wouldn’t D (fine-tuning) be the strongest here, since that actually adjusts the model’s outputs to your domain and data? Model testing (A) helps spot issues but doesn’t fix the root cause. Curious if anyone thinks A is even close for "BEST" in real deployment scenarios?
Be respectful. No spam.
Question 11 of 20 · Page 2 / 2