Q: 15
An organization recently introduced a generative AI chatbot that can interact with users and answer their queries. Which of the following would BEST mitigate the hallucination risk identified by the risk team?
Options
Discussion
D. Fine-tuning is what ISACA consistently favors for AI hallucination cases. I've seen this on similar practice exams and in the official guide too.
Option D, fine-tuning, is what actually cuts down hallucination in these AI chatbot cases.
D
A. Model testing sounds like it'd catch most obvious errors before rollout.
Saw this pop up in recent exam reports. Anyone else see D picked for hallucination risk mitigation?
A and B both seem reasonable. Model testing (A) should catch hallucinations before users see them, and larger training sets (B) can help with model accuracy overall. Pretty sure one of these would work. Disagree?
It's D. Fine-tuning directly targets hallucinations for orgs using generative AI.
Just to be clear, is the question asking for the BEST way to mitigate hallucination risk after deployment, or during model development? If they're focused on initial deployment, I'd probably lean D, but if it's about ongoing risk management or general controls, option A might fit better.
D is the move here. Fine-tuning adapts the AI to your domain, which official guides and practice tests consistently point to as the best way to reduce hallucinations. If you want more, check the ISACA official study guide or sample exam questions. Pretty sure this matches exam logic, but I'm open to other takes if you see it differently.
Wouldn’t D (fine-tuning) be the strongest here, since that actually adjusts the model’s outputs to your domain and data? Model testing (A) helps spot issues but doesn’t fix the root cause. Curious if anyone thinks A is even close for "BEST" in real deployment scenarios?
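For anyone who hasn't seen what "fine-tuning on your own domain data" actually looks like in practice, here's a minimal sketch using Hugging Face Transformers. The base model name, the domain_qa.jsonl file, and the hyperparameters are all placeholders I made up for illustration; they aren't part of the question, and a real engagement would use curated, verified domain Q&A data.

```python
# Minimal fine-tuning sketch (assumptions: "gpt2" as a stand-in base model,
# a hypothetical "domain_qa.jsonl" file of verified Q&A text, toy hyperparameters).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; a real project would pick its own base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical dataset of curated, organization-specific question/answer text
dataset = load_dataset("json", data_files="domain_qa.jsonl")["train"]

def tokenize(batch):
    # Each record is assumed to have a "text" field with the Q&A content
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-chatbot",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # Causal LM objective, so no masked-language-modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is just to show why D attacks the root cause: the model's weights get adjusted toward vetted domain content, whereas model testing (A) only observes outputs after the fact.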