Q: 6
A Generative AI Engineer is tasked with improving RAG quality by addressing its inflammatory outputs.
Which action would be most effective in mitigating the problem of offensive text outputs?
Options
Discussion
The official guide and Databricks practice exams both point to D for this kind of RAG quality question.
Option B
D, but if manual review isn't realistic at scale, some orgs might automate the curation step, which could change the picture. Anyone see it otherwise?
D
I don't think B is right here. D deals directly with the root cause: offensive material will keep showing up in RAG outputs unless the upstream data is curated and manually reviewed. Notifying users (B) just sidesteps the real issue; it reads like a decoy option. Saw a similar question on a practice test.
My vote is D, since curating the data actually prevents offensive content from making it into RAG outputs. B feels reactive rather than proactive. Pretty sure that's what Databricks wants here, but open to other takes.
D imo, because just warning users (B) doesn't stop the offensive outputs, but D actually fixes the root issue in the upstream data. Lots of practice questions try to trick you into picking user notification when direct data curation is safer.
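To make the "curate upstream data" argument concrete, here's a rough Python sketch of a pre-indexing filter. Everything in it (the blocklist terms, the sample documents, the helper names) is made up for illustration; a real pipeline would pair a proper toxicity classifier with human review of flagged items before anything gets chunked and embedded.

    # Illustrative sketch only: screen source documents for offensive content
    # before they reach the RAG index, and route flagged ones to manual review.
    BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real list

    def is_flagged(text: str) -> bool:
        """Return True if the document contains any blocklisted term."""
        lowered = text.lower()
        return any(term in lowered for term in BLOCKLIST)

    def curate(documents: list[str]) -> tuple[list[str], list[str]]:
        """Split documents into (clean, flagged-for-manual-review)."""
        clean, flagged = [], []
        for doc in documents:
            (flagged if is_flagged(doc) else clean).append(doc)
        return clean, flagged

    docs = ["A neutral product FAQ entry.", "Some text containing slur1."]
    clean_docs, review_queue = curate(docs)
    # Only clean_docs would go on to chunking/embedding; review_queue goes to a human.
    print(len(clean_docs), "indexed;", len(review_queue), "held for review")

The point is just that the filtering happens upstream of the index, which is why D prevents the problem rather than reacting to it the way B does.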
I think B might be right here because setting clear user expectations about RAG behavior sounds like a decent mitigation step. Letting users know what to expect could help with perception of outputs, especially if some risk of offensive content remains. Not totally sure though, since D does involve more direct control. Agree?