Q: 6
Universal Containers (UC) implements a custom retriever to improve the accuracy of AI-generated
responses. UC notices that the retriever is returning too many irrelevant results, making the
responses less useful. What should UC do to ensure only relevant data is retrieved?
Options
Discussion
B, not A. I had something like this in a mock exam, and changing the DMO helped improve relevance.
Option A
Definitely A for me. Adding filters directly narrows down search results and brings in only the relevant stuff, which is exactly what's needed here. Changing the data model (B) isn't going to fix noisy data if your retriever's too broad, and C would just make it noisier. Pretty confident but wouldn't mind hearing a counterpoint.
C is just not it. A is what they'd want, per the official guide and some practice tests.
I don't think changing the DMO solves relevance here. A fixes it directly, since filters cut the noise.
It's A. Filters are the quickest way to cut down on irrelevant results and focus retrieval. Changing the data model or increasing the number of results won't solve noise; that just changes the source or adds more clutter. Pretty sure this is what they'd expect.
A
Filters let you target what's relevant, so A is the most direct fix.
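To make the argument for A concrete, here is a minimal generic sketch of why filtering at retrieval time improves relevance. This is illustrative Python only, not Salesforce's custom-retriever API; the document fields, filter keys, and scoring here are all hypothetical, but the principle is the same: filters shrink the candidate pool before ranking, so irrelevant records never reach the response.

```python
# Generic sketch: metadata filters applied before scoring restrict
# retrieval to relevant records. NOT the Salesforce retriever API;
# field names ("text", "metadata", "topic") are hypothetical.

def retrieve(documents, query_terms, filters=None, top_k=5):
    """Return up to top_k documents for the query, optionally
    restricted by metadata filters applied before scoring."""
    if filters:
        documents = [
            d for d in documents
            if all(d.get("metadata", {}).get(k) == v for k, v in filters.items())
        ]
    # Naive relevance score: how many query terms appear in the text.
    scored = sorted(
        documents,
        key=lambda d: sum(t in d["text"].lower() for t in query_terms),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    {"text": "Refund policy for premium plans", "metadata": {"topic": "billing"}},
    {"text": "Resetting your password", "metadata": {"topic": "auth"}},
    {"text": "Billing cycle and invoices", "metadata": {"topic": "billing"}},
]

# Without the filter, off-topic documents can still be returned;
# with it, only billing documents are even considered for ranking.
print(retrieve(docs, ["billing"], filters={"topic": "billing"}))
```

Note that raising `top_k` (option C's approach) would only pull in more low-scoring candidates, while the filter removes noise at the source.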
Maybe B. Changing the data model object could shift what gets retrieved and help limit noise, especially if irrelevant info is coming from the current object. I know filters (A) are best practice but B seems plausible to me too.
Wouldn't filters (A) actually be standard practice for targeting relevance in AI retrieval? Swapping data models (B) feels like overkill if the core issue is noisy results, not where they're coming from. Not 100 percent sure though, open to other logic here.