I've seen similar questions on some practice sets; pretty sure it's B. The official guide and sample exams are helpful for these types.
C or D, depending. If by "build trust" they specifically mean candidates understanding how decisions are made, then C is the way to go. But if the main focus is pure efficiency regardless of transparency, D might fit. Which one does the question frame as the primary goal?
This was in one of my practice sets; it's C. Agentspace is all about boosting productivity by letting AI help dig through internal docs and automate some tasks. Not really focused on legacy systems or team permissions, from what I recall.
Option C is right since RAG lets the model grab the latest data straight from your docs in Google Cloud Storage at query time. That way, there's no need to retrain the model every time a policy changes. It's about live retrieval, not static training or auto-summarizing. Pretty sure that's what Google's aiming for here, though let me know if anyone's seen this trip people up on similar questions.
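If it helps to see the shape of that flow, here's a rough sketch of the retrieve-at-query-time pattern, assuming the policy docs sit as plain-text objects in a GCS bucket and you call Gemini through the Vertex AI SDK. The bucket name, project, model name, and the naive keyword retrieval are all my own placeholders, not anything from the question.

```python
# Rough sketch of RAG over GCS docs, not an official reference implementation.
# Assumes: policy docs as .txt objects in a bucket, google-cloud-storage and
# the Vertex AI SDK installed, and a GCP project already set up. Retrieval is
# naive keyword overlap here; a real setup would use an embedding index.
import vertexai
from google.cloud import storage
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # placeholder project

def load_policy_chunks(bucket_name: str) -> list[str]:
    """Pull the current policy text from GCS at query time -- no retraining involved."""
    client = storage.Client()
    return [
        blob.download_as_text()
        for blob in client.list_blobs(bucket_name)
        if blob.name.endswith(".txt")
    ]

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Toy keyword-overlap ranking; stands in for a proper embedding search."""
    terms = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(terms & set(c.lower().split())))[:k]

def answer(question: str, bucket_name: str = "policy-docs") -> str:
    """Ground the model on whatever is in the bucket right now."""
    context = "\n---\n".join(retrieve(question, load_policy_chunks(bucket_name)))
    prompt = (
        "Answer using only the policy excerpts below.\n"
        f"{context}\n\nQuestion: {question}"
    )
    model = GenerativeModel("gemini-1.5-flash")  # model name is an assumption
    return model.generate_content(prompt).text
```

The point being: when a policy doc changes in the bucket, the very next query picks it up, which is why live retrieval beats retraining or pre-summarizing in this scenario.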
Does anyone else think C is only right if you’re looking at deterministic compliance requirements? Generative models aren’t built for strict audit trails or totally predictable outputs, so I’m pretty sure that’s what they’re after here.
B is the trap here, since Gemini actually can process structured numerical data. The real issue is that it's built for generative tasks, not rule-based determinism. Especially in regulated finance, you need auditability and identical outputs every time. Unless I'm missing something, C nails it.
Option C makes the most sense because Gemini is built for flexible inference and content gen, not strict deterministic decision flows. Rule-based engines are way better for regulated stuff that needs exact outputs every time. Pretty sure that's what they're testing here, but correct me if you see it differently.
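To make that contrast concrete, here's a toy sketch (my own illustration, nothing from the exam): a rule-based check returns identical output for identical input and records exactly which rules fired, which is the auditability a generative model can't guarantee. The thresholds and field names are made up.

```python
# Toy rule-based eligibility check: same input always yields the same decision,
# and every fired rule is recorded, so the outcome can be audited and replayed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Applicant:
    credit_score: int
    debt_to_income: float  # e.g. 0.35 means 35%

def decide(applicant: Applicant) -> tuple[str, list[str]]:
    """Deterministic decision plus an audit trail of the rules that fired."""
    trail = []
    if applicant.credit_score < 620:
        trail.append("R1: credit_score < 620 -> decline")
        return "decline", trail
    if applicant.debt_to_income > 0.43:
        trail.append("R2: debt_to_income > 0.43 -> decline")
        return "decline", trail
    trail.append("R3: all thresholds passed -> approve")
    return "approve", trail

decision, audit = decide(Applicant(credit_score=700, debt_to_income=0.30))
print(decision, audit)  # always "approve" with the same trail for this input
```

A generative model can't give you that rule-level traceability or guaranteed repeatability, which is why the regulated decision itself stays with the rules engine.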