I'm picking C, since safety settings are all about filtering inappropriate or harmful content, which makes sense for anything customer-facing. Official docs and practice sets always highlight filtering as the core function, not text length or creativity controls. Pretty sure that's what they want here, but open to hearing if someone has a different take.
Q: 4
A development team is configuring a generative AI model for a customer-facing application and
wants to ensure the generated content is appropriate and harmless. What is the primary function of
the safety settings parameter in a generative AI model?
Options
Discussion
Makes sense to pick C here since safety settings are all about filtering out risky or harmful outputs from the model.
Option C again, with the wordy vendor phrasing. Seen it a few times now on practice sets; it always means content filtering for user safety, not length or randomness. Anyone ever seen Google use D for this?
C. Not much to add, since filtering is what safety settings actually do here.
Probably C, not D. Safety settings are about filtering harmful content, not tweaking creativity or randomness. D is tempting if you skim the question but here it’s really about appropriateness and user safety.
Option C. A similar question popped up on the official practice set; the docs consistently link safety settings to filtering harmful content.
Why does every vendor call basic filtering a "safety setting"? Option C
I don’t think it’s C; I’d go with D. If the team wants to control how random or creative the output is, safety settings can sometimes overlap with diversity controls, especially when aggressive filtering limits output variety. Maybe I’m missing a detail, so feel free to disagree.
C here. Safety settings are mainly for filtering out harmful or inappropriate outputs, exactly what the question's asking about. D is about randomness, not safety controls. Pretty sure C fits best for customer-facing apps, agree?
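To make the C vs D distinction concrete, here's a minimal sketch in plain Python. All names here are illustrative stand-ins, not Google's actual SDK: safety settings act as a content filter on what gets returned, while temperature only changes how the output is sampled.

```python
import random

# Hypothetical stand-in for harm categories a safety setting might block.
BLOCKLIST = {"violence", "harassment"}

def apply_safety_settings(text: str, blocklist: set) -> str:
    """Safety settings: filter inappropriate or harmful content (option C)."""
    if any(term in text.lower() for term in blocklist):
        return "[blocked by safety settings]"
    return text

def apply_temperature(candidates: list, temperature: float) -> str:
    """Temperature: controls randomness/creativity (option D), not safety."""
    if temperature == 0.0:
        return candidates[0]  # deterministic: always the top candidate
    return random.choice(candidates)  # higher temperature -> more variety

print(apply_safety_settings("a reply containing harassment", BLOCKLIST))
# -> [blocked by safety settings]
print(apply_temperature(["Hello!", "Hi there!"], temperature=0.0))
# -> Hello!
```

The point of the sketch is that the two parameters operate at different stages: temperature shapes *which* output is produced, safety settings decide whether an output is *allowed* to reach the customer at all.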
D imo