Q: 8
When documenting information about machine learning (ML) models, which of the following
artifacts BEST helps enhance stakeholder trust?
Options
Discussion
C. Model cards are designed for transparency and cover intended use, risks, and performance in a way non-technical folks can digest. B is great for internal controls but doesn't address broad trust. Pretty sure C is what they want, but open to counterpoints.
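For anyone who hasn't seen one, here's a minimal sketch of the kind of structured summary a model card provides. The fields and the example model are illustrative only, not an official schema — real model card templates include more sections (ethical considerations, caveats, evaluation data, etc.):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Illustrative fields only; real model card templates
    # include additional sections beyond these.
    model_name: str
    intended_use: str
    limitations: list
    metrics: dict = field(default_factory=dict)

    def summary(self) -> str:
        """Plain-language summary aimed at non-technical stakeholders."""
        lines = [
            f"Model: {self.model_name}",
            f"Intended use: {self.intended_use}",
            "Known limitations: " + "; ".join(self.limitations),
        ]
        for name, value in self.metrics.items():
            lines.append(f"{name}: {value}")
        return "\n".join(lines)

# Hypothetical example model, made up for illustration.
card = ModelCard(
    model_name="churn-predictor-v2",
    intended_use="Rank existing customers by churn risk for retention outreach.",
    limitations=["Not validated for new customers", "Trained on US data only"],
    metrics={"AUC (holdout)": 0.87},
)
print(card.summary())
```

The point is that the whole artifact reads in plain language — intent, limits, and performance in one place — which is exactly what hyperparameter logs or data pipeline controls don't give a business stakeholder.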
C or B? Had something like this in a mock and C was correct because model cards explain limitations and intent in plain language, not just technical details. That helps everyone understand what the model does. Pretty sure it's C here, anyone disagree?
Why is it always "model card" for these trust questions? I've seen similar on other practice sets, and C is the only option that actually targets both technical and non-technical stakeholders, not just internal devs.
Maybe B since I had something like this in a mock and data quality controls were highlighted. They help ensure input data is reliable which should boost trust, right? Not 100% on this one.
Option C. Model cards are specifically meant for sharing model details with a wide range of stakeholders. Hyperparameters and data controls matter, but they don't give a full, understandable picture on their own. Not 100% sure everyone uses model cards in production, but that's the intent. Disagree?
B tbh, since data quality controls are often highlighted in the official study guides as key to stakeholder confidence.
I don’t think it’s B. C provides a structured summary for everyone, not just technical teams, so stakeholders actually understand the model. B is important but doesn’t show transparency the same way. Agree?
C is the way to go here. Model cards give a clear overview to all stakeholders, not just technical folks, so they're best for building trust. B is helpful but not as broad. Pretty confident, but let me know if you see it differently.
C
B