Yeah, this fits the Article 5 social scoring ban pretty clearly. D is correct, since the EU AI Act doesn't allow these public-sector predictive tools for parole decisions. I think it's a textbook example, but let me know if I'm missing any nuance here.
Uniform policies across all roles sound right to me, so I'm picking D here. Strong consistency feels foundational for governance, even with AI. It's a close call though, since A gets mentioned a lot too. Open to other views on this one.
If the company already has strict role boundaries that rule out cross-functional teams, would A still count as “foundational”? In that specific setup, it feels like D could edge ahead, even though A usually wins for governance basics.
Looks like it's D here. The way I see it, collecting IP addresses and location data without strong safeguards could breach the integrity and confidentiality principle under GDPR. These count as personal data (not special-category "sensitive" data, but still protected), so they need appropriate security measures. Open to being convinced otherwise though.
B is the only one that's actually a benefit, not a risk, so it fits as the exception here. Nice clear wording in this question; it makes it easy to spot that candidate quality is what they're hoping to improve! If anyone disagrees, let me know, but I'm pretty sure it's B.
I don’t think B is the right move here. At this stage, you need to catch bias as it actually happens, so option A (continue disparity testing) makes more sense. B (analyzing data quality) is important, but it’s really a next step once testing has surfaced an issue. Pretty sure IAPP wants ongoing output monitoring first. Anyone see a reason D could fit?