Q: 3
CASE STUDY
Please use the following to answer the next question:
XYZ Corp., a premier payroll services company that employs thousands of people globally,
is embarking on a new hiring campaign and wants to implement policies and procedures to identify
and retain the best talent. The new talent will help the company's product team expand its payroll
offerings to companies in the healthcare and transportation sectors, including in Asia.
It has become time-consuming and expensive for HR to review all resumes, and they are concerned
that human reviewers might be susceptible to bias.
To address these concerns, the company is considering using a third-party AI tool to screen resumes
and assist with hiring. They have been talking to several vendors about possibly obtaining a third-
party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable
laws.
The organization has a large procurement team that is responsible for the contracting of technology
solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost
solutions. Others within the company are responsible for integrating and deploying technology
solutions into the organization's operations in a responsible, cost-effective manner.
The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also
questions how best to organize and train its existing personnel to use the AI hiring tool responsibly.
Their concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to
change.
Which of the following measures should XYZ adopt to best mitigate its risk of reputational harm from
using the AI tool?
Options
Discussion
A. Testing pre- and post-deployment is probably the best way to catch bias or issues before they damage the company's name, even if it takes some extra effort. Not 100% sure, though.
C or D? Official guides and some practice tests mention cost control and human oversight as decent risk mitigators in some scenarios. Not 100% sure here; maybe review the latest IAPP sample exams to see which fits better.
A
A, it's the only one that actually tackles reputational risk by validating and monitoring for bias both up front and after rollout. Manual review (D) just isn't scalable with thousands of applicants. Not totally sure, but it makes sense given the context.
A or D but leaning toward A. Most official guides and exam reports I checked highlight pre- and post-deployment testing for AI tools as a way to actually address bias and reputational risk, not just shift blame or do full manual review. D looks safe but isn't scalable with thousands of applicants. Anyone seen a different opinion in the latest practice questions?
C vs D. I think C is safer because sticking to the lowest-cost AI tool might serve the procurement team's goals and avoid unnecessary expenses. Not totally sure, since it could miss risk controls, but D feels like a trap given that manual review doesn't scale.
Isn't D mentioned as a safer approach in some official practice tests, despite the manual effort?
Would manual-only screening (D) really scale for thousands of applicants, especially across multiple regions?
Maybe D here. Manual screening by hiring staff seems less risky if the company is really worried about reputation, even if it's not efficient. A sounds logical but D avoids AI bias traps, right?
I don’t think B is enough protection, since just pushing liability to the vendor doesn’t actually reduce XYZ’s reputation risk. Testing the AI before and after launch (A) is more proactive and aligns with current governance best practices. Manual review (D) goes against the whole point of automation. Agree?