Q: 19
CASE STUDY
Please use the following to answer the next question:
ABC Corp is a leading insurance provider offering a range of coverage options to individuals. ABC has
decided to utilize artificial intelligence to streamline and improve its customer acquisition and
underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose large
language model (“LLM”). In particular, ABC intends to use its historical customer data—including
applications, policies, and claims—and proprietary pricing and risk strategies to provide an initial
qualification assessment of potential customers, which would then be routed to a human underwriter
for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness
assessment, and made the decision to deploy the LLM into production. ABC has designated an
internal compliance team to monitor the model during the first month, specifically to evaluate the
accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that
the LLM declines a higher percentage of women's loan applications due primarily to women
historically receiving lower salaries than men.
During the first month when ABC monitors the model for bias, it is most important to?
Options
Discussion
A. Saw this type in a practice test; it's all about ongoing disparity testing once live.
I don't think B is the right move here. At this stage, you need to catch bias as it actually happens, so option A (continue disparity testing) makes more sense. B (analyzing data quality) is important, but it's really a next step after you've found an issue through testing. Pretty sure IAPP wants ongoing output monitoring first. Anyone see a reason D could fit?
A. B is tempting, but that's more for root cause after you find the bias. Exam reports also highlight disparity testing as the best first step for monitoring.
A, since ongoing disparity testing is the best way to catch real-world bias right after deployment. The other options might help long term, but the question wants what's most important during that initial monitoring window. Correct me if you think I'm missing something.
A imo, since the main thing during production monitoring is to actually spot unfair outcomes in real time. Disparity testing checks for statistical bias as it happens, not just after-the-fact detective work. Data analysis (B) matters for root cause, but you can't fix what you haven't detected yet. Makes sense? Pretty sure that's what IAPP wants here.
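For anyone who hasn't seen "disparity testing" outside the exam wording: it basically means tracking outcome rates per group on live decisions and flagging gaps that exceed a threshold. A minimal sketch of what that monthly check might look like (the group labels, field names, and threshold idea are my own assumptions for illustration, not from the case):

```python
# Minimal sketch of an ongoing disparity test on production decisions.
# Group labels and field names are hypothetical, not from the case study.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of dicts like {"group": "F", "approved": True}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def approval_rate_gap(decisions, group_a, group_b):
    """Demographic-parity-style gap: difference in approval rates between two groups."""
    rates = approval_rates(decisions)
    return rates[group_a] - rates[group_b]

# Example monthly batch of the LLM's qualification decisions
batch = [
    {"group": "M", "approved": True}, {"group": "M", "approved": True},
    {"group": "M", "approved": False}, {"group": "F", "approved": True},
    {"group": "F", "approved": False}, {"group": "F", "approved": False},
]
print(f"Approval-rate gap (M - F): {approval_rate_gap(batch, 'M', 'F'):.2f}")
# If the gap exceeds a pre-set threshold, escalate for root-cause analysis (that's where B comes in).
```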
Honestly, these IAPP questions can be super confusing about timing. I'd go with B. During monitoring, it's critical to dig into the training/testing data quality since that's usually where bias creeps in. Disparity testing is good but feels more like a stats check after you've already noticed the issue. Not 100% on this, but that's how I've seen it framed on some similar practice sets. Disagree?
B. Analyzing the training and test data quality seems most important early on since that's usually where hidden bias creeps in. Disparity testing is useful, but feels more like something you'd do once you notice odd outcomes. Not sure if I'm missing some nuance here.
A here. The trap is B, but that's more for pre-production or root cause analysis after you spot bias. Disparity testing is key during active monitoring so you catch unfair outcomes in real time. Pretty sure about this, but open if someone thinks C is better.
A. Saw a similar question on an exam; disparity testing is what they're after here.