I was thinking C since API Gateway can split traffic between endpoints for A/B testing too. Seems like that would let you compare models in production, right? Not sure if that would truly avoid impacting current throughput though. Let me know if I'm missing something.
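For anyone who wants to picture the split, here's a rough boto3 sketch of a canary-style traffic split on a REST API stage (the API ID, stage name, and 10% weight are all made up, not from the question):

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "abc123"   # placeholder: existing REST API in front of the models
STAGE_NAME = "prod"

# Deploy the new integration (e.g. pointing at the candidate model) and send
# only 10% of stage traffic to it; the other 90% keeps hitting the current deployment.
apigw.create_deployment(
    restApiId=REST_API_ID,
    stageName=STAGE_NAME,
    description="Candidate model integration",
    canarySettings={
        "percentTraffic": 10.0,   # share of requests routed to the new deployment
        "useStageCache": False,
    },
)
```

The split happens at the API layer, so whether it really "avoids impacting current throughput" depends on what the endpoints behind it are doing.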
Pretty sure D is right here since Amazon Transcribe custom vocab lets you add or update product names fast; you don't have to retrain the whole model. The others seem more for general AI or search stuff? Not fully confident, let me know if I missed something.
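Rough boto3 sketch of what that looks like (vocabulary name and phrases are placeholders); the vocabulary is applied at transcription time, so there's no retraining step:

```python
import boto3

transcribe = boto3.client("transcribe")

# Create a custom vocabulary containing the product names.
transcribe.create_vocabulary(
    VocabularyName="product-names",      # placeholder name
    LanguageCode="en-US",
    Phrases=["ExampleWidget", "ExampleWidget-Pro"],   # placeholder product names
)

# When the catalog changes, just update the phrase list.
transcribe.update_vocabulary(
    VocabularyName="product-names",
    LanguageCode="en-US",
    Phrases=["ExampleWidget", "ExampleWidget-Pro", "ExampleWidget-Mini"],
)

# Then reference it when starting a transcription job:
# Settings={"VocabularyName": "product-names"}
```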
D. Only this option uses SSE-KMS with S3 for solid encryption and IAM for access control, plus CloudWatch for actual performance monitoring. The others leave out something important or use the wrong monitoring tool. I think D fits the requirements best, but open to other takes if someone sees a catch.
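If it helps, the SSE-KMS piece is usually just default bucket encryption; quick boto3 sketch with a made-up bucket and key alias (the IAM policies and CloudWatch dashboards are separate from this):

```python
import boto3

s3 = boto3.client("s3")

# Default-encrypt everything written to the bucket with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket="my-model-artifacts",   # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/my-data-key",   # placeholder key alias
                },
                "BucketKeyEnabled": True,   # cuts KMS request volume/cost
            }
        ]
    },
)
```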
D imo. Only D has all the controls: KMS encryption on both S3 and Bedrock, CloudTrail for full API auditing, and CloudWatch for regional monitoring (latency/throughput). The others skip key stuff like observability or proper encryption. Pretty sure that's what the question's after but open to pushback if I missed something!
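For the CloudWatch half of that, something like this is roughly what I'd set up (alarm name, model ID, and threshold are made up; the AWS/Bedrock namespace and InvocationLatency metric are what I see in the Bedrock metrics docs, so double-check in your region):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average model invocation latency stays high for 15 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="bedrock-invocation-latency-high",   # placeholder alarm name
    Namespace="AWS/Bedrock",
    MetricName="InvocationLatency",
    Dimensions=[{"Name": "ModelId", "Value": "anthropic.claude-3-haiku-20240307-v1:0"}],  # example model
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=2000.0,   # example threshold, assuming the metric reports milliseconds
    ComparisonOperator="GreaterThanThreshold",
)
```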
Maybe D is the most complete fit. Only D uses KMS for both S3 and Bedrock artifacts, and also calls out CloudTrail (compliance/auditing) and CloudWatch for monitoring. The others either skip encryption at rest or neglect observability. C looks close but it doesn’t mention encryption for data in S3 or deployment. I think D covers every requirement, unless I'm missing a nuance?
Saw a similar one on a practice exam; it has to be D. It's the only one with KMS encryption for both training and deployment artifacts, plus CloudTrail and CloudWatch in scope for compliance and observability. The others miss something the scenario needs, like proper monitoring or full encryption.
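On the CloudTrail side, a quick way to sanity-check the audit trail is to query recent Bedrock events; boto3 sketch below (time window and filter are just examples, and invocation-level calls may need model invocation logging or a trail with the right events enabled, depending on the setup):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Pull the last 24 hours of Bedrock API events from the recent event history.
resp = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "bedrock.amazonaws.com"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    MaxResults=50,
)

for event in resp["Events"]:
    print(event["EventName"], event.get("Username"), event["EventTime"])
```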
Option D makes more sense here. ModelExplainabilityMonitor with SHAP is designed to track feature attribution drift specifically, not just input or output distribution shifts (like C does). C is a common trap but doesn't really capture changes in how the model weighs features. Agree?
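For context, this is roughly how D looks with the SageMaker Python SDK; all names and the analysis-config path are placeholders, and in practice you'd generate the baseline and analysis config first with suggest_baseline() and a SHAPConfig:

```python
from sagemaker import Session, get_execution_role
from sagemaker.model_monitor import ModelExplainabilityMonitor, CronExpressionGenerator

session = Session()
role = get_execution_role()   # assumes this runs in a SageMaker environment

# Monitor that re-computes SHAP feature attributions on captured endpoint traffic
# and compares them against the baseline, flagging attribution drift.
explainability_monitor = ModelExplainabilityMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    max_runtime_in_seconds=1800,
    sagemaker_session=session,
)

explainability_monitor.create_monitoring_schedule(
    monitor_schedule_name="feature-attribution-drift",               # placeholder
    endpoint_input="my-live-endpoint",                               # placeholder endpoint
    analysis_config="s3://my-bucket/clarify/analysis_config.json",   # from suggest_baseline()
    output_s3_uri="s3://my-bucket/monitoring/explainability",
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```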
I'm pretty sure this exact question was on my real exam. D matches what they expect for monitoring feature attribution drift on live traffic.
This scenario is described really clearly. I'm picking C here, since logging inference data and analyzing shifts in distributions could catch changes in how the model uses input features. Pretty sure that's enough for feature drift monitoring.
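If you go that route, the "log inference data" part maps to endpoint data capture; minimal sketch with made-up names (analyzing the captured distributions is a separate job):

```python
from sagemaker.model_monitor import DataCaptureConfig

# Capture a sample of live requests/responses to S3 for later distribution analysis.
capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=20,                                 # example sample rate
    destination_s3_uri="s3://my-bucket/endpoint-capture",   # placeholder bucket/prefix
)

# Passed at deploy time (model object assumed to exist already):
# predictor = model.deploy(
#     initial_instance_count=1,
#     instance_type="ml.m5.xlarge",
#     data_capture_config=capture_config,
# )
```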