Scenario: A claims automation system uses SageMaker AI to predict claim approval based on vehicle damage severity and other features (age, mileage). The model must be continuously monitored for feature attribution drift in production (i.e., whether the model starts prioritizing less relevant features, like vehicle age, over damage severity). Question: Which solution should be implemented? Options:
Option D makes more sense here. ModelExplainabilityMonitor with SHAP is designed to track feature attribution drift specifically, i.e., changes in how much each feature contributes to predictions, not just shifts in the input or output distributions (which is all C covers). C is a common trap, but it doesn't capture changes in how the model weighs features. Agree?
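For intuition on what "attribution drift" actually measures: SageMaker Clarify's explainability monitor compares the baseline and live feature-importance rankings using a normalized discounted cumulative gain (NDCG) score. Here's a toy sketch of that idea in plain Python; this is not the SageMaker API, and the feature names, attribution values, and the 0.9 alert threshold are illustrative assumptions:

```python
import math

def attribution_ndcg(baseline_attr, live_attr):
    """NDCG of the live feature ranking, using baseline mean |SHAP|
    values as relevance. 1.0 means the ranking is unchanged;
    lower values mean the model now weighs features differently."""
    live_order = sorted(live_attr, key=live_attr.get, reverse=True)
    ideal_order = sorted(baseline_attr, key=baseline_attr.get, reverse=True)
    # Discounted cumulative gain: credit each position by baseline relevance
    dcg = sum(baseline_attr[f] / math.log2(i + 2)
              for i, f in enumerate(live_order))
    idcg = sum(baseline_attr[f] / math.log2(i + 2)
               for i, f in enumerate(ideal_order))
    return dcg / idcg

# Hypothetical mean |SHAP| values from the training baseline:
baseline = {"damage_severity": 0.60, "mileage": 0.25, "vehicle_age": 0.15}
# In production, vehicle_age has started to dominate:
live = {"vehicle_age": 0.50, "damage_severity": 0.30, "mileage": 0.20}

score = attribution_ndcg(baseline, live)
print(f"NDCG = {score:.3f}")  # well below 1.0: the ranking flipped
if score < 0.9:  # assumed alert threshold for this sketch
    print("Feature attribution drift detected")
```

The key point: both distributions above could look individually healthy to a plain data-quality monitor (option C), yet the attribution ranking has inverted, which is exactly what ModelExplainabilityMonitor is meant to catch.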