1. Amazon SageMaker Developer Guide: Under the section "Monitor model explainability," it states, "Amazon SageMaker Model Monitor provides the capability to monitor models in production for drift in the attribution of features... The model explainability monitor periodically generates a report that provides insights into which features are most important to your model's predictions."
Source: AWS Documentation, Amazon SageMaker Developer Guide, "Monitor models for data and model quality, bias, and explainability" -> "Monitor model explainability".
3. AWS Machine Learning Blog: The post "Monitor in-production model explainability with Amazon SageMaker Clarify and Model Monitor" explains, "The model explainability monitor helps you understand how your model makes predictions in production. It detects feature attribution drift, which is when the relative importance of the features for a model's predictions changes over time."
Source: AWS Machine Learning Blog, "Monitor in-production model explainability with Amazon SageMaker Clarify and Model Monitor," October 20, 2021.
3. Amazon SageMaker Python SDK Documentation: The documentation for the sagemaker.model_monitor.ModelExplainabilityMonitor class explicitly describes it as handling "Amazon SageMaker Model Monitor explainability monitoring jobs," confirming it as the correct tool for this task.
Source: AWS Documentation, Amazon SageMaker Python SDK, sagemaker.model_monitor module, ModelExplainabilityMonitor class definition.
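For reference, here is a minimal sketch of how that class is typically used to schedule feature attribution drift monitoring, following the SDK documentation and the blog post cited above. The role ARN, endpoint name, model name, S3 output path, feature headers, and the SHAP baseline row are placeholder assumptions, not values from the cited sources.

```python
# Minimal sketch: scheduling a model explainability (feature attribution drift)
# monitor with the SageMaker Python SDK. All resource names below are placeholders.
import sagemaker
from sagemaker.clarify import ModelConfig, SHAPConfig
from sagemaker.model_monitor import (
    CronExpressionGenerator,
    ExplainabilityAnalysisConfig,
    ModelExplainabilityMonitor,
)

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder
endpoint_name = "my-endpoint"                                   # placeholder
report_s3_uri = "s3://my-bucket/explainability-reports"         # placeholder

monitor = ModelExplainabilityMonitor(
    role=role,
    sagemaker_session=session,
    max_runtime_in_seconds=1800,
)

# SHAP configuration: the baseline is one or more representative feature rows
# (here a single made-up row) against which attributions are computed.
shap_config = SHAPConfig(
    baseline=[[0.5, 1.0, 2.0]],
    num_samples=100,
    agg_method="mean_abs",
)

# The shadow model that Clarify invokes to compute attributions for sampled requests.
model_config = ModelConfig(
    model_name="my-model",          # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    content_type="text/csv",
    accept_type="text/csv",
)

analysis_config = ExplainabilityAnalysisConfig(
    explainability_config=shap_config,
    model_config=model_config,
    headers=["feature_1", "feature_2", "feature_3"],  # placeholder column names
)

# Run the explainability monitoring job on captured endpoint traffic daily,
# writing feature-attribution reports to the S3 output path.
monitor.create_monitoring_schedule(
    analysis_config=analysis_config,
    endpoint_input=endpoint_name,
    output_s3_uri=report_s3_uri,
    schedule_cron_expression=CronExpressionGenerator.daily(),
)
```

Each scheduled execution writes an explainability report to the configured S3 location, which is the "periodically generates a report" behavior described in the Developer Guide quote in item 1.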