1. Amazon SageMaker Developer Guide - Amazon SageMaker Experiments: "Amazon SageMaker Experiments lets you organize, track, compare, and evaluate your machine learning (ML) experiments... You can track the inputs (data, parameters, and algorithms) and outputs (metrics and model artifacts) of your experiments as trials."
2. Amazon SageMaker Developer Guide - Track Experiments with the Python SDK: This section details how to use sagemaker.experiments.Run to log parameters and metrics, which is exactly the mechanism needed for custom experiment tracking. It states, "You can log any information that helps you evaluate your experiment."
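As a minimal sketch of that mechanism (assuming the SageMaker Python SDK v2.123.0 or later is installed and AWS credentials and a default region are configured; the experiment name, run name, parameter values, and metric are placeholders, not values from the cited documentation):

```python
# Minimal sketch of experiment tracking with sagemaker.experiments.Run.
# Assumes the SageMaker Python SDK (>= 2.123.0) is installed and AWS
# credentials/region are configured; all names/values are hypothetical.
from sagemaker.experiments.run import Run

with Run(experiment_name="churn-prediction",   # hypothetical experiment
         run_name="xgboost-baseline") as run:  # hypothetical run
    # Inputs: parameters and algorithm configuration
    run.log_parameter("learning_rate", 0.1)
    run.log_parameters({"max_depth": 6, "objective": "binary:logistic"})
    # Outputs: metrics recorded per training step
    run.log_metric(name="validation:auc", value=0.91, step=1)
```

Using `Run` as a context manager ensures the run is closed and its data flushed to SageMaker Experiments when the block exits, so logged inputs and outputs appear as a trial that can be compared across runs.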
3. Amazon SageMaker Developer Guide - Monitor and Analyze Training Jobs Using Metrics: This page describes SageMaker Debugger's purpose: "Use Amazon SageMaker Debugger to debug, monitor, and profile your machine learning training jobs," which is distinct from experiment tracking.
4. Amazon SageMaker Developer Guide - What Is Amazon SageMaker Model Monitor?: This document clarifies Model Monitor's role: "Amazon SageMaker Model Monitor helps you monitor machine learning (ML) models in production and notifies you when quality issues arise." This confirms it is a post-deployment tool.