1. ISACA. (2021). Auditing Artificial Intelligence. The AI auditing framework describes a lifecycle that includes a "Monitor and Evaluate" phase, stating: "The performance of the AI solution should be monitored on an ongoing basis to ensure that it is operating as intended." A one-time review (Option C) omits this critical ongoing phase entirely.
2. Stanford University. (2021). CS329S: Machine Learning Systems Design, Lecture 8: Data and Model Monitoring. The courseware emphasizes that "the world is not stationary" and details the necessity of monitoring for drift (concept drift, data drift). It explicitly states that models must be continuously monitored post-deployment, making a one-time check a major deficiency. (Available via Stanford's public course materials.)
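The post-deployment monitoring the Stanford courseware calls for can be illustrated with a simple data-drift check: comparing a live feature sample against the training-time distribution. The sketch below is not from the cited materials; the sample data, threshold, and helper name are illustrative assumptions. It uses the two-sample Kolmogorov-Smirnov statistic, a common non-parametric measure of distribution shift.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: largest gap between the empirical CDFs."""
    data = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), data, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), data, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 2000)  # feature sample captured at training time
live_ok   = rng.normal(0.0, 1.0, 2000)  # production sample, no drift
live_bad  = rng.normal(0.7, 1.0, 2000)  # production sample with a shifted mean

THRESHOLD = 0.1  # illustrative alert threshold, tuned per feature in practice
print("no-drift KS:", ks_statistic(reference, live_ok))
print("drifted  KS:", ks_statistic(reference, live_bad))
```

Run periodically against fresh production samples, a check like this turns "ongoing monitoring" into a concrete, automatable alert rather than a one-time pre-deployment gate.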
3. Baylor, D., et al. (2017). TFX: A TensorFlow-Based Production-Scale Machine Learning Platform. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. This paper from Google engineers describes a production ML pipeline in which continuous monitoring and validation are core components, highlighting the industry best practice of ongoing performance analysis rather than a single pre-production check. (DOI: https://doi.org/10.1145/3097983.3098021)
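The ongoing performance analysis these sources advocate can be sketched as a sliding-window accuracy monitor that alerts when live accuracy drops below a floor. This is a deliberately minimal toy, not TFX's actual mechanism; the class name, window size, and threshold are all assumptions for illustration.

```python
from collections import deque

class PerformanceMonitor:
    """Sliding-window accuracy monitor; window and floor are illustrative."""

    def __init__(self, window=500, min_accuracy=0.9):
        self.outcomes = deque(maxlen=window)  # True/False per prediction
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def alert(self):
        # Fire only once the window is full, to avoid noisy early alerts.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy < self.min_accuracy)

monitor = PerformanceMonitor(window=100, min_accuracy=0.9)
for i in range(100):
    monitor.record(prediction=1, actual=1 if i % 5 else 0)  # 80% correct
print("accuracy:", monitor.accuracy, "alert:", monitor.alert())
```

Hooking such a monitor into the serving path, and routing its alerts to on-call engineers or an automated retraining trigger, is the kind of continuous validation a one-time pre-production check cannot provide.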