1. NVIDIA NGC Documentation: The NVIDIA GPU Cloud (NGC) catalog, which is central to NVIDIA's AI ecosystem, is built on Docker containers. The documentation states, "Containers package an application with its libraries and dependencies, providing a consistent and reproducible environment for the application to run." This directly supports Docker's role in providing a consistent, reproducible deployment environment.
Source: NVIDIA NGC Documentation, "NGC Containers User Guide," Introduction section.
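The packaging described in the quote above can be sketched as a minimal Dockerfile. This is an illustrative example, not taken from the NGC documentation; the file names (`requirements.txt`, `app.py`) and base image are assumptions:

```dockerfile
# Illustrative sketch: packaging an ML application with its
# dependencies so it runs identically on any Docker host.
FROM python:3.11-slim

WORKDIR /app

# Pinning versions in requirements.txt means every build resolves
# the same libraries -- the core of environment consistency.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself into the image.
COPY app.py .

CMD ["python", "app.py"]
```

Built once (`docker build -t ml-app .`), the resulting image carries the application and its full dependency set, so running it on a development laptop or a production server yields the same environment.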
2. University Courseware (Stanford): In Stanford's course on Machine Learning Systems Design, containerization with Docker is presented as a foundational deployment practice. The course materials emphasize that Docker solves the problem of environment consistency between development and production, which is critical for building reliable ML systems.
Source: Stanford University, CS 329S: Machine Learning Systems Design, Lecture on "Deployment & Monitoring," section on Containerization.
3. Peer-Reviewed Academic Publication: Research on reproducible computational science highlights containerization as a key technology. A paper on the topic states, "Docker allows researchers to package their code and all its dependencies into a container, which can then be shared and run on any other machine... ensuring that the computational environment is identical, thus leading to reproducible results." This principle is directly applicable to ML model deployment.
Source: Chirigati, F., et al. (2016). "ReproZip: Computational Reproducibility With Ease." Proceedings of the 2016 International Conference on Management of Data (SIGMOD '16), pp. 2087–2090. DOI: https://doi.org/10.1145/2882903.2903741 (While this paper introduces ReproZip, it extensively discusses the role and benefits of underlying container tech like Docker for reproducibility).