1. University Courseware:
Ng, A., & Niu, G. (2020). CS229 Machine Learning Course Notes: Model Selection and Cross-Validation. Stanford University. In Section 2, "Cross-validation," the document states, "To get a better estimate of generalization error, we can use a procedure called cross validation... In k-fold cross validation, we partition the training set into k disjoint subsets." This directly supports using cross-validation to estimate generalization error without reserving a dedicated test set; a sketch of the procedure follows.
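As an illustration of the procedure the notes describe, here is a minimal Python sketch of k-fold cross-validation. The train_fn and error_fn callables are hypothetical placeholders for model fitting and evaluation; they are not part of the cited notes.

    import random

    def k_fold_cv_error(data, k, train_fn, error_fn, seed=0):
        """Estimate generalization error by k-fold cross-validation."""
        data = list(data)
        random.Random(seed).shuffle(data)
        # Partition the training set into k disjoint subsets (folds).
        folds = [data[i::k] for i in range(k)]
        errors = []
        for j in range(k):
            held_out = folds[j]
            train = [x for i, f in enumerate(folds) if i != j for x in f]
            model = train_fn(train)                    # fit on the other k-1 folds
            errors.append(error_fn(model, held_out))   # evaluate on the held-out fold
        # The average held-out error is the cross-validation estimate.
        return sum(errors) / k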
2. Peer-reviewed Academic Publications:
Arlot, S., & Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4, 40-79. https://doi.org/10.1214/09-SS054. The paper's introduction (Section 1, p. 41) describes cross-validation as a method "to estimate the risk of an estimator," that is, to assess how the estimator will perform on new data.
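For concreteness, the k-fold form of this risk estimate can be written as follows (the notation here is ours, not the paper's): with folds $B_1, \dots, B_k$ partitioning the sample, $\hat{f}^{(-j)}$ the estimator trained without fold $B_j$, and $\ell$ a loss function,

$$\hat{R}_{\mathrm{CV}} = \frac{1}{k} \sum_{j=1}^{k} \frac{1}{|B_j|} \sum_{i \in B_j} \ell\big(\hat{f}^{(-j)}(x_i),\, y_i\big).$$

Because the folds partition the sample, every observation is used exactly once for validation, and the average over folds is the cross-validation estimate of the risk.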
3. Official Vendor Documentation:
NVIDIA TAO Toolkit 5.2.0 Documentation. (2024). DetectNetv2: K-Fold Cross-Validation for Object Detection. The section "K-fold cross-validation workflow" describes splitting a dataset into k folds that alternately serve as training and validation sets. This is the core mechanism for evaluating models when a dedicated test set is not reserved for every training iteration; a generic sketch of the splitting step follows.
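As a generic illustration of this splitting workflow (using scikit-learn's KFold rather than the TAO Toolkit's own tooling, and a toy array in place of an object-detection dataset):

    from sklearn.model_selection import KFold
    import numpy as np

    X = np.arange(20).reshape(10, 2)  # toy dataset: 10 samples, 2 features
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
        # Each iteration uses k-1 folds for training and 1 fold for validation.
        print(f"fold {fold}: train={train_idx.tolist()} val={val_idx.tolist()}")

Iterating over all k splits in this way ensures each sample is held out for validation exactly once, mirroring the workflow the documentation describes.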