1. NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Publication No. AI 100-1). U.S. Department of Commerce.
Reference: In the "GOVERN" function
Core subcategory GV.SC-5
the framework discusses managing risks from third-party software
hardware
and data services. It emphasizes the need for organizations to understand and manage the risks associated with the AI system's entire supply chain.
2. ENISA. (2021). Securing Machine Learning Algorithms. European Union Agency for Cybersecurity.
Reference: Section 4.2, "Threat Taxonomy," details supply chain attacks as a key threat category. It specifically identifies the compromise of training data, pre-trained models, and third-party libraries as attack vectors that require diligent assessment.
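One concrete instance of the diligence ENISA calls for is pinning and verifying a cryptographic digest for every third-party artifact before it enters the pipeline. The following is a minimal Python sketch of that idea; the model path and the pinned digest are illustrative assumptions, not values from the report (in practice the digest would come from the publisher's signed release metadata or an internal artifact registry).

```python
import hashlib
from pathlib import Path

# Illustrative values only: a real pipeline would source the pinned digest
# from the model publisher's signed release notes or an artifact registry.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
MODEL_PATH = Path("models/pretrained_encoder.bin")  # hypothetical artifact

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, pinned: str) -> None:
    """Refuse to proceed if the downloaded artifact's digest drifts."""
    actual = sha256_of(path)
    if actual != pinned:
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {pinned}, got {actual}"
        )

if __name__ == "__main__":
    if MODEL_PATH.exists():
        verify_artifact(MODEL_PATH, PINNED_SHA256)
        print("Model artifact digest matches the pinned value.")
```

The same pattern extends to dataset snapshots and third-party library wheels, covering all three vectors the taxonomy names.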
3. Carlini, N., et al. (2021). Extracting Training Data from Large Language Models. In Proceedings of the 30th USENIX Security Symposium.
Reference: This paper demonstrates that large language models memorize portions of their training data and can be induced to emit it verbatim, so a model obtained as a third-party component can leak sensitive training data. This underscores the critical need to evaluate the security properties of all components in the GenAI supply chain, including the models themselves. (Available via USENIX proceedings and academic archives.)
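The paper's actual methodology samples many model completions and ranks them with membership-inference-style metrics; the sketch below is a much cruder, self-contained illustration of the underlying idea using planted "canary" strings. The generate function here is a toy stand-in for a real LLM completion call, and the canary values are invented for the example.

```python
# Simplified canary-style probe: plant known secrets in training data,
# then check whether the trained model reproduces them verbatim.
CANARIES = [
    "the canary credit card number is 4111-1111-1111-1111",
    "the canary api key is sk-test-000000",
]

# Toy stand-in for the model under test: it has memorized its training
# lines, which is exactly the failure mode the probe is meant to detect.
_TRAINING_LINES = CANARIES

def generate(prefix: str) -> str:
    """Return the toy model's continuation of `prefix`. Replace with a
    real LLM completion call when evaluating an actual model."""
    for line in _TRAINING_LINES:
        if line.startswith(prefix):
            return line[len(prefix):]
    return ""

def probe_for_canaries(canaries: list[str], prefix_words: int = 4) -> list[str]:
    """Prompt with the first few words of each canary; flag canaries whose
    remainder comes back verbatim in the completion."""
    leaked = []
    for canary in canaries:
        words = canary.split()
        prefix = " ".join(words[:prefix_words])
        suffix = " ".join(words[prefix_words:])
        if suffix and suffix in generate(prefix):
            leaked.append(canary)
    return leaked

print(probe_for_canaries(CANARIES))  # both canaries leak from the toy model
```

A real evaluation would sample many completions per prefix and use likelihood-based signals rather than exact string matching, but even this toy version shows why a third-party model's memorization behavior is itself a supply-chain property worth testing.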
4. MIT OpenCourseWare. (2021). 6.S898: Deep Learning.
Reference: Lectures on "Adversarial Attacks & Defenses" often cover data poisoning and backdoor attacks. These topics inherently address the security of the AI supply chain, emphasizing that the integrity of training data and pre-trained models (often third-party) cannot be assumed and must be verified.
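As a small illustration of what "verify, don't assume" can mean for training data, the sketch below implements one crude integrity heuristic: flagging identical inputs that appear under conflicting labels, a pattern consistent with label-flipping poisoning. The dataset layout (raw bytes paired with an integer label) is an assumption made for the example, and this is a screening aid rather than a defense against the stealthier backdoor attacks such lectures cover.

```python
import hashlib
from collections import defaultdict

def find_conflicting_duplicates(dataset: list[tuple[bytes, int]]) -> list[str]:
    """Flag inputs that appear more than once with different labels --
    a cheap heuristic for label-flipping poisoning, not a full defense."""
    labels_by_hash: dict[str, set[int]] = defaultdict(set)
    for raw_input, label in dataset:
        labels_by_hash[hashlib.sha256(raw_input).hexdigest()].add(label)
    return [h for h, labels in labels_by_hash.items() if len(labels) > 1]

# Toy usage: the same input bytes appear under two different labels.
data = [(b"img-001", 0), (b"img-002", 1), (b"img-001", 1)]
print(find_conflicting_duplicates(data))  # one flagged input hash
```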