1. National Institute of Standards and Technology (NIST). (2023). AI Risk Management Framework (AI RMF 1.0). The "Govern" function is described as a cross-cutting activity foundational to an effective risk management program. It states, "Policies, processes, and procedures for AI risk management are established, communicated, and managed" (Section 4.1, GOVERN). This highlights policy as a primary, overarching requirement.
2. ISACA. (2023). Auditing Artificial Intelligence. The guide emphasizes the necessity of an AI governance framework, stating, "An AI governance framework should be established to ensure that AI systems are developed and used in a responsible and ethical manner... This framework should include policies, procedures and standards for AI development, deployment and use" (Chapter 2: AI Governance).
3. Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). While a broad textbook, its chapters on the societal and ethical implications of AI (e.g., Chapter 28) implicitly support the need for institutional policies to guide the responsible application of AI technology, which must precede widespread use.
4. Stanford University Human-Centered Artificial Intelligence (HAI). (2021). Building a National AI Research Resource. Reports from institutions like Stanford HAI consistently emphasize the importance of governance structures and policies to guide AI development and deployment so that it aligns with societal values and organizational goals, establishing policy as a prerequisite step.