1. ISACA. (2021). Auditing Artificial Intelligence. "Explainability is the ability to explain the reasoning behind an AI model’s decision in an understandable way. This is a key requirement for building trust and confidence in AI systems and is a critical component of AI governance and audit." (Page 11).
2. National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. The "Explainable and Interpretable" characteristic is a core component of trustworthy AI; the framework states that systems should deliver "accompanying evidence, such as a confidence measure, or an explanation of how a decision was made." (Section 3.3, Page 13). https://doi.org/10.6028/NIST.AI.100-1
3. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. The principle of "Transparency and Explainability" is identified as a primary convergent theme in global AI ethics guidelines, essential for accountability and audit. (Table 1, Page 391). https://doi.org/10.1038/s42256-019-0088-2