Option A makes sense. Regular audits independently verify that vendors actually follow your ethical and security standards, which is stronger than letting them self-report or just share documentation. Pretty sure this is what most orgs would do in practice, but open to other thoughts.
Option A is right, since audits verify what the vendor is actually doing rather than taking their word for it. Self-attestation and document sharing alone aren't enough for real compliance assurance. Pretty sure this is what ISACA expects here.
Probably A, because only regular audits give you independent evidence that vendors are following your AI security and ethics policies. Self-monitoring or self-attestation (as in B or D) relies too heavily on trust, so it's a weaker control, and audits catch issues you'd otherwise miss. Pretty sure that's the ISACA logic here, but I'm open to debate!