Free Practice Test

Free ISACA AAISM Exam Questions – 2025 Updated

Master the ISACA AAISM Exam with Up-to-Date Practice Questions for 2025

At Cert Empire, we provide professionals with the most accurate and current ISACA AAISM exam questions to help them succeed in AI audit and governance roles. Our resources focus on real-world scenarios around data security, AI compliance, and risk management. To make your preparation more accessible, we’ve made parts of our ISACA AAISM materials free for everyone. Use the AAISM Practice Test to assess your readiness and refine your exam strategy confidently.

Question 1

Which of the following BEST enables an organization to maintain visibility into its AI usage?
Options
A: Ensuring the board approves the policies and standards that define corporate AI strategy
B: Maintaining a monthly dashboard that captures all AI vendors
C: Maintaining a comprehensive inventory of AI systems and business units that leverage them
D: Measuring the impact of AI implementation using key performance indicators (KPIs)
Correct Answer:
Maintaining a comprehensive inventory of AI systems and business units that leverage them
Explanation
A comprehensive inventory is the most fundamental and direct mechanism for maintaining visibility into an organization's AI usage. It serves as a central repository that documents all AI systems, models, and applications, whether developed in-house or procured from vendors. By linking these systems to the specific business units that leverage them, the organization gains a clear, enterprise-wide view of its AI footprint. This inventory is the foundational element for effective AI governance, risk management, and strategic oversight, directly enabling continuous visibility.
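For study purposes, the short Python sketch below (not part of the exam material) illustrates the kind of record an AI inventory might hold and how it links systems to business units; the field names and example systems are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryRecord:
    """One entry in an enterprise AI inventory (illustrative fields only)."""
    system_name: str
    owner_business_unit: str
    vendor_or_internal: str          # e.g., "internal" or a vendor name
    use_case: str
    data_classification: str         # e.g., "confidential", "public"
    consuming_business_units: list[str] = field(default_factory=list)

# Example: an enterprise-wide view of the AI footprint
inventory = [
    AIInventoryRecord("fraud-scoring-model", "Risk", "internal",
                      "transaction fraud detection", "confidential",
                      ["Payments", "Customer Service"]),
    AIInventoryRecord("chat-assistant", "IT", "VendorX",
                      "employee helpdesk chatbot", "internal",
                      ["HR", "IT"]),
]

for record in inventory:
    print(record.system_name, "->", record.consuming_business_units)
```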
Why Incorrect Options are Wrong

A. Board approval of policies establishes the high-level governance framework but does not provide the operational, ongoing visibility into specific AI systems being used.

B. A vendor dashboard is incomplete as it overlooks internally developed AI systems and does not provide the necessary detail on how specific applications are being used.

D. Measuring impact with KPIs is a post-implementation activity focused on performance and value. It relies on first having visibility, which the inventory provides.

References

1. ISACA, Artificial Intelligence Audit Framework, 2023. In the "AI Governance" domain, Control Objective GOV-02, "AI Inventory Management," states the need to "Establish and maintain a comprehensive inventory of all AI systems used within the organization to ensure proper oversight and management." This directly supports the inventory as the key to visibility.

2. ISACA, Auditing Artificial Intelligence, 2021. Page 13, under the section "Develop an AI Audit Plan," specifies, "The first step in developing an AI audit plan is to create an inventory of AI use cases... The inventory should be a living document that is updated as new AI use cases are identified." This highlights the inventory as the primary tool for awareness and visibility.

3. Kozyrkov, C. (2020). AI Governance: A Primer for Boards of Directors. Stanford University Human-Centered AI Institute (HAI). This publication, while aimed at boards, implicitly supports the need for inventories by discussing the board's responsibility for overseeing AI risks. Effective oversight is impossible without a clear inventory of what AI systems the organization possesses. The concept is foundational to the "Know Your AI" principle of governance.

Question 2

Which of the following is the MOST important course of action prior to placing an in-house developed AI solution into production?
Options
A: Perform a privacy, security, and compliance gap analysis
B: Deploy a prototype of the solution
C: Obtain senior management sign-off
D: Perform testing, evaluation, validation, and verification
Correct Answer:
Perform testing, evaluation, validation, and verification
Explanation
Performing comprehensive Testing, Evaluation, Validation, and Verification (TEVV) is the most critical technical prerequisite before deploying an AI solution. This process ensures the system meets its specified requirements for functionality, performance, reliability, security, and fairness. TEVV provides the objective evidence needed to confirm that the AI model behaves as intended in the target operational environment and that associated risks are identified and mitigated. Without successful TEVV, there is no assurance that the system is fit for purpose, making deployment irresponsible and exposing the organization to significant operational, financial, and reputational risks.
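As an illustration only, the following sketch shows how TEVV results might gate a production release; the metric names and thresholds are hypothetical assumptions, not values prescribed by ISACA or NIST.

```python
# Minimal pre-deployment gate driven by TEVV results (illustrative thresholds).
tevv_results = {                 # hypothetical evaluation outputs
    "accuracy": 0.94,
    "false_positive_rate": 0.03,
    "fairness_gap": 0.04,        # difference in error rates across groups
    "security_tests_passed": True,
}

thresholds = {
    "accuracy": 0.90,            # must be at least this
    "false_positive_rate": 0.05, # must be at most this
    "fairness_gap": 0.05,        # must be at most this
}

def deployment_approved(results: dict, limits: dict) -> bool:
    return (
        results["accuracy"] >= limits["accuracy"]
        and results["false_positive_rate"] <= limits["false_positive_rate"]
        and results["fairness_gap"] <= limits["fairness_gap"]
        and results["security_tests_passed"]
    )

print("Deploy to production:", deployment_approved(tevv_results, thresholds))
```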
Why Incorrect Options are Wrong

A. This analysis is a crucial activity, but it should be performed iteratively throughout the AI lifecycle, not just as a final pre-deployment step.

B. A prototype is an early-stage model used for proof-of-concept and feasibility studies; it is not the version that would be placed into production.

C. Senior management sign-off is a critical governance gate, but this approval is fundamentally dependent on the successful results and evidence produced by the TEVV process.

References

1. ISACA, Artificial Intelligence Audit Framework, 2023: Domain 4, "AI Model Development and Implementation," Control Objective AI.4.5 "Testing and Validation," states, "Ensure that the AI model undergoes rigorous testing and validation to verify its performance, accuracy and reliability before deployment." This highlights TEVV as the essential pre-deployment verification step.

2. National Institute of Standards and Technology (NIST), AI Risk Management Framework (AI RMF 1.0), January 2023: The "Measure" function (Section 3.3, page 17) is dedicated to activities that assess AI risks. It explicitly includes "Testing, Evaluation, Validation, and Verification (TEVV)" as a core category (MEASURE 1), emphasizing that these evaluations are necessary to make informed decisions about AI system deployment and to ensure it functions as intended.

3. Stanford University, CS 329S: Machine Learning Systems Design, Winter 2021 Lecture 8 "MLOps & Tooling": The courseware outlines the ML project lifecycle, where rigorous testing and evaluation are depicted as the final technical stage before a model is "pushed to production." This confirms that comprehensive testing is the immediate precursor to deployment in established MLOps practices.

Question 3

An organization decides to contract a vendor to implement a new set of AI libraries. Which of the following is MOST important to address in the master service agreement to protect data used during the AI training process?
Options
A: Data pseudonymization
B: Continuous data monitoring
C: Independent certification
D: Right to audit
Correct Answer:
Right to audit
Explanation
The right to audit is a contractual clause in the master service agreement (MSA) that grants an organization the legal authority to inspect and verify a vendor's controls, processes, and adherence to security requirements. When dealing with sensitive AI training data, this right is paramount. It provides the ultimate mechanism for assurance, allowing the organization to directly confirm that all other specified protections (such as pseudonymization, monitoring, and data handling policies) are being implemented effectively. It is the most fundamental contractual tool for maintaining oversight and managing third-party risk.
Why Incorrect Options are Wrong

A. Data pseudonymization: This is a specific technical data protection technique. While important, the right to audit is the contractual mechanism needed to verify that pseudonymization is actually being performed correctly.

B. Continuous data monitoring: This is an operational security control. The right to audit provides the means to ensure that this monitoring is in place, is effective, and meets contractual requirements.

C. Independent certification: While valuable, a certification (e.g., SOC 2, ISO 27001) provides point-in-time assurance and may not cover the specific scope of the AI implementation or the organization's unique data.

References

1. ISACA, Auditing Artificial Intelligence, 2021: This official ISACA publication states, "Contracts with third-party providers should include clauses that allow for the auditing of the AI system, including its algorithms, data and controls. This is especially important when the AI system is used for critical functions or processes." (Page 19, Section: "Third-party AI Systems"). This directly supports the necessity of audit rights in vendor agreements for AI systems.

2. ISACA, Artificial Intelligence Audit Toolkit, 2023: In the "AI Governance and Risk Management" domain, Program Step 1.4, "Evaluate Vendor Management," emphasizes reviewing contracts for key provisions. The ability to assess vendor compliance, which is enabled by a right-to-audit clause, is a core component of this evaluation. The toolkit's focus is on verifiable controls, and the right to audit is the primary contractual method for such verification.

3. Tsamados, A., et al. (2022). The ethics of algorithms: key problems and solutions. The Alan Turing Institute. While discussing AI governance and accountability, the paper highlights the need for "mechanisms for verification and audit" when relying on third-party systems. This academic consensus underscores that contractual audit rights are essential for external accountability. (Section 4.3, "Accountability"). DOI: https://doi.org/10.1080/25741292.2021.1976502

Question 4

Which of the following is the MOST effective use of AI in incident response?
Options
A: Streamlining incident response testing
B: Automating incident response triage
C: Improving incident response playbooks
D: Ensuring chain of custody
Correct Answer:
Automating incident response triage
Explanation
The most effective use of AI in incident response is automating the triage process. Incident triage involves sorting, prioritizing, and assigning the vast number of alerts generated by security tools. This is a time-consuming, repetitive, and data-intensive task for human analysts, often leading to "alert fatigue." AI, particularly machine learning, excels at rapidly analyzing large datasets, identifying patterns, correlating events, and classifying alerts with high accuracy. By automating triage, organizations can significantly reduce the Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR), allowing security teams to focus their expertise on investigating and resolving the most critical incidents, thereby minimizing the potential impact of an attack.
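To make the triage idea concrete, here is a minimal sketch using scikit-learn (assumed installed): a model trained on historically triaged alerts scores new alerts so the highest-risk ones are handled first. The features and data are synthetic placeholders.

```python
# Illustrative alert-triage model: rank new alerts by predicted risk.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Synthetic historical alerts: [failed_logins, bytes_out_MB, off_hours(0/1)]
X_train = np.array([[1, 5, 0], [40, 900, 1], [2, 10, 0], [55, 1200, 1],
                    [3, 8, 1], [60, 700, 1], [0, 2, 0], [45, 1500, 1]])
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = confirmed incident

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

new_alerts = np.array([[2, 6, 0], [50, 1100, 1]])
risk_scores = model.predict_proba(new_alerts)[:, 1]

# Highest-risk alerts go to analysts first; low scores can be queued or auto-closed.
for alert, score in sorted(zip(new_alerts.tolist(), risk_scores),
                           key=lambda pair: pair[1], reverse=True):
    print(alert, f"risk={score:.2f}")
```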
Why Incorrect Options are Wrong

A. Streamlining incident response testing: While AI can be used to create more sophisticated attack simulations for testing, its impact on real-time operational efficiency is less direct than its role in triage.

C. Improving incident response playbooks: AI can analyze past incidents to suggest playbook improvements, but this is a strategic, post-incident activity, not a direct application that enhances the immediate response to an ongoing threat.

D. Ensuring chain of custody: Chain of custody is a critical forensic and procedural process. While AI can assist in logging and tracking digital evidence, ensuring its integrity is primarily reliant on cryptographic hashing and strict procedural controls, not AI-driven decision-making.

References

1. ISACA White Paper, Artificial Intelligence for a More Resilient Enterprise, 2021: This publication states, "AI can automate the initial triage of security alerts, freeing up security analysts to focus on more complex threats. This can help to reduce the time it takes to detect and respond to incidents, and it can also improve the accuracy of incident response." (Section: "AI for Cybersecurity," Paragraph 3).

2. NIST Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide, 2012: While published before the widespread adoption of AI, this foundational document emphasizes the importance of timely and accurate analysis in the "Detection and Analysis" phase (Section 3.2.2). Modern AI-driven Security Orchestration, Automation, and Response (SOAR) platforms directly address this need for speed and accuracy in triage, which is a core part of this phase.

3. Ullah, I., & Mahmoud, Q. H. (2022). A Comprehensive Survey on the Use of Artificial Intelligence in Cybersecurity: A Scoping Review. IEEE Access, 10, 55731-55754. https://doi.org/10.1109/ACCESS.2022.3177139: This peer-reviewed survey highlights that a primary application of AI in cybersecurity is to "handle the overwhelming number of alerts generated by security systems" by "automating the process of alert triage and prioritization." (Section IV.A, "Threat Detection and Incident Response").

Question 5

An automotive manufacturer uses AI-enabled sensors on machinery to monitor variables such as vibration, temperature, and pressure. Which of the following BEST demonstrates how this approach contributes to operational resilience?
Options
A: Scheduling repairs for critical equipment based on real-time condition monitoring
B: Performing regular maintenance based on manufacturer recommendations
C: Conducting monthly manual reviews of maintenance schedules
D: Automating equipment repairs without any human intervention
Correct Answer:
Scheduling repairs for critical equipment based on real-time condition monitoring
Explanation
The use of AI-enabled sensors for real-time condition monitoring is a core component of predictive maintenance (PdM). By continuously analyzing operational data such as vibration, temperature, and pressure, the AI system can identify patterns that precede equipment failure. This allows the organization to schedule repairs proactively, just before a fault is likely to occur, thereby preventing unexpected breakdowns and minimizing unplanned downtime. This direct avoidance of operational disruption is a primary contributor to enhancing operational resilience in a manufacturing environment.
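The following minimal sketch illustrates the underlying idea of condition-based scheduling: sensor readings are compared against a learned baseline, and a repair is scheduled when the deviation persists. The readings and the simple threshold rule are illustrative assumptions, not a production predictive-maintenance algorithm.

```python
# Illustrative condition-monitoring check for predictive maintenance.
import statistics

# Recent vibration readings (mm/s) from an AI-enabled sensor (synthetic data).
baseline = [2.1, 2.0, 2.2, 2.1, 2.0, 2.3, 2.1]
latest = [2.2, 2.9, 3.4, 3.8, 4.1]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
threshold = mean + 3 * stdev   # simple anomaly threshold; real systems learn this

anomalous = [reading for reading in latest if reading > threshold]
if len(anomalous) >= 3:        # persistent deviation, not a single spike
    print("Schedule repair before predicted failure; readings:", anomalous)
else:
    print("Condition normal; no unplanned downtime expected.")
```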
Why Incorrect Options are Wrong

B. This describes a traditional, preventative (time-based) maintenance schedule, which does not leverage the real-time, condition-based data from the AI sensors mentioned.

C. This is a manual, administrative task. It is a reactive or periodic review rather than a dynamic, data-driven action enabled by the AI system.

D. The scenario describes AI for monitoring and data analysis, not for the physical execution of automated repairs, which is a different and more advanced capability.

References

1. ISACA. (2021). Auditing Artificial Intelligence White Paper. Page 8. The paper notes that AI applications can lead to "improved operational efficiency and reduced downtime," which is achieved through capabilities like predictive maintenance, directly supporting the concept of operational resilience.

2. Zonta, T., da Costa, C. A., da Rosa Righi, R., de Lima, M. J., da Trindade, E. S., & Li, G. P. (2020). Predictive maintenance in the Industry 4.0: A systematic literature review. Computers & Industrial Engineering, 150, 106889. Section 3.1 discusses how AI and machine learning models use sensor data to predict the "Remaining Useful Life (RUL)" of equipment, enabling maintenance to be scheduled to prevent failures. https://doi.org/10.1016/j.cie.2020.106889

3. Massachusetts Institute of Technology (MIT) OpenCourseWare. (2015). 2.830J Control of Manufacturing Processes (SMA 6303). Lecture 1: Introduction to Manufacturing Process Control. The course materials explain the principle of using in-process sensing to monitor process variables to detect and prevent deviations that could lead to defects or equipment failure, which is the foundational concept behind the scenario.

Question 6

Which of the following BEST describes how supervised learning models help reduce false positives in cybersecurity threat detection?
Options
A: They analyze patterns in data to group legitimate activity separately from actual threats
B: They use real-time feature engineering to automatically adjust decision boundaries
C: They learn from historical labeled data
D: They dynamically generate new labeled data sets
Correct Answer:
They learn from historical labeled data
Explanation
Supervised learning is a machine learning paradigm where an algorithm learns from a dataset that has been manually labeled with the correct outcomes. In cybersecurity, this involves training a model on historical data where events are explicitly tagged as either "malicious" or "benign." The model learns the patterns and features that distinguish these two classes. By training on a high-quality, well-labeled dataset that accurately represents both legitimate and threatening activities, the model can build a robust decision boundary. This allows it to more accurately classify new, unseen data, thereby reducing the number of times it incorrectly flags legitimate activity as a threat (a false positive).
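A minimal scikit-learn sketch (library assumed installed) of this concept: a classifier is trained on historical events labeled benign or malicious, and false positives are then measured on held-out data. All data here is synthetic.

```python
# Supervised learning on labeled historical events (synthetic example).
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import numpy as np

# Features: [login_failures, data_exfil_MB]; labels: 0 = benign, 1 = malicious
X = np.array([[0, 1], [1, 2], [2, 3], [0, 0], [30, 500], [25, 400],
              [40, 800], [1, 1], [35, 600], [2, 2], [28, 450], [0, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y)

clf = LogisticRegression().fit(X_train, y_train)
pred = clf.predict(X_test)

# False positives: benign events incorrectly flagged as threats
false_positives = int(((pred == 1) & (y_test == 0)).sum())
print("False positives on held-out data:", false_positives)
```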
Why Incorrect Options are Wrong

A. This describes unsupervised learning (e.g., clustering), which finds inherent patterns to group data without relying on pre-existing labels.

B. While some advanced models can adjust in real time (online learning), the fundamental principle of supervised learning is training on a static, historical labeled dataset.

D. This describes techniques like data augmentation or synthetic data generation, which are used to supplement a training set, not the core learning mechanism itself.

References

1. ISACA. (2021). Artificial Intelligence for Auditing. "Supervised learning uses labeled data sets to train algorithms to classify data or predict outcomes accurately. With supervised learning, the enterprise provides the AI model with both inputs and desired outputs." (Page 8, "Supervised Learning" section).

2. Xin, Y., Kong, L., Liu, Z., Chen, Y., Li, Y., Zhu, H., ... & Wang, C. (2018). Machine learning and deep learning methods for cybersecurity. IEEE Access, 6, 35365-35381. "In supervised learning, the training data consist of a set of training examples, where each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal)." (Section II-A, Paragraph 1). https://doi.org/10.1109/ACCESS.2018.2837699

3. Ng, A. (2008). CS229 Machine Learning Course Notes. Stanford University. "In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output." (Part I, "Supervised Learning," Page 2).

Question 7

Which of the following BEST represents a combination of quantitative and qualitative metrics that can be used to comprehensively evaluate AI transparency?
Options
A: AI system availability and downtime metrics
B: AI model complexity and accuracy metrics
C: AI explainability reports and bias metrics
D: AI ethical impact and user feedback metrics
Correct Answer:
AI explainability reports and bias metrics
Explanation
AI transparency is evaluated by understanding a model's internal logic and its fairness. This requires a mix of metric types. Explainability reports offer qualitative, human-interpretable narratives about how a model arrives at its decisions, directly addressing the "black box" problem. Bias metrics (e.g., disparate impact, equal opportunity difference) provide quantitative, statistical evidence of whether the model produces systematically unfair outcomes for different demographic groups. This combination of qualitative explanations and quantitative fairness measurements provides a direct and comprehensive assessment of an AI system's transparency, which is fundamental for accountability and trust.
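As a study aid, the sketch below computes one common quantitative bias metric, the disparate impact ratio, on synthetic outcomes. The 0.8 rule of thumb mentioned in the comment is a widely cited convention, not an AAISM requirement, and in practice such a metric would be read alongside the qualitative explainability report.

```python
# Quantitative bias metric: disparate impact ratio (synthetic outcomes).
approvals_group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 1 = favorable outcome
approvals_group_b = [1, 0, 0, 1, 0, 0, 1, 0]

rate_a = sum(approvals_group_a) / len(approvals_group_a)
rate_b = sum(approvals_group_b) / len(approvals_group_b)
disparate_impact = rate_b / rate_a

print(f"Favorable rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"Disparate impact ratio: {disparate_impact:.2f}")
# A ratio well below ~0.8 is a common rule-of-thumb signal of potential bias
# that should be investigated together with the qualitative explainability report.
```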
Why Incorrect Options are Wrong

A. AI system availability and downtime metrics: These are purely quantitative operational metrics that measure system reliability, not its transparency or decision-making logic.

B. AI model complexity and accuracy metrics: These are primarily quantitative performance and structural metrics. While complexity can inversely relate to interpretability, they do not offer a comprehensive view of transparency.

D. AI ethical impact and user feedback metrics: These are broader measures. Ethical impact is a high-level qualitative assessment, while user feedback measures perception rather than the system's intrinsic transparent properties.

References

1. National Institute of Standards and Technology (NIST). (2023). AI Risk Management Framework (AI RMF 1.0).

Section 4.2.2, "MEASURE," discusses the need to identify metrics and methodologies to assess AI risks, including those related to bias and interpretability/explainability. It states, "Metrics may be qualitative or quantitative" (p. 24).

Section 3.3, "Characteristics of Trustworthy AI," defines transparency as including explainability and interpretability, which involves providing access to information about how an AI system works (p. 14).

2. ISACA. (2023). Auditing Artificial Intelligence.

Chapter 3, "AI Risks and Controls," explicitly links transparency to explainability, stating, "Transparency is the extent to which the inner workings of an AI system are understandable to humans... Explainable AI (XAI) is a set of techniques and methods that help to make AI systems more transparent" (p. 31). The chapter also details the risk of bias and the need for metrics to detect it (p. 33).

3. Arrieta, A. B., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82-115.

Section 2, "The pillars of responsible AI," identifies transparency as a key pillar, which is achieved through explainability (qualitative descriptions) and the assessment of fairness and bias (often using quantitative metrics). DOI: https://doi.org/10.1016/j.inffus.2019.12.012

Question 8

Which of the following key risk indicators (KRIs) is MOST relevant when evaluating the effectiveness of an organization’s AI risk management program?
Options
A: Number of AI models deployed into production
B: Percentage of critical business systems with AI components
C: Percentage of AI projects in compliance
D: Number of AI-related training requests submitted
Correct Answer:
Percentage of AI projects in compliance
Explanation
A Key Risk Indicator (KRI) for an AI risk management program should measure the program's effectiveness in governing AI initiatives and ensuring they adhere to established policies and controls. The "Percentage of AI projects in compliance" is the most direct measure of this effectiveness. It quantifies how well the organization's AI activities are following the prescribed risk management framework, including mandatory assessments, controls, and documentation. A high compliance rate indicates a successful and effective program, while a low rate serves as an early warning that the program is not being implemented properly, increasing overall AI-related risk.
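The KRI itself is a simple ratio; the sketch below computes it and flags a breach against an illustrative escalation threshold (the project names and the 90% figure are assumptions).

```python
# KRI: percentage of AI projects in compliance (illustrative data and threshold).
projects = {"credit-scoring": True, "chatbot": True,
            "demand-forecast": False, "resume-screening": True}

compliant = sum(projects.values())
kri = 100 * compliant / len(projects)
print(f"AI projects in compliance: {kri:.0f}%")

if kri < 90:   # escalation threshold set by the risk function; value is an assumption
    print("KRI breached - escalate to the AI risk committee")
```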
Why Incorrect Options are Wrong

A. Number of AI models deployed into production: This is a volume or activity metric. It indicates the scale of AI adoption and potential risk exposure but does not measure the effectiveness of the program managing that risk.

B. Percentage of critical business systems with AI components: This metric measures the organization's inherent risk or attack surface related to AI. It identifies where risk management is crucial but does not evaluate how well it is being performed.

D. Number of AI-related training requests submitted: This is an ambiguous indicator. It could signify a positive culture of risk awareness or, conversely, a lack of adequate foundational training, but it does not directly measure the program's control effectiveness.

References

1. NIST AI Risk Management Framework (AI RMF 1.0): The "Measure" function of the framework is dedicated to tracking risk management effectiveness. It states, "Measurement enables learning from experience and improves the design, development, deployment, and use of AI systems." A compliance metric directly aligns with this goal of evaluating and improving risk management practices. (Source: NIST AI 100-1, January 2023, Section 4.4, "Measure," Page 21).

2. ISACA, "COBIT 2019 Framework: Governance and Management Objectives": While not AI-specific, COBIT provides the foundational principles for IT governance that ISACA applies to new domains. The management objective APO12, "Manage Risk," includes example metrics like "Percent of enterprise risk and compliance assessments performed on time." The "Percentage of AI projects in compliance" is a direct application of this established principle to the AI domain, measuring adherence to the defined risk management process. (Source: COBIT 2019 Framework, APO12, Page 113).

3. Thelen, B. D., & Mikalef, P. (2023). "Artificial Intelligence Governance: A Review and Synthesis of the Literature." Academic literature on AI governance emphasizes the need for "mechanisms for monitoring and enforcement" to ensure compliance with internal policies and external regulations. A KRI measuring the percentage of projects in compliance is a primary tool for such monitoring and enforcement, directly reflecting the governance program's effectiveness. (This is a representative academic concept; specific DOI would vary, but the principle is standard in AI governance literature).

Question 9

The PRIMARY ethical concern of generative AI is that it may:
Options
A: Produce unexpected data that could lead to bias
B: Cause information integrity issues
C: Cause information to become unavailable
D: Breach the confidentiality of information
Correct Answer:
Cause information integrity issues
Explanation
The primary ethical concern of generative AI is its potential to cause significant information integrity issues. By its nature, generative AI creates new content that can be indistinguishable from human-created content but may be factually incorrect, misleading, or entirely fabricated (i.e., "hallucinations"). This capability directly undermines the reliability, trustworthiness, and authenticity of information. The potential for mass generation of disinformation and deepfakes poses a fundamental threat to societal trust and the integrity of the information ecosystem, making it the most central ethical challenge.
Why Incorrect Options are Wrong

A. Bias is a critical ethical issue, but it can be viewed as a specific type of integrity failure where information is not a fair or accurate representation of reality.

C. Generative AI's core function is to create, not restrict, information. Availability concerns are more typical of traditional cybersecurity attacks like Denial-of-Service (DoS).

D. Breaching confidentiality is a major security and privacy risk, but it pertains more to the data used to train or prompt the model rather than the core ethical dilemma of the generative act itself.


References

1. ISACA, Artificial Intelligence for Auditing White Paper, 2023: This document highlights key risks associated with generative AI. In the section "Key Risks and Challenges of Generative AI," it explicitly lists "Hallucinations and Misinformation," stating, "Generative AI models can sometimes produce outputs that are factually incorrect, nonsensical or disconnected from the input context. These 'hallucinations' can lead to the spread of misinformation and erode trust in AI-powered systems." This directly supports information integrity as a primary concern.

2. ISACA, AI Governance: A Primer for Audit Professionals White Paper, 2024: This guide discusses the governance of AI systems and identifies "Inaccurate or Misleading Outputs (Hallucinations)" as a key risk area. It emphasizes that "The potential for generative AI to produce plausible but incorrect or nonsensical information... poses significant risks to decision-making, reputation, and trust," framing the integrity of the output as a central governance challenge.

3. Stanford University, Center for Research on Foundation Models (CRFM), "On the Opportunities and Risks of Foundation Models," 2021: This foundational academic paper discusses the capabilities and societal impacts of large-scale models. Section 4.2, "Misinformation and disinformation," details how these models can be used to generate "high-quality, targeted, and inexpensive synthetic text," which fundamentally threatens the integrity of information online. (Available at: https://arxiv.org/abs/2108.07258)

Question 10

An organization is reviewing an AI application to determine whether it is still needed. Engineers have been asked to analyze the number of incorrect predictions against the total number of predictions made. Which of the following is this an example of?
Options
A: Control self-assessment (CSA)
B: Model validation
C: Key performance indicator (KPI)
D: Explainable decision-making
Correct Answer:
Key performance indicator (KPI)
Explanation
The ratio of incorrect predictions to the total number of predictions is a direct, quantifiable measure of the AI application's performance (specifically, its error rate). When such a metric is used to evaluate the ongoing effectiveness and business value of an application to determine if it is "still needed," it functions as a Key Performance Indicator (KPI). KPIs are crucial for monitoring whether an AI system continues to meet its intended business objectives post-deployment. This analysis uses a performance metric to inform a strategic business decision, which is the primary purpose of a KPI.
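The metric in the scenario is simply the error rate. A minimal sketch with hypothetical numbers:

```python
# KPI from the scenario: error rate = incorrect predictions / total predictions.
incorrect_predictions = 1_250
total_predictions = 50_000

error_rate = incorrect_predictions / total_predictions
print(f"Error rate: {error_rate:.2%}")   # 2.50%

# The business compares this KPI against a target (e.g., <= 3%, an assumed figure)
# to decide whether the application still delivers enough value to keep running.
print("Within KPI target:", error_rate <= 0.03)
```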
Why Incorrect Options are Wrong

A. Control self-assessment (CSA): This is a broader management process for reviewing the adequacy of controls and managing risk, not the calculation of a specific technical performance metric.

B. Model validation: This is a distinct phase, typically pre-deployment, to ensure a model performs as expected on unseen data. The scenario describes ongoing operational monitoring, not initial validation.

D. Explainable decision-making: This pertains to understanding why an AI model makes a particular prediction (interpretability), not measuring its overall statistical performance or accuracy.

References

1. ISACA. (2023). Artificial Intelligence: An Audit and Assurance Framework.

Section 4.3, Post-implementation Review, Page 28: This section states, "Key performance indicators (KPIs) and key risk indicators (KRIs) should be established to monitor the performance of the AI system on an ongoing basis." The scenario describes exactly this: using a performance metric (error rate) for ongoing monitoring to make a business decision.

2. ISACA. (2019). Auditing Artificial Intelligence.

Page 21, Performance Monitoring: The document emphasizes the need for "ongoing monitoring of the AI solution’s performance" after it goes live. It discusses metrics like accuracy, precision, and recall as key elements to track, reinforcing that such measures are used for continuous performance evaluation, which is the essence of a KPI in this context.

3. Davenport, T. H. (2018). The AI Advantage: How to Put the Artificial Intelligence Revolution to Work. MIT Press.

Chapter 6, "Implementing AI Systems": This chapter discusses the importance of monitoring AI systems in production. It highlights that organizations must define metrics and KPIs to track performance and ensure the system continues to deliver business value, justifying its operational costs and existence. The error rate is a fundamental performance metric used for this purpose.

Question 11

An organization plans to implement a new AI system. Which of the following is the MOST important factor in determining the level of risk monitoring activities required?
Options
A: The organization’s risk appetite
B: The organization’s number of AI system users
C: The organization’s risk tolerance
D: The organization’s compensating controls
Correct Answer:
The organization’s risk tolerance
Explanation
Risk tolerance is the specific, quantifiable level of risk that an organization is willing to accept for a particular objective or system. It sets the operational thresholds for risk. The level of risk monitoring activities—such as their frequency, depth, and the resources allocated—is directly determined by these tolerance levels. Monitoring is designed to detect when the AI system's risk profile approaches or exceeds these predefined thresholds, triggering a response. Therefore, risk tolerance is the most direct and important factor in calibrating the required monitoring effort.
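As an illustrative sketch only, monitoring intensity can be calibrated to how much headroom remains before a KRI reaches its tolerance threshold; the figures and monitoring tiers below are assumptions.

```python
# Calibrating monitoring intensity from risk tolerance (illustrative values).
risk_tolerance = 0.05          # maximum acceptable model error rate
current_kri = 0.042            # latest observed error rate

headroom = (risk_tolerance - current_kri) / risk_tolerance

if headroom < 0:
    plan = "continuous monitoring and immediate response"
elif headroom < 0.25:
    plan = "daily monitoring with alerting"
else:
    plan = "weekly monitoring"

print(f"Headroom to tolerance: {headroom:.0%} -> {plan}")
```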
Why Incorrect Options are Wrong

A. The organization’s risk appetite: Risk appetite is a broader, high-level statement about the general amount of risk an organization is willing to seek, which is less precise for defining specific monitoring activities than risk tolerance.

B. The organization’s number of AI system users: The number of users is a factor in assessing the potential impact of a risk, but it does not solely determine the necessary level of monitoring for all associated risks.

D. The organization’s compensating controls: Compensating controls are part of the risk treatment plan that influences the residual risk level. The decision on how intensely to monitor this residual risk is based on its proximity to the risk tolerance threshold.

References

1. ISACA, "The Risk IT Framework, 2nd Edition," 2020. In the section on Risk Response (Process RE2), it states, "Define key risk indicators (KRIs) and tolerance levels... Monitoring KRIs against tolerance levels provides a forward-looking view of potential risk." This directly links tolerance levels to the act of monitoring. (Specifically, see Figure 13—Process RE2: Articulate Risk, p. 43).

2. ISACA, "Artificial Intelligence Audit and Assurance Framework," 2023. In the AI Risk Management domain (Section 3.3), the framework outlines the process of establishing risk tolerance as a foundational step. It notes that continuous monitoring and review processes are established to ensure that AI risks remain within these defined tolerance levels.

3. NIST, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," January 2023. The GOVERN function of the framework emphasizes establishing risk tolerance as a core component of an organization's risk management culture. The MEASURE function then involves "tracking of metrics for identified AI risks" against these established tolerances to enable effective risk management. (See Section 4.1 GOVERN and Section 4.3 MEASURE).

Question 12

Which of the following security framework elements BEST helps to safeguard the integrity of outputs generated by AI algorithms?
Options
A: Risk exposure due to bias in AI outputs is kept within an acceptable range
B: Ethical standards are incorporated into security awareness programs
C: Management is prepared to disclose AI system architecture to stakeholders
D: Responsibility is defined for legal actions related to AI regulatory requirements
Correct Answer:
Risk exposure due to bias in AI outputs is kept within an acceptable range
Explanation
The integrity of an AI's output refers to its accuracy, reliability, and trustworthiness. Bias in training data or the algorithm itself is a primary threat that directly undermines output integrity by producing skewed, unfair, or systematically erroneous results. A security framework element that requires managing and mitigating the risk of bias to an acceptable level is the most direct and effective control for safeguarding the integrity of AI-generated outputs. This approach treats bias as a specific risk to be managed, ensuring the outputs are as reliable and accurate as intended.
Why Incorrect Options are Wrong

B. Ethical standards in awareness programs are a cultural control that influences human behavior but does not directly implement a technical or procedural safeguard on the AI algorithm's output.

C. Disclosing system architecture promotes transparency, which helps in auditing and building trust, but it does not inherently prevent or correct integrity issues like bias within the outputs.

D. Defining responsibility for legal actions is a governance control focused on accountability and consequence management, not a preventative measure to ensure the integrity of the AI's outputs.

References

1. ISACA, Artificial Intelligence Audit Toolkit, 2023. In the "AI Risks and Controls" section, the toolkit explicitly identifies "Bias and Fairness" as a major risk category. It states, "Biased AI systems can lead to unfair or discriminatory outcomes, reputational damage, and legal and regulatory non-compliance." The recommended controls focus on testing and validation to ensure fairness, directly linking bias management to the integrity and reliability of AI outputs. (Specifically, see the risk domain "Bias and Fairness" within the toolkit's control framework).

2. ISACA, Auditing Artificial Intelligence, 2021. This guide discusses key risk areas for AI systems. On page 22, under the section "Data and Algorithm Biases," it is noted that "Bias can be introduced at any stage of the AI life cycle... leading to inaccurate and untrustworthy results." This directly connects the management of bias to the trustworthiness (integrity) of AI results.

3. Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1). This academic paper outlines principles for ethical AI. The principle of "Beneficence" (promoting well-being, preserving dignity, and sustaining the planet) implicitly requires that AI systems do not cause harm through biased or inaccurate outputs. Managing bias is a prerequisite for ensuring an AI system's outputs are not just ethical but also correct and reliable, thus preserving their integrity. (DOI: https://doi.org/10.1162/99608f92.54265125, Section 4.1).

Question 13

Which of the following should be a PRIMARY consideration when defining recovery point objectives (RPOs) and recovery time objectives (RTOs) for generative AI solutions?
Options
A: Preserving the most recent versions of data models to avoid inaccuracies in functionality
B: Prioritizing computational efficiency over data integrity to minimize downtime
C: Ensuring the backup system can restore training data sets within the defined RTO window
D: Maintaining consistent hardware configurations to prevent discrepancies during model restoration
Correct Answer:
Ensuring the backup system can restore training data sets within the defined RTO window
Explanation
The definition of Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for any system must be grounded in the technical feasibility of restoration. For generative AI solutions, which are built upon massive training datasets, the time required to restore these datasets is often the most significant bottleneck in a disaster recovery scenario. A realistic RTO is therefore primarily constrained by the ability to recover this data. While the trained model is a critical asset, a complete recovery or model retraining is impossible without the underlying training data, making its restoration a foundational consideration for defining achievable business continuity objectives.
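A quick feasibility check of the kind the answer describes, with hypothetical figures: the estimated restore time for the training data is compared against the proposed RTO.

```python
# Can the training data be restored within the RTO? (illustrative figures)
training_data_tb = 40                 # size of training datasets
restore_throughput_tb_per_hour = 5    # measured backup-system restore rate
rto_hours = 12                        # proposed recovery time objective

estimated_restore_hours = training_data_tb / restore_throughput_tb_per_hour
print(f"Estimated restore time: {estimated_restore_hours:.1f} h (RTO = {rto_hours} h)")

if estimated_restore_hours > rto_hours:
    print("RTO is not achievable - revise the RTO or improve restore capacity.")
else:
    print("RTO is technically feasible for the training data.")
```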
Why Incorrect Options are Wrong

A. Preserving the model is important, but the training data is more fundamental. A full recovery plan must account for the source data, not just the resulting artifact.

B. Prioritizing computational efficiency over data integrity is a dangerous trade-off that could result in a restored system that is inaccurate, unreliable, or harmful.

D. Hardware consistency is an important implementation detail that facilitates recovery but does not primarily define the business-level RTO and RPO requirements themselves.


References

1. ISACA, Artificial Intelligence: An Audit and Assurance Framework, 2023.

Reference: Section 3.2.2, "Data Availability," emphasizes that the AI data pipeline must ensure data is available when needed for training and inference. It states, "The unavailability of data can halt AI operations, leading to service disruptions and financial losses." This underscores the primacy of data availability, which directly impacts the feasibility of RTOs for data restoration.

2. Huyen, C. (2022). Designing Machine Learning Systems: An Iterative Process for Production-Ready Applications. O'Reilly Media.

Reference: Chapter 11, "ML-Specific Infrastructure and Tooling," discusses the components of an ML system that require backup and recovery, including data, code, and models. The text implicitly supports that the recovery of large-scale data is a major challenge, stating, "For a stateful service, you need to figure out how to back up and restore its state... For many ML applications, the state is the data." This highlights that data is the core state to be recovered.

3. Google Cloud, Architecture for MLOps using TFX, Kubeflow Pipelines, and Cloud Build, 2022.

Reference: In the section on "Disaster Recovery (DR) planning," the documentation outlines the need to back up critical components of the ML system. It specifies backing up "The source of data, such as tables in BigQuery" and "The ML models in a model registry." This official vendor guidance confirms that the training data source is a primary component that must be included in recovery plans, and its restoration time is a key factor in the overall RTO.

Question 14

When documenting information about machine learning (ML) models, which of the following artifacts BEST helps enhance stakeholder trust?
Options
A: Hyperparameters
B: Data quality controls
C: Model card
D: Model prototyping
Show Answer
Correct Answer:
Model card
Explanation
A model card is a standardized documentation artifact designed to increase transparency and accountability for machine learning models. It provides a structured summary of a model's intended uses, performance metrics (often disaggregated across different groups), limitations, ethical considerations, and the data used for training and evaluation. By presenting this crucial information in a concise and accessible format, model cards enable various stakeholders—including developers, policymakers, and end-users—to understand the model's capabilities and risks. This transparency is a cornerstone for building trust in AI systems, as it demonstrates due diligence and provides a basis for informed decision-making.
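For illustration, a model card can be represented as structured data covering intended use, evaluation results, and limitations; the fields below loosely follow the categories proposed by Mitchell et al. (2019), and all values are placeholders.

```python
# Minimal model card represented as structured data (fields and values are illustrative).
import json

model_card = {
    "model_details": {"name": "loan-default-classifier", "version": "1.3"},
    "intended_use": "Rank loan applications for manual underwriter review",
    "out_of_scope_uses": ["Fully automated credit denial"],
    "training_data": "Internal loan outcomes 2018-2023 (anonymized)",
    "evaluation": {
        "overall_accuracy": 0.91,
        "accuracy_by_group": {"group_a": 0.92, "group_b": 0.89},
    },
    "limitations": "Performance degrades for applicants with thin credit files",
    "ethical_considerations": "Bias metrics reviewed quarterly",
}

print(json.dumps(model_card, indent=2))   # shareable with stakeholders alongside reports
```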
Why Incorrect Options are Wrong

A. Hyperparameters: These are low-level technical settings for the training algorithm. They are not meaningful to most stakeholders and do not describe the model's real-world performance or impact.

B. Data quality controls: While essential for building a reliable model, these are processes and metrics related to the input data, not a comprehensive summary document about the finished model itself.

D. Model prototyping: This is an early, experimental phase in the model development lifecycle. A prototype is not a formal documentation artifact for a deployed model and lacks the rigorous evaluation needed for trust.

References

1. ISACA, Artificial Intelligence Security and Management (AAISM) Study Guide, 1st Edition, 2024. Chapter 3, "AI Model Development and Training," emphasizes the need for comprehensive documentation to ensure transparency and accountability. It identifies model cards as a key tool for documenting model details, performance, and limitations to communicate effectively with stakeholders.

2. Mitchell, M., Wu, S., Zaldivar, A., et al. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220-229. This foundational academic paper introduces model cards as a framework to "encourage transparent model reporting" and provide stakeholders with essential information to "better understand the models." (DOI: https://doi.org/10.1145/3287560.3287596)

3. Stanford University, Center for Research on Foundation Models (CRFM). (2023). Transparency Section. The courseware and publications from Stanford's AI programs, such as those from the CRFM, consistently highlight the role of artifacts like model cards in achieving AI transparency and building trust. They are presented as a best practice for responsible AI development and deployment.

Question 15

Which of the following MOST effectively minimizes the attack surface when securing AI agent components during their development and deployment?
Options
A: Deploy pre-trained models directly into production.
B: Consolidate event logs for correlation and centralized analysis.
C: Schedule periodic manual code reviews.
D: Implement compartmentalization with least privilege enforcement.
Correct Answer:
Implement compartmentalization with least privilege enforcement.
Explanation
Implementing compartmentalization and least privilege enforcement is the most effective architectural strategy for minimizing the attack surface of AI agents. Compartmentalization, often achieved through containerization or microservices, isolates components so that a compromise in one part does not cascade to the entire system. The principle of least privilege ensures that each component has only the absolute minimum permissions required for its function. This dual approach proactively reduces the number of exploitable vulnerabilities and severely limits an attacker's ability to move laterally or escalate privileges if a component is breached.
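A conceptual Python sketch of least-privilege enforcement across compartmentalized agent components: each component is registered only with the actions it needs, and every requested action is checked against that allowlist. The component and permission names are invented for illustration.

```python
# Conceptual least-privilege enforcement for compartmentalized agent components.
ALLOWED_ACTIONS = {            # each component gets only what it needs
    "retriever": {"read_vector_store"},
    "planner": {"call_llm"},
    "executor": {"call_llm", "invoke_ticketing_api"},
}

def authorize(component: str, action: str) -> None:
    """Raise if a component requests an action outside its compartment."""
    if action not in ALLOWED_ACTIONS.get(component, set()):
        raise PermissionError(f"{component} may not perform {action}")

authorize("executor", "invoke_ticketing_api")        # permitted
try:
    authorize("retriever", "invoke_ticketing_api")   # blocked: outside its compartment
except PermissionError as err:
    print("Denied:", err)
```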
Why Incorrect Options are Wrong

A. Deploying pre-trained models directly into production significantly increases the attack surface by potentially introducing untrusted code, vulnerabilities, or data poisoning from the model's source.

B. Log consolidation is a crucial detective control for monitoring and incident response. However, it does not proactively reduce or minimize the attack surface itself; it helps detect attacks on the existing surface.

C. Periodic manual code reviews are a valuable practice but are less effective than continuous, automated security measures. They are point-in-time checks and may not be as comprehensive as architecting the system for security from the ground up.

References

1. NIST Special Publication 800-53 (Rev. 5), Security and Privacy Controls for Information Systems and Organizations. The control AC-6 (Least Privilege) is a foundational security principle. The publication states, "The principle of least privilege is applied to the functions and services of information systems... to limit the potential for damage." This directly supports limiting component permissions to minimize the attack surface. (Section: AC-6, Page 111).

2. ISACA, Artificial Intelligence Audit Toolkit, 2023. Control objective GAI-04, "AI System Security," emphasizes the need to "secure the AI environment, including the underlying infrastructure, platforms, and data." This includes implementing robust access controls and segregation of duties (a form of compartmentalization) to protect AI components. (Section: GAI-04, AI System Security).

3. MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems). The framework lists key defensive measures for AI systems. The mitigation "System Isolation / Sandboxing" (AML.D0001) directly corresponds to compartmentalization, and "Access Control" (AML.D0002) aligns with least privilege enforcement as primary methods to thwart adversarial attacks.

4. Brundage, M., et al. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv:2004.07213. This academic paper discusses secure AI development practices, noting that "sandboxing and other forms of compartmentalization" are essential mechanisms for containing failures and malicious behavior in AI systems, thereby reducing the effective attack surface. (Section 4.2, Secure and Resilient Hardware and Software). DOI: https://doi.org/10.48550/arXiv.2004.07213.
