Master the ISACA AAISM Exam with Up-to-Date Practice Questions for 2025
At Cert Empire, we provide professionals with the most accurate and current ISACA AAISM exam questions to help them succeed in AI audit and governance roles. Our resources focus on real-world scenarios around data security, AI compliance, and risk management. To make your preparation more accessible, we’ve made parts of our ISACA AAISM materials free for everyone. Use the AAISM Practice Test to assess your readiness and refine your exam strategy confidently.
Question 1
A. Board approval of policies establishes the high-level governance framework but does not provide the operational, ongoing visibility into specific AI systems being used.
B. A vendor dashboard is incomplete as it overlooks internally developed AI systems and does not provide the necessary detail on how specific applications are being used.
D. Measuring impact with KPIs is a post-implementation activity focused on performance and value. It relies on first having visibility, which the inventory provides.
1. ISACA, Artificial Intelligence Audit Framework, 2023. In the "AI Governance" domain, Control Objective GOV-02, "AI Inventory Management," states the need to "Establish and maintain a comprehensive inventory of all AI systems used within the organization to ensure proper oversight and management." This directly supports the inventory as the key to visibility.
2. ISACA, Auditing Artificial Intelligence, 2021. Page 13, under the section "Develop an AI Audit Plan," specifies, "The first step in developing an AI audit plan is to create an inventory of AI use cases... The inventory should be a living document that is updated as new AI use cases are identified." This highlights the inventory as the primary tool for awareness and visibility.
3. Kozyrkov, C. (2020). AI Governance: A Primer for Boards of Directors. Stanford University Human-Centered AI Institute (HAI). This publication, while aimed at boards, implicitly supports the need for inventories by discussing the board's responsibility for overseeing AI risks. Effective oversight is impossible without a clear inventory of what AI systems the organization possesses. The concept is foundational to the "Know Your AI" principle of governance.
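To make the "Know Your AI" inventory idea concrete, the sketch below shows the kind of minimal record an AI inventory might capture for each system. The field names and example systems are illustrative assumptions, not taken from any ISACA template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory (illustrative fields only)."""
    name: str
    business_owner: str
    purpose: str
    data_sources: list[str]
    risk_tier: str              # e.g., "high", "medium", "low"
    vendor: str | None = None   # None for internally developed systems
    last_reviewed: date = field(default_factory=date.today)

# Internally built and vendor-supplied systems sit in the same inventory,
# which is the visibility a vendor-only dashboard (option B) cannot provide.
inventory = [
    AISystemRecord("Invoice fraud scorer", "Finance", "Flag anomalous invoices",
                   ["ERP transactions"], "high"),
    AISystemRecord("HR resume screener", "HR", "Shortlist applicants",
                   ["Applicant tracking system"], "high", vendor="ExampleVendor"),
]
```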
Question 2
A. This analysis is a crucial activity, but it should be performed iteratively throughout the AI lifecycle, not just as a final pre-deployment step.
B. A prototype is an early-stage model used for proof-of-concept and feasibility studies; it is not the version that would be placed into production.
C. Senior management sign-off is a critical governance gate, but this approval is fundamentally dependent on the successful results and evidence produced by the TEVV process.
1. ISACA, Artificial Intelligence Audit Framework, 2023: Domain 4, "AI Model Development and Implementation," Control Objective AI.4.5 "Testing and Validation," states, "Ensure that the AI model undergoes rigorous testing and validation to verify its performance, accuracy and reliability before deployment." This highlights TEVV as the essential pre-deployment verification step.
2. National Institute of Standards and Technology (NIST), AI Risk Management Framework (AI RMF 1.0), January 2023: The "Measure" function (Section 3.3, page 17) is dedicated to activities that assess AI risks. It explicitly includes "Testing, Evaluation, Validation, and Verification (TEVV)" as a core category (MEASURE 1), emphasizing that these evaluations are necessary to make informed decisions about AI system deployment and to ensure it functions as intended.
3. Stanford University, CS 329S: Machine Learning Systems Design, Winter 2021 Lecture 8 "MLOps & Tooling": The courseware outlines the ML project lifecycle, where rigorous testing and evaluation are depicted as the final technical stage before a model is "pushed to production." This confirms that comprehensive testing is the immediate precursor to deployment in established MLOps practices.
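The TEVV-as-deployment-gate idea can be sketched as follows: a candidate model is only promoted if it clears a validation threshold on held-out data. The metric, threshold, and data here are assumptions for illustration, not part of the exam scenario or the cited frameworks.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical acceptance criterion agreed before development began.
MIN_ACCURACY = 0.90

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, model.predict(X_test))

# The TEVV evidence (test_accuracy) is what management sign-off (option C) relies on.
if test_accuracy >= MIN_ACCURACY:
    print(f"Validated at {test_accuracy:.2%}: eligible for deployment approval")
else:
    print(f"Failed validation at {test_accuracy:.2%}: block deployment")
```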
Question 3
A. Data pseudonymization: This is a specific technical data protection technique. While important, the right to audit is the contractual mechanism needed to verify that pseudonymization is actually being performed correctly.
B. Continuous data monitoring: This is an operational security control. The right to audit provides the means to ensure that this monitoring is in place, is effective, and meets contractual requirements.
C. Independent certification: While valuable, a certification (e.g., SOC 2, ISO 27001) provides point-in-time assurance and may not cover the specific scope of the AI implementation or the organization's unique data.
1. ISACA, Auditing Artificial Intelligence, 2021: This official ISACA publication states, "Contracts with third-party providers should include clauses that allow for the auditing of the AI system, including its algorithms, data and controls. This is especially important when the AI system is used for critical functions or processes." (Page 19, Section: "Third-party AI Systems"). This directly supports the necessity of audit rights in vendor agreements for AI systems.
2. ISACA, Artificial Intelligence Audit Toolkit, 2023: In the "AI Governance and Risk Management" domain, Program Step 1.4, "Evaluate Vendor Management," emphasizes reviewing contracts for key provisions. The ability to assess vendor compliance, which is enabled by a right-to-audit clause, is a core component of this evaluation. The toolkit's focus is on verifiable controls, and the right to audit is the primary contractual method for such verification.
3. Tsamados, A., et al. (2022). The ethics of algorithms: key problems and solutions. The Alan Turing Institute. While discussing AI governance and accountability, the paper highlights the need for "mechanisms for verification and audit" when relying on third-party systems. This academic consensus underscores that contractual audit rights are essential for external accountability. (Section 4.3, "Accountability"). DOI: https://doi.org/10.1080/25741292.2021.1976502
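As a side note on option A, pseudonymization itself can be illustrated with a minimal sketch such as the one below (keyed hashing of a direct identifier; the field names and key handling are assumptions). The right-to-audit clause is the contractual mechanism that lets the customer verify the vendor actually applies controls like this.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-in-a-key-vault"  # assumption: managed outside the dataset

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is repeatable for joins but not reversible without the key,
    which is what distinguishes pseudonymization from simple anonymization.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "C-10042", "purchase_total": 199.50}
record["customer_id"] = pseudonymize(record["customer_id"])
print(record)
```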
Question 4
A. Streamlining incident response testing: While AI can be used to create more sophisticated attack simulations for testing, its impact on real-time operational efficiency is less direct than its role in triage.
C. Improving incident response playbook: AI can analyze past incidents to suggest playbook improvements, but this is a strategic, post-incident activity, not a direct application that enhances the immediate response to an ongoing threat.
D. Ensuring chain of custody: Chain of custody is a critical forensic and procedural process. While AI can assist in logging and tracking digital evidence, ensuring its integrity is primarily reliant on cryptographic hashing and strict procedural controls, not AI-driven decision-making.
1. ISACA White Paper, Artificial Intelligence for a More Resilient Enterprise, 2021: This publication states, "AI can automate the initial triage of security alerts, freeing up security analysts to focus on more complex threats. This can help to reduce the time it takes to detect and respond to incidents, and it can also improve the accuracy of incident response." (Section: "AI for Cybersecurity," Paragraph 3).
2. NIST Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide, 2012: While published before the widespread adoption of AI, this foundational document emphasizes the importance of timely and accurate analysis in the "Detection and Analysis" phase (Section 3.2.2). Modern AI-driven Security Orchestration, Automation, and Response (SOAR) platforms directly address this need for speed and accuracy in triage, which is a core part of this phase.
3. Ullah, I., & Mahmoud, Q. H. (2022). A Comprehensive Survey on the Use of Artificial Intelligence in Cybersecurity: A Scoping Review. IEEE Access, 10, 55731-55754. https://doi.org/10.1109/ACCESS.2022.3177139: This peer-reviewed survey highlights that a primary application of AI in cybersecurity is to "handle the overwhelming number of alerts generated by security systems" by "automating the process of alert triage and prioritization." (Section IV.A, "Threat Detection and Incident Response").
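A simplified illustration of AI-assisted alert triage follows: a classifier trained on historical alerts scores new ones, and the queue is re-ordered so analysts see the highest-risk items first. The model, features, and data are hypothetical.

```python
# Minimal sketch of ML-assisted alert triage: a trained classifier scores
# incoming alerts and the queue is re-ordered by predicted severity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical historical alerts: [num_failed_logins, bytes_exfiltrated_mb, off_hours_flag]
X_hist = rng.random((500, 3))
y_hist = (X_hist[:, 1] > 0.7).astype(int)   # stand-in label: 1 = confirmed incident

triage_model = RandomForestClassifier(random_state=0).fit(X_hist, y_hist)

new_alerts = rng.random((5, 3))
scores = triage_model.predict_proba(new_alerts)[:, 1]

# Analysts work the queue from highest to lowest predicted risk.
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"#{rank} alert {idx} -> incident probability {scores[idx]:.2f}")
```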
Question 5
B. This describes a traditional, preventative (time-based) maintenance schedule, which does not leverage the real-time, condition-based data from the AI sensors mentioned.
C. This is a manual, administrative task. It is a reactive or periodic review rather than a dynamic, data-driven action enabled by the AI system.
D. The scenario describes AI for monitoring and data analysis, not for the physical execution of automated repairs, which is a different and more advanced capability.
1. ISACA. (2021). Auditing Artificial Intelligence White Paper. Page 8. The paper notes that AI applications can lead to "improved operational efficiency and reduced downtime," which is achieved through capabilities like predictive maintenance, directly supporting the concept of operational resilience.
2. Zonta, T., da Costa, C. A., da Rosa Righi, R., de Lima, M. J., da Trindade, E. S., & Li, G. P. (2020). Predictive maintenance in the Industry 4.0: A systematic literature review. Computers & Industrial Engineering, 150, 106889. Section 3.1 discusses how AI and machine learning models use sensor data to predict the "Remaining Useful Life (RUL)" of equipment, enabling maintenance to be scheduled to prevent failures. https://doi.org/10.1016/j.cie.2020.106889
3. Massachusetts Institute of Technology (MIT) OpenCourseWare. (2015). 2.830J Control of Manufacturing Processes (SMA 6303). Lecture 1: Introduction to Manufacturing Process Control. The course materials explain the principle of using in-process sensing to monitor process variables to detect and prevent deviations that could lead to defects or equipment failure, which is the foundational concept behind the scenario.
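The predictive-maintenance concept behind this scenario can be sketched as follows: a model trained on historical sensor readings estimates remaining useful life (RUL), and maintenance is scheduled when the estimate falls below a planning buffer. All feature names, coefficients, and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical history: [vibration_rms, bearing_temp_c, operating_hours] -> hours to failure
X_hist = rng.random((300, 3)) * [10, 80, 5000]
rul_hist = 2000 - 0.3 * X_hist[:, 2] - 20 * X_hist[:, 0] + rng.normal(0, 25, 300)

rul_model = LinearRegression().fit(X_hist, rul_hist)

MAINTENANCE_BUFFER_HOURS = 200   # assumed planning lead time

latest_reading = np.array([[6.2, 71.0, 4300.0]])
predicted_rul = rul_model.predict(latest_reading)[0]

# Condition-based trigger, rather than the fixed calendar schedule in option B.
if predicted_rul < MAINTENANCE_BUFFER_HOURS:
    print(f"Predicted RUL {predicted_rul:.0f} h: schedule maintenance now")
else:
    print(f"Predicted RUL {predicted_rul:.0f} h: continue monitoring")
```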
Question 6
A. This describes unsupervised learning (e.g., clustering), which finds inherent patterns to group data without relying on pre-existing labels.
B. While some advanced models can adjust in real time (online learning), the fundamental principle of supervised learning is training on a static, historical labeled dataset.
D. This describes techniques like data augmentation or synthetic data generation, which are used to supplement a training set, not the core learning mechanism itself.
1. ISACA. (2021). Artificial Intelligence for Auditing. "Supervised learning uses labeled data sets to train algorithms to classify data or predict outcomes accurately. With supervised learning, the enterprise provides the AI model with both inputs and desired outputs." (Page 8, "Supervised Learning" section).
2. Xin, Y., Kong, L., Liu, Z., Chen, Y., Li, Y., Zhu, H., ... & Wang, C. (2018). Machine learning and deep learning methods for cybersecurity. IEEE Access, 6, 35365-35381. "In supervised learning, the training data consist of a set of training examples, where each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal)." (Section II-A, Paragraph 1). https://doi.org/10.1109/ACCESS.2018.2837699
3. Ng, A. (2008). CS229 Machine Learning Course Notes. Stanford University. "In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output." (Part I, "Supervised Learning," Page 2).
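A minimal supervised-learning example (with invented data) makes the labeled input/output pairing described above concrete:

```python
# Supervised learning: the training set pairs each input with a known label,
# and the fitted model predicts labels for new, unseen inputs.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical labeled examples: [transaction_amount, foreign_ip_flag] -> fraud label
X_train = [[25.0, 0], [900.0, 1], [40.0, 0], [1200.0, 1], [15.0, 0], [700.0, 1]]
y_train = [0, 1, 0, 1, 0, 1]   # the "supervisory signal" supplied by the enterprise

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(clf.predict([[850.0, 1]]))   # classify a new, unlabeled transaction
```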
Question 7
A. AI system availability and downtime metrics: These are purely quantitative operational metrics that measure system reliability, not its transparency or decision-making logic.
B. AI model complexity and accuracy metrics: These are primarily quantitative performance and structural metrics. While complexity can inversely relate to interpretability, they do not offer a comprehensive view of transparency.
D. AI ethical impact and user feedback metrics: These are broader measures. Ethical impact is a high-level qualitative assessment, while user feedback measures perception rather than the system's intrinsic transparent properties.
1. National Institute of Standards and Technology (NIST). (2023). AI Risk Management Framework (AI RMF 1.0). Section 4.2.2, "MEASURE," discusses the need to identify metrics and methodologies to assess AI risks, including those related to bias and interpretability/explainability, and states, "Metrics may be qualitative or quantitative" (p. 24). Section 3.3, "Characteristics of Trustworthy AI," defines transparency as including explainability and interpretability, which involves providing access to information about how an AI system works (p. 14).
2. ISACA. (2023). Auditing Artificial Intelligence. Chapter 3, "AI Risks and Controls," explicitly links transparency to explainability, stating, "Transparency is the extent to which the inner workings of an AI system are understandable to humans... Explainable AI (XAI) is a set of techniques and methods that help to make AI systems more transparent" (p. 31). The chapter also details the risk of bias and the need for metrics to detect it (p. 33).
3. Arrieta, A. B., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82-115. Section 2, "The pillars of responsible AI," identifies transparency as a key pillar, which is achieved through explainability (qualitative descriptions) and the assessment of fairness and bias (often using quantitative metrics). DOI: https://doi.org/10.1016/j.inffus.2019.12.012
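To illustrate the pairing of a quantitative bias metric with qualitative-style explainability evidence, here is a hedged sketch; the data, group labels, and metric choice (demographic parity difference) are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.random((400, 3))
group = rng.integers(0, 2, 400)          # hypothetical protected attribute, not a model input
y = (X[:, 0] + 0.1 * group > 0.6).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict(X)

# Quantitative bias check: difference in selection rates between groups.
parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
print(f"Selection-rate gap between groups: {parity_gap:.2f}")

# Explainability evidence: which features drive the model's decisions.
for name, coef in zip(["feature_0", "feature_1", "feature_2"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```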
Question 8
A. Number of AI models deployed into production: This is a volume or activity metric. It indicates the scale of AI adoption and potential risk exposure but does not measure the effectiveness of the program managing that risk.
B. Percentage of critical business systems with AI components: This metric measures the organization's inherent risk or attack surface related to AI. It identifies where risk management is crucial but does not evaluate how well it is being performed.
D. Number of AI-related training requests submitted: This is an ambiguous indicator. It could signify a positive culture of risk awareness or, conversely, a lack of adequate foundational training, but it does not directly measure the program's control effectiveness.
1. NIST AI Risk Management Framework (AI RMF 1.0): The "Measure" function of the framework is dedicated to tracking risk management effectiveness. It states, "Measurement enables learning from experience and improves the design, development, deployment, and use of AI systems." A compliance metric directly aligns with this goal of evaluating and improving risk management practices. (Source: NIST AI 100-1, January 2023, Section 4.4, "Measure," Page 21).
2. ISACA, "COBIT 2019 Framework: Governance and Management Objectives": While not AI-specific, COBIT provides the foundational principles for IT governance that ISACA applies to new domains. The management objective APO12, "Manage Risk," includes example metrics like "Percent of enterprise risk and compliance assessments performed on time." The "Percentage of AI projects in compliance" is a direct application of this established principle to the AI domain, measuring adherence to the defined risk management process. (Source: COBIT 2019 Framework, APO12, Page 113).
3. Thelen, B. D., & Mikalef, P. (2023). "Artificial Intelligence Governance: A Review and Synthesis of the Literature." Academic literature on AI governance emphasizes the need for "mechanisms for monitoring and enforcement" to ensure compliance with internal policies and external regulations. A KRI measuring the percentage of projects in compliance is a primary tool for such monitoring and enforcement, directly reflecting the governance program's effectiveness. (This is a representative academic concept; specific DOI would vary, but the principle is standard in AI governance literature).
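As a trivial, hypothetical illustration, the KRI in the correct answer can be computed directly from the project portfolio and tracked over time as evidence of program effectiveness; project names and checks are invented.

```python
# KRI: percentage of AI projects in compliance with the AI risk management framework.
projects = {
    "chatbot":         {"risk_assessment_done": True,  "controls_tested": True},
    "demand_forecast": {"risk_assessment_done": True,  "controls_tested": False},
    "fraud_detection": {"risk_assessment_done": False, "controls_tested": False},
}

compliant = sum(all(checks.values()) for checks in projects.values())
kri = compliant / len(projects) * 100
print(f"AI projects in compliance: {kri:.0f}%")   # a low value signals a program effectiveness concern
```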
Question 9
A. Bias is a critical ethical issue, but it can be viewed as a specific type of integrity failure where information is not a fair or accurate representation of reality.
C. Generative AI's core function is to create, not restrict, information. Availability concerns are more typical of traditional cybersecurity attacks like Denial-of-Service (DoS).
D. Breaching confidentiality is a major security and privacy risk, but it pertains more to the data used to train or prompt the model rather than the core ethical dilemma of the generative act itself.
1. ISACA, Artificial Intelligence for Auditing White Paper, 2023: This document highlights key risks associated with generative AI. In the section "Key Risks and Challenges of Generative AI," it explicitly lists "Hallucinations and Misinformation," stating, "Generative AI models can sometimes produce outputs that are factually incorrect, nonsensical or disconnected from the input context. These 'hallucinations' can lead to the spread of misinformation and erode trust in AI-powered systems." This directly supports information integrity as a primary concern.
2. ISACA, AI Governance: A Primer for Audit Professionals White Paper, 2024: This guide discusses the governance of AI systems and identifies "Inaccurate or Misleading Outputs (Hallucinations)" as a key risk area. It emphasizes that "The potential for generative AI to produce plausible but incorrect or nonsensical information... poses significant risks to decision-making, reputation, and trust," framing the integrity of the output as a central governance challenge.
3. Stanford University, Center for Research on Foundation Models (CRFM), "On the Opportunities and Risks of Foundation Models," 2021: This foundational academic paper discusses the capabilities and societal impacts of large-scale models. Section 4.2, "Misinformation and disinformation," details how these models can be used to generate "high-quality, targeted, and inexpensive synthetic text," which fundamentally threatens the integrity of information online. (Available at: https://arxiv.org/abs/2108.07258)
Question 10
A. Control self-assessment (CSA): This is a broader management process for reviewing the adequacy of controls and managing risk, not the calculation of a specific technical performance metric.
B. Model validation: This is a distinct phase, typically pre-deployment, to ensure a model performs as expected on unseen data. The scenario describes ongoing operational monitoring, not initial validation.
D. Explainable decision-making: This pertains to understanding why an AI model makes a particular prediction (interpretability), not measuring its overall statistical performance or accuracy.
1. ISACA. (2023). Artificial Intelligence: An Audit and Assurance Framework. Section 4.3, Post-implementation Review, page 28, states, "Key performance indicators (KPIs) and key risk indicators (KRIs) should be established to monitor the performance of the AI system on an ongoing basis." The scenario describes exactly this: using a performance metric (error rate) for ongoing monitoring to make a business decision.
2. ISACA. (2019). Auditing Artificial Intelligence. Page 21, Performance Monitoring, emphasizes the need for "ongoing monitoring of the AI solution’s performance" after it goes live. It discusses metrics like accuracy, precision, and recall as key elements to track, reinforcing that such measures are used for continuous performance evaluation, which is the essence of a KPI in this context.
3. Davenport, T. H. (2018). The AI Advantage: How to Put the Artificial Intelligence Revolution to Work. MIT Press. Chapter 6, "Implementing AI Systems," discusses the importance of monitoring AI systems in production. It highlights that organizations must define metrics and KPIs to track performance and ensure the system continues to deliver business value, justifying its operational costs and existence. The error rate is a fundamental performance metric used for this purpose.
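Operationally, the KPI in this scenario could be tracked with something as simple as the sketch below, which compares the production error rate against an agreed target on an ongoing basis; the target, window, and sample values are assumptions.

```python
# Ongoing KPI monitoring: compare the live error rate of the deployed model
# against a target agreed at go-live (values are illustrative).
ERROR_RATE_TARGET = 0.05

def weekly_error_rate(predictions: list[int], actual_outcomes: list[int]) -> float:
    errors = sum(p != a for p, a in zip(predictions, actual_outcomes))
    return errors / len(predictions)

predictions = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
actuals     = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]

rate = weekly_error_rate(predictions, actuals)
status = "within target" if rate <= ERROR_RATE_TARGET else "breach - review whether the system still adds value"
print(f"Error rate {rate:.2%}: {status}")
```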
Question 11
A. The organization’s risk appetite: Risk appetite is a broader, high-level statement of the amount and type of risk an organization is willing to accept in pursuit of its objectives, which makes it less precise for defining specific monitoring activities than risk tolerance.
B. The organization’s number of AI system users: The number of users is a factor in assessing the potential impact of a risk, but it does not solely determine the necessary level of monitoring for all associated risks.
D. The organization’s compensating controls: Compensating controls are part of the risk treatment plan that influences the residual risk level. The decision on how intensely to monitor this residual risk is based on its proximity to the risk tolerance threshold.
1. ISACA, "The Risk IT Framework, 2nd Edition," 2020. In the section on Risk Response (Process RE2), it states, "Define key risk indicators (KRIs) and tolerance levels... Monitoring KRIs against tolerance levels provides a forward-looking view of potential risk." This directly links tolerance levels to the act of monitoring. (Specifically, see Figure 13—Process RE2: Articulate Risk, p. 43).
2. ISACA, "Artificial Intelligence Audit and Assurance Framework," 2023. In the AI Risk Management domain (Section 3.3), the framework outlines the process of establishing risk tolerance as a foundational step. It notes that continuous monitoring and review processes are established to ensure that AI risks remain within these defined tolerance levels.
3. NIST, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," January 2023. The GOVERN function of the framework emphasizes establishing risk tolerance as a core component of an organization's risk management culture. The MEASURE function then involves "tracking of metrics for identified AI risks" against these established tolerances to enable effective risk management. (See Section 4.1 GOVERN and Section 4.3 MEASURE).
Question 12
B. Embedding ethical standards in awareness programs is a cultural control that influences human behavior but does not directly implement a technical or procedural safeguard on the AI algorithm's output.
C. Disclosing system architecture promotes transparency, which helps in auditing and building trust, but it does not inherently prevent or correct integrity issues like bias within the outputs.
D. Defining responsibility for legal actions is a governance control focused on accountability and consequence management, not a preventative measure to ensure the integrity of the AI's outputs.
1. ISACA, Artificial Intelligence Audit Toolkit, 2023. In the "AI Risks and Controls" section, the toolkit explicitly identifies "Bias and Fairness" as a major risk category. It states, "Biased AI systems can lead to unfair or discriminatory outcomes, reputational damage, and legal and regulatory non-compliance." The recommended controls focus on testing and validation to ensure fairness, directly linking bias management to the integrity and reliability of AI outputs. (Specifically, see the risk domain "Bias and Fairness" within the toolkit's control framework).
2. ISACA, Auditing Artificial Intelligence, 2021. This guide discusses key risk areas for AI systems. On page 22, under the section "Data and Algorithm Biases," it is noted that "Bias can be introduced at any stage of the AI life cycle... leading to inaccurate and untrustworthy results." This directly connects the management of bias to the trustworthiness (integrity) of AI results.
3. Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1). This academic paper outlines principles for ethical AI. The principle of "Beneficence" (promoting well-being, preserving dignity, and sustaining the planet) implicitly requires that AI systems do not cause harm through biased or inaccurate outputs. Managing bias is a prerequisite for ensuring an AI system's outputs are not just ethical but also correct and reliable, thus preserving their integrity. (DOI: https://doi.org/10.1162/99608f92.54265125, Section 4.1).
Question 13
A. Preserving the model is important, but the training data is more fundamental. A full recovery plan must account for the source data, not just the resulting artifact.
B. Prioritizing computational efficiency over data integrity is a dangerous trade-off that could result in a restored system that is inaccurate, unreliable, or harmful.
D. Hardware consistency is an important implementation detail that facilitates recovery but does not primarily define the business-level RTO and RPO requirements themselves.
1. ISACA, Artificial Intelligence: An Audit and Assurance Framework, 2023. Section 3.2.2, "Data Availability," emphasizes that the AI data pipeline must ensure data is available when needed for training and inference. It states, "The unavailability of data can halt AI operations, leading to service disruptions and financial losses." This underscores the primacy of data availability, which directly impacts the feasibility of RTOs for data restoration.
2. Huyen, C. (2022). Designing Machine Learning Systems: An Iterative Process for Production-Ready Applications. O'Reilly Media. Chapter 11, "ML-Specific Infrastructure and Tooling," discusses the components of an ML system that require backup and recovery, including data, code, and models. The text implicitly supports that the recovery of large-scale data is a major challenge, stating, "For a stateful service, you need to figure out how to back up and restore its state... For many ML applications, the state is the data." This highlights that data is the core state to be recovered.
3. Google Cloud, Architecture for MLOps using TFX, Kubeflow Pipelines, and Cloud Build, 2022. In the section on disaster recovery (DR) planning, the documentation outlines the need to back up critical components of the ML system. It specifies backing up "The source of data, such as tables in BigQuery" and "The ML models in a model registry." This official vendor guidance confirms that the training data source is a primary component that must be included in recovery plans, and its restoration time is a key factor in the overall RTO.
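As an illustration of why the training data drives RTO/RPO planning, the sketch below checks whether the most recent training-data snapshot still satisfies the agreed recovery point objective and whether the estimated restore time fits the recovery time objective. All timestamps and objectives are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical recovery objectives agreed with the business.
RPO = timedelta(hours=24)   # maximum tolerable data loss
RTO = timedelta(hours=8)    # maximum tolerable restore time

last_training_data_snapshot = datetime(2025, 1, 14, 2, 0)
now = datetime(2025, 1, 15, 9, 30)

data_loss_window = now - last_training_data_snapshot
estimated_restore_time = timedelta(hours=6)   # assumed estimate: restore data, then redeploy the model

print(f"RPO {'met' if data_loss_window <= RPO else 'exceeded'} (data loss window: {data_loss_window})")
print(f"RTO {'achievable' if estimated_restore_time <= RTO else 'at risk'} (estimated restore: {estimated_restore_time})")
```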
Question 14
A. Hyperparameters: These are low-level technical settings for the training algorithm. They are not meaningful to most stakeholders and do not describe the model's real-world performance or impact.
B. Data quality controls: While essential for building a reliable model, these are processes and metrics related to the input data, not a comprehensive summary document about the finished model itself.
D. Model prototyping: This is an early, experimental phase in the model development lifecycle. A prototype is not a formal documentation artifact for a deployed model and lacks the rigorous evaluation needed for trust.
1. ISACA, Artificial Intelligence Security and Management (AAISM) Study Guide, 1st Edition, 2024. Chapter 3, "AI Model Development and Training," emphasizes the need for comprehensive documentation to ensure transparency and accountability. It identifies model cards as a key tool for documenting model details, performance, and limitations to communicate effectively with stakeholders.
2. Mitchell, M., Wu, S., Zaldivar, A., et al. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220-229. This foundational academic paper introduces model cards as a framework to "encourage transparent model reporting" and provide stakeholders with essential information to "better understand the models." (DOI: https://doi.org/10.1145/3287560.3287596)
3. Stanford University, Center for Research on Foundation Models (CRFM). (2023). Transparency Section. The courseware and publications from Stanford's AI programs, such as those from the CRFM, consistently highlight the role of artifacts like model cards in achieving AI transparency and building trust. They are presented as a best practice for responsible AI development and deployment.
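A minimal model card sketch, with fields loosely following those proposed by Mitchell et al. (2019) and all values invented, shows why this artifact communicates trustworthiness to stakeholders in a way hyperparameters or prototypes cannot:

```python
# Minimal model card sketch (field names loosely follow Mitchell et al., 2019; content is hypothetical).
model_card = {
    "model_details": {"name": "Loan default classifier", "version": "1.3", "owner": "Credit Risk team"},
    "intended_use": "Pre-screening of consumer loan applications; not for final adverse decisions",
    "training_data": "Internal loan outcomes 2018-2023, excluding withdrawn applications",
    "evaluation": {"accuracy": 0.91, "recall_default_class": 0.84},
    "fairness_analysis": "Selection-rate gap across age bands below 3% on the holdout set",
    "limitations": "Not validated for small-business lending; performance degrades for thin credit files",
    "ethical_considerations": "Human review required before any declination",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```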
Question 15
A. Deploying pre-trained models directly into production significantly increases the attack surface by potentially introducing untrusted code, vulnerabilities, or data poisoning from the model's source.
B. Log consolidation is a crucial detective control for monitoring and incident response. However, it does not proactively reduce or minimize the attack surface itself; it helps detect attacks on the existing surface.
C. Periodic manual code reviews are a valuable practice but are less effective than continuous, automated security measures. They are point-in-time checks and may not be as comprehensive as architecting the system for security from the ground up.
1. NIST Special Publication 800-53 (Rev. 5), Security and Privacy Controls for Information Systems and Organizations. The control AC-6 (Least Privilege) is a foundational security principle. The publication states, "The principle of least privilege is applied to the functions and services of information systems... to limit the potential for damage." This directly supports limiting component permissions to minimize the attack surface. (Section: AC-6, Page 111).
2. ISACA, Artificial Intelligence Audit Toolkit, 2023. Control objective GAI-04, "AI System Security," emphasizes the need to "secure the AI environment, including the underlying infrastructure, platforms, and data." This includes implementing robust access controls and segregation of duties (a form of compartmentalization) to protect AI components. (Section: GAI-04, AI System Security).
3. MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems). The framework lists key defensive measures for AI systems. The mitigation "System Isolation / Sandboxing" (AML.D0001) directly corresponds to compartmentalization, and "Access Control" (AML.D0002) aligns with least privilege enforcement as primary methods to thwart adversarial attacks.
4. Brundage, M., et al. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv:2004.07213. This academic paper discusses secure AI development practices, noting that "sandboxing and other forms of compartmentalization" are essential mechanisms for containing failures and malicious behavior in AI systems, thereby reducing the effective attack surface. (Section 4.2, Secure and Resilient Hardware and Software). DOI: https://doi.org/10.48550/arXiv.2004.07213.
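A small, hypothetical sketch of the least-privilege idea applied to AI pipeline components: each component's granted permissions are compared with the minimum it actually needs, and any excess is flagged as avoidable attack surface. Component and permission names are invented.

```python
# Least-privilege check for AI pipeline components: flag any permission
# granted beyond what the component needs (all names are illustrative).
required = {
    "data_ingestion": {"read:raw_data", "write:feature_store"},
    "training_job":   {"read:feature_store", "write:model_registry"},
    "inference_api":  {"read:model_registry"},
}

granted = {
    "data_ingestion": {"read:raw_data", "write:feature_store"},
    "training_job":   {"read:feature_store", "write:model_registry", "read:raw_data"},
    "inference_api":  {"read:model_registry", "write:model_registry"},
}

for component, perms in granted.items():
    excess = perms - required[component]
    if excess:
        print(f"{component}: revoke {sorted(excess)} to shrink the attack surface")
    else:
        print(f"{component}: compliant with least privilege")
```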