Prepare Smarter for the CCSP Exam with Our Free and Accurate CCSP Exam Questions – Updated for 2025.
At Cert Empire, we focus on providing up-to-date, reliable exam questions for students preparing for the ISC2 CCSP exam. To help learners study better, we've made sections of our CCSP exam resources free for everyone. You can practice as much as you like with our free CCSP practice test.
Question 1
A. Directory synchronization is typically handled by protocols like the System for Cross-domain Identity Management (SCIM), not SAML.
B. This is a vague and non-standard phrase; SAML is a specific protocol for identity federation, not a general standard for "management logistics."
C. SAML is explicitly designed to avoid exchanging raw credentials like passwords; it uses secure, digitally signed assertions (tokens) instead.
1. National Institute of Standards and Technology (NIST). (2017). NIST Special Publication 800-63C: Digital Identity Guidelines: Federation and Assertions. Section 1.1, Introduction, states, "Federation allows a subject to use attributes from an identity provider (IdP) to authenticate to a relying party (RP), often in a different security domain... This document provides requirements on the use of federated identity protocols, such as Security Assertion Markup Language (SAML)..."
2. OASIS Security Services (SAML) TC. (2005). Security Assertion Markup Language (SAML) V2.0 Technical Overview. Committee Draft 01, 25 July 2005. Section 2.1, "SAML Solves the Web Browser SSO Problem," describes the core use case as enabling a principal (user) to authenticate to an IdP and then access a resource at a service provider by exchanging authentication and authorization information.
3. Purdue University. (2012). Federated Identity Management. CERIAS Tech Report 2012-10. Page 4 discusses SAML as a primary protocol for federated identity, stating, "SAML is an XML-based framework for communicating user authentication, entitlement, and attribute information. As its name suggests, SAML allows business entities to make assertions regarding the identity, attributes, and entitlements of a subject (an entity that is often a human user) to other entities..."
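For study purposes, the assertion model described above can be made concrete with a short sketch. The XML below is a simplified, hypothetical stand-in for a SAML assertion (real assertions follow the OASIS schema and carry an XML digital signature); the point is that the relying party consumes identity claims issued by the IdP, never the user's password.

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down SAML-style assertion for illustration only:
# the identity provider (IdP) vouches for the user, so no password ever
# reaches the relying party / service provider.
ASSERTION = """
<Assertion Issuer="https://idp.example.org">
  <Subject><NameID>alice@example.org</NameID></Subject>
  <AttributeStatement>
    <Attribute Name="role">cloud-admin</Attribute>
  </AttributeStatement>
</Assertion>
"""

def relying_party_view(assertion_xml: str) -> dict:
    """What the relying party consumes: identity claims, not credentials."""
    root = ET.fromstring(assertion_xml)
    return {
        "issuer": root.get("Issuer"),
        "subject": root.findtext("./Subject/NameID"),
        "attributes": {a.get("Name"): a.text
                       for a in root.findall("./AttributeStatement/Attribute")},
    }

print(relying_party_view(ASSERTION))
# {'issuer': 'https://idp.example.org', 'subject': 'alice@example.org',
#  'attributes': {'role': 'cloud-admin'}}
```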
Question 2
A. Ransomware: This is a type of malware. A WAF is not the primary defense; endpoint protection and anti-malware solutions are designed for this threat.
B. Syn floods: This is a network-layer (Layer 3/4) Denial of Service (DoS) attack. It is primarily mitigated by network firewalls and dedicated DDoS protection services, not WAFs.
D. Password cracking: This is an attack on authentication. While a WAF can help by rate-limiting login attempts, the primary defenses are strong password policies and multi-factor authentication.
1. National Institute of Standards and Technology (NIST). (2007). Guide to Secure Web Services (Special Publication 800-95). "A WAF is a device that is intended to protect a Web server from Web-based attacks... WAFs can protect against a variety of attacks, including buffer overflows, SQL injection, and cross-site scripting." (Section 4.3.2, Page 4-6).
2. The Open Web Application Security Project (OWASP). Web Application Firewall. "A web application firewall (WAF) is an application firewall for HTTP applications. It applies a set of rules to an HTTP conversation. Generally, these rules cover common attacks such as cross-site scripting (XSS) and SQL injection." (OWASP Foundation, Web Application Firewall page, Introduction).
3. Papamartzivanos, D., Mármol, F. G., & Kambourakis, G. (2017). Introducing an intelligent engine for thwarting application-layer DDoS attacks. Journal of Information Security and Applications, 35, 49-59. "Web Application Firewalls (WAFs) are security solutions that aim to protect web applications from a plethora of attacks, such as SQL injection (SQLi), Cross-Site Scripting (XSS), and Remote File Inclusion (RFI)." (Section 1, Introduction, Paragraph 1). https://doi.org/10.1016/j.jisa.2017.06.002
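To see why a WAF's sweet spot is application-layer attacks such as XSS and SQL injection, consider a deliberately naive rule-matching sketch. The patterns and parameter names below are invented for illustration; real WAFs (for example, ModSecurity with the OWASP Core Rule Set) use far richer parsing, normalization, and anomaly scoring.

```python
import re

# Toy WAF-style signatures, for illustration only.
RULES = {
    "sql_injection": re.compile(r"(?:'|\")\s*or\s+1\s*=\s*1|union\s+select", re.I),
    "cross_site_scripting": re.compile(r"<\s*script|onerror\s*=", re.I),
}

def inspect_request(params: dict) -> list:
    """Return the names of any rules triggered by HTTP request parameters."""
    return [name for name, pattern in RULES.items()
            if any(pattern.search(value) for value in params.values())]

print(inspect_request({"user": "admin' OR 1=1 --",
                       "q": "<script>alert(1)</script>"}))
# ['sql_injection', 'cross_site_scripting']
```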
Question 3
APIs are defined as which of the following?
A. This option is incomplete as it omits the crucial elements of routines and standards, which are fundamental parts of an API's definition.
C. This is too narrow. While APIs involve standards, they also explicitly define the routines, protocols, and tools needed for interaction.
D. This option is missing standards and protocols, which are essential for ensuring consistent and predictable communication between applications.
1. National Institute of Standards and Technology (NIST), Special Publication 800-204, Security Strategies for Microservices-based Application Systems, December 2019. In Section 2.1, "Acronyms," an API is defined as: "A set of routines, protocols, and tools for building software and applications." This directly supports the components listed in the correct answer.
2. Google Cloud Documentation, "What is an API?". The official documentation states: "An API is a set of routines, protocols, and tools for building software applications. An API specifies how software components should interact." This definition aligns perfectly with the chosen answer.
3. Red Hat Official Documentation, "What is an API?". The documentation defines an API as: "a set of definitions and protocols for building and integrating application software." This reinforces that an API is more than just one component, encompassing definitions (which include routines and standards) and protocols.
Question 4
B. This description is too general. While data masking does protect sensitive data, this statement could also describe encryption, tokenization, or access controls. It lacks the specificity of creating a substitute dataset.
C. This describes a specific masking technique known as truncation or partial masking (e.g., showing only the last four digits of a credit card), not the overarching concept of data masking.
D. This describes the "nulling out" or redaction technique, which is only one of many methods used in data masking. It is not a comprehensive definition of the entire process.
1. Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) v4.0.7, Control ID: DSP-10 (Data Masking and Obfuscation). The control specification states, "Data masking, obfuscation, or anonymization shall be used to protect sensitive data (e.g., PII) in non-production environments (e.g., development, testing)." This directly supports the use case described in option A.
2. NIST Special Publication 800-122, Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), Section 5.4.2, discusses de-identification techniques. It describes masking as a method to "replace PII with fictitious data that has a similar format and data type to the original PII." This aligns with creating inauthentic but structurally similar datasets.
3. Kadlag, S., & Jadhav, S. A. (2015). Data Masking as a Service. International Journal of Computer Applications, 116(19), 1-4. The paper states, "The main reason for applying data masking is to protect sensitive data, while providing a functional substitute for occasions when the real data is not required. For example, in user training, or software testing." (Page 1, Section 1: Introduction). DOI: 10.5120/20443-2821.
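The "structurally similar but inauthentic" idea is easiest to grasp with a format-preserving masking sketch. This is a minimal illustration for study, not a production masking tool; the record and field names are hypothetical.

```python
import random
import string

def mask_value(value: str) -> str:
    """Swap each character for a random one of the same class, so the
    substitute keeps the original length, layout, and separators."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(random.choice(string.digits))
        elif ch.isalpha():
            out.append(random.choice(string.ascii_letters))
        else:
            out.append(ch)  # keep separators such as '-' or ' '
    return "".join(out)

production_record = {"name": "Alice Jones", "ssn": "123-45-6789"}
test_record = {field: mask_value(value) for field, value in production_record.items()}
print(test_record)  # e.g. {'name': 'Xqkrb Mvtwe', 'ssn': '804-31-5527'}
```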
Question 5
B. This describes a specific use case for a sandbox (malware analysis), not the best overall definition of what a sandbox is.
C. This describes a secure enclave or a specific transactional security mechanism, which is a different concept from a general-purpose sandbox for code execution.
D. A core principle of sandboxing for testing and development is to keep it separate from the production environment to prevent any risk of compromise or instability.
1. National Institute of Standards and Technology (NIST). (n.d.). Sandbox. In CSRC Glossary. Retrieved from https://csrc.nist.gov/glossary/term/sandbox. The glossary defines a sandbox as: "A restricted, controlled execution environment that prevents potentially malicious software, such as mobile code, from accessing any system resources except for the isolated resources permitted." This supports the concept of an isolated, safe space.
2. Parno, B. (2004). The Security Architecture of the MVM Framework. Stanford University, Computer Science Department. In Section 2.1, "Sandboxing," it is stated: "The goal of a sandbox is to provide a restricted environment in which to run untrusted code. The sandbox is responsible for ensuring that the untrusted code cannot perform any malicious actions..." This aligns with the principle of a safe, isolated environment for untrusted code.
3. Zeldovich, N., & Kaashoek, F. (2014). 6.858 Computer Systems Security, Lecture 4: Confinement. Massachusetts Institute of Technology: MIT OpenCourseWare. The lecture notes state the goal of sandboxing is to "confine a process, so it can't do bad things... Run process in a restricted environment." This emphasizes the isolation and safety aspects, separate from a main system.
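The isolation principle behind a sandbox can be illustrated with a minimal process-level sketch, assuming a POSIX host (the resource module and preexec_fn are not available on Windows). Production sandboxes add namespaces, seccomp filters, or full virtual machines; the snippet, limits, and sample code below are illustrative only.

```python
import resource
import subprocess
import sys

def limit_resources():
    """Cap CPU time and address space before the untrusted code starts."""
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                      # 2 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))   # 512 MiB

UNTRUSTED_SNIPPET = "print(sum(range(10)))"  # stands in for code under test

result = subprocess.run(
    [sys.executable, "-I", "-c", UNTRUSTED_SNIPPET],  # -I: isolated interpreter
    capture_output=True, text=True, timeout=5,
    preexec_fn=limit_resources, env={},               # no inherited secrets
)
print(result.stdout.strip())  # -> 45
```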
Question 6
A localized incident or disaster can be addressed in a cost-effective manner by using which of the following?
A. UPS: An Uninterruptible Power Supply (UPS) only provides short-term backup power for brief outages and is not a solution for a broader disaster.
B. Generators: Generators address longer-term power failures but are a costly capital investment and only mitigate a single type of incident, not a comprehensive disaster.
D. Strict adherence to applicable regulations: This is a mandatory compliance activity, not a disaster recovery strategy. While it may improve resilience, it does not provide a direct mechanism for recovery.
1. National Institute of Standards and Technology (NIST) Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Section 4.3.2, "Alternate Site," discusses reciprocal agreements as a low-cost option, stating, "Reciprocal agreements are typically the lowest-cost option to implement; however, they are very difficult to enforce." This supports the "cost-effective" nature of the solution.
2. Federal Financial Institutions Examination Council (FFIEC) IT Examination Handbook, Business Continuity Management Booklet. Appendix D, "Alternate Site Options," describes reciprocal agreements: "A reciprocal agreement is typically a no-cost or low-cost option for business continuity... The primary advantage of a reciprocal agreement is the low cost to initiate and maintain the agreement."
3. ISO/IEC 27031:2011, Information technology — Security techniques — Guidelines for information and communication technology readiness for business continuity. Section 6.4.3, "Recovery facilities," outlines various options for recovery. Mutual agreements are presented as an alternative to more expensive options like dedicated internal or external commercial sites, highlighting their role in a cost-benefit analysis for BC/DR planning.
Question 7
A. Breach alert: This is a function of security information and event management (SIEM) or intrusion detection/prevention systems (IDS/IPS), not a power management device.
B. Confidentiality: This is a data security control, typically achieved through encryption and access control mechanisms, and is unrelated to power supply functions.
C. Communication redundancy: This is a network availability strategy that involves multiple, independent communication links or paths to prevent a single point of failure.
1. Massachusetts Institute of Technology (MIT) Lincoln Laboratory. (2011). Uninterruptible Power Supply (UPS) Systems. In Engineering Division Design and Engineering Standards, Section 10.1. On page 10.1-2, it states, "A UPS is used to provide clean, conditioned, and uninterrupted AC power to a critical load." It further details how different UPS types handle power conditioning.
2. Rassool, N., & Manyage, M. (2017). A review of uninterruptible power supplies. 2017 IEEE AFRICON, Cape Town, South Africa, pp. 1114-1119. In Section II, "UPS Topologies," the paper describes how line-interactive and online UPS systems "provide power conditioning" and "filter the input power," contrasting them with the more basic standby UPS. (DOI: 10.1109/AFRICON.2017.8095601)
3. Schneider Electric. (2011). The Different Types of UPS Systems (White Paper 1, Rev. 7). In the section "Line-interactive UPS" (p. 4), it states, "This type of UPS is also able to correct minor power fluctuations (under-voltages and over-voltages) without switching to battery." This voltage regulation is a key aspect of line conditioning.
Question 8
A. Disk space: Insufficient disk space can halt system operations, prevent applications from writing temporary files, and cause severe performance degradation, making it essential to monitor.
B. Disk I/O usage: High disk input/output (I/O) is a primary cause of performance bottlenecks, directly affecting application speed and data access times.
C. CPU usage: This is one of the most critical metrics for performance, as sustained high CPU utilization indicates the system is overloaded and cannot process tasks efficiently.
1. Amazon Web Services (AWS) Documentation: The official AWS documentation for Amazon CloudWatch, the monitoring service for AWS cloud resources, lists key metrics for EC2 instances (virtual servers). These include CPUUtilization, DiskReadOps, DiskWriteOps, and metrics for storage volumes. Print spooling is not included as a standard monitored metric for OS performance.
Source: Amazon Web Services, "Amazon EC2 CloudWatch Metrics," Amazon CloudWatch User Guide. (Specifically, the section on "Instance metrics").
2. Microsoft Azure Documentation: Similarly, Azure Monitor for VMs collects performance data from guest operating systems. Standard metrics include "% Processor Time" (CPU), "Logical Disk Bytes/sec" (Disk I/O), and "Logical Disk Free Space." Monitoring for a specific service like a print spooler is considered custom and not a default performance counter.
Source: Microsoft, "Overview of Azure Monitor for VMs," Microsoft Docs. (Specifically, the section on "Performance").
3. University Courseware: University-level operating systems courses emphasize the monitoring of core hardware resource utilization as the basis for performance evaluation. Lectures on system performance consistently focus on CPU scheduling, memory management, and I/O efficiency as the primary areas of concern.
Source: Ousterhout, J., "Lecture 1: Introduction," CS 140: Operating Systems, Stanford University, Winter 2018, pp. 21-23. (Discusses OS goals of performance, which are tied to managing CPU, memory, and I/O).
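For hands-on practice, the three metrics above can be sampled with a short sketch. It assumes the third-party psutil package is installed; alerting thresholds are deployment-specific and omitted here.

```python
import psutil  # third-party: pip install psutil

def collect_performance_metrics() -> dict:
    """Sample the core host metrics an operations team typically watches."""
    disk = psutil.disk_usage("/")
    io = psutil.disk_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),     # CPU usage
        "disk_free_gb": round(disk.free / 2**30, 1),       # disk space
        "disk_read_mb": round(io.read_bytes / 2**20, 1),   # disk I/O
        "disk_write_mb": round(io.write_bytes / 2**20, 1),
    }

print(collect_performance_metrics())
```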
Question 9
Identity and access management (IAM) is a security discipline that ensures which of the following?
A. This is incomplete. IAM includes identity verification (authentication) and lifecycle management, not just authorization.
C. This is incomplete. IAM also determines what resources an authenticated user is permitted to access (authorization).
D. This is the antithesis of IAM's purpose. IAM is designed to prevent unauthorized users from gaining access.
1. National Institute of Standards and Technology (NIST). (n.d.). What is Identity and Access Management (IAM)? NIST Computer Security Resource Center. Retrieved from https://csrc.nist.gov/projects/iam. In the overview, NIST states, "Identity and Access Management (IAM) is the security discipline that makes it possible for the right entities to use the right resources when they need to, without interference, using the devices they want to use."
2. Perrin, C. (2018). Foundations of Identity and Access Management. University of California, Berkeley, Information Security Office. In the "What is Identity and Access Management?" section, the document describes IAM as a framework for "ensuring the right people have the right access to the right resources at the right time."
3. Al-Khouri, A. M. (2012). Identity and Access Management. International Journal of Computer Science Issues (IJCSI), 9(5), 497-509. On page 498, the paper defines IAM as "a framework of policies and technologies for ensuring that the right users have the appropriate access to technology resources."
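The "right people, right resources, right time" definition maps onto two distinct checks, which the sketch below keeps separate. The user store, role names, and unsalted hash are hypothetical simplifications; real IAM systems rely on salted hashing or federated SSO, MFA, and identity lifecycle management.

```python
import hashlib

# Hypothetical identity store and role model, for illustration only.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}  # unsalted: demo only
ROLE_ASSIGNMENTS = {"alice": {"storage-admin"}}
ROLE_PERMISSIONS = {"storage-admin": {"bucket:read", "bucket:write"}}

def authenticate(user: str, password: str) -> bool:
    """Verify the claimed identity (who you are)."""
    return USERS.get(user) == hashlib.sha256(password.encode()).hexdigest()

def authorize(user: str, action: str) -> bool:
    """Check what the authenticated identity may do."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in ROLE_ASSIGNMENTS.get(user, set()))

if authenticate("alice", "correct horse") and authorize("alice", "bucket:read"):
    print("access granted")
```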
Question 10
A. Removing instances from the active production pool is a standard procedure to prevent users from accessing a system undergoing maintenance and to ensure a stable environment for the changes.
B. Continuous logging is crucial during maintenance to audit privileged activities, track changes, and ensure accountability, which is a fundamental security principle.
D. Preventing new logins is essential to protect data integrity and avoid disrupting user sessions while the system is in a potentially unstable state.
1. NIST Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations. The Maintenance (MA) control family, particularly MA-2 "Controlled Maintenance," outlines requirements for scheduling, performing, and documenting maintenance. The focus is on authorization, control, and review of maintenance activities, not on adding enhanced controls. The standard emphasizes maintaining a secure state through procedural controls. (See Section: MA-2, Page 203).
2. Cloud Security Alliance (CSA), Security Guidance for Critical Areas of Focus in Cloud Computing v4.0. Domain 5: "Cloud Security Operations" discusses the importance of a formal change management process. This process includes "logging and monitoring of privileged user activities" and ensuring changes are authorized. It does not mandate enhancing security controls during the maintenance window itself; rather, it requires that security is managed throughout the process. (See Domain 5, Page 103).
3. ISO/IEC 27017:2015, Code of practice for information security controls based on ISO/IEC 27002 for cloud services. Section 12.1.2, "Protection against malware," and 12.2, "Backup," imply that operational procedures, including maintenance, must be conducted in a way that preserves security. The guidance focuses on preventing the introduction of vulnerabilities and ensuring system integrity through controlled procedures, which aligns with logging and preventing access, but not necessarily enhancing controls.
Question 11
B. To reduce redundancy is a general IT architecture goal; it is not a direct trigger for changing a security configuration baseline.
C. A natural disaster initiates disaster recovery and business continuity plans, which typically involve restoring systems to their last approved baseline, not changing it.
D. Power fluctuation is a transient operational issue that requires an infrastructure-level response, not a modification of standardized system security configurations.
---
1. NIST Special Publication 800-128, Guide for Security-Focused Configuration Management of Information Systems. Section 2.4, "Configuration Control," discusses the process for managing changes. It states, "The effects of approved changes are assessed, and all relevant configuration management documentation (including the system's baseline configuration) is updated as necessary." A high volume of approved changes would necessitate an update to the baseline itself to reflect the new standard.
2. NIST Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations. Control CM-3, "Configuration Change Control," outlines the process for managing changes to the system. A pattern of consistent, approved deviations from the baseline (evidenced by numerous change requests) would trigger a review and potential update of the baseline defined in control CM-2, "Baseline Configuration," to ensure it remains "current."
3. Carnegie Mellon University, Software Engineering Institute (SEI), CERT Resilience Management Model (CERT-RMM) v1.2. The Configuration and Change Management (CCM) process area states that one of its goals is to "Establish and maintain the integrity of the configuration items of the service." A baseline that generates excessive change requests is failing to maintain integrity and relevance, indicating a need for re-evaluation and update. (See CCM Specific Goal 2).
Question 12
A: Users are the subjects (principals) being authenticated, not the identity provider. A CASB is a security enforcement point, not the relying party in this fundamental model.
B: This option incorrectly reverses the roles. The trusted third party is the identity provider, and the member organizations are the relying parties.
C: This describes a peer-to-peer or mesh federation model where each organization acts as both an IdP and an RP, not the trusted third-party model.
1. Cloud Security Alliance (CSA). (2017). Security Guidance for Critical Areas of Focus in Cloud Computing v4.0.
Page 128, Section: "Federation Models": "In a third-party model, all parties trust a single third party to act as the identity provider for all of them. The other parties are known as relying parties." This directly confirms that the third party is the IdP and the member organizations are the RPs.
2. National Institute of Standards and Technology (NIST). (2017). Special Publication (SP) 800-63-3: Digital Identity Guidelines.
Section 1.2, Definitions: Defines an Identity Provider (IdP) as the entity that "manages the subscriber's primary authentication credentials and issues assertions" and a Relying Party (RP) as the entity that "uses the services of one or more IdPs...to authenticate subscribers." In the scenario, the contracted third party issues assertions, and the member organizations use them.
3. Chadwick, D. W. (2009). Federated identity management. In Foundations of Security Analysis and Design V (pp. 116-145). Springer, Berlin, Heidelberg.
Section 3.1, "The Third Party Trust Model": "In the third party model, all parties trust a single third party to act as the identity provider for all of them. The other parties are known as relying parties or service providers." This academic source explicitly defines the roles as stated in the correct answer. (DOI: https://doi.org/10.1007/978-3-642-03829-74)
Question 13
B. While DAM is server-focused, "client-based" is not a standard DAM architecture; monitoring is centralized at the server or on the network path.
C. DAM is a monitoring and alerting control, whereas encryption is a confidentiality control. They are complementary and not interchangeable security functions.
D. DAM monitors real-time access to data, while data masking is a technique to obfuscate sensitive data. They serve different security purposes.
1. Al-Huit, M., & Al-Daraiseh, A. (2017). An Overview of Database Activity Monitoring Systems. International Journal of Computer Science and Network Security, 17(1), 135-141. In Section 3, "DAM Architecture," the paper explicitly states: "DAM systems can be classified into two main categories based on their architecture: network-based and host-based."
2. Shankar, G., & Wahidabanu, R. S. D. (2011). A Survey on Database Security. International Journal of Computer Science and Information Technologies, 2(4), 1553-1558. This survey discusses DAM techniques, describing "Sniffing based DAM" (network-based) and "Agent based DAM" (host-based) in Section IV, "Database Auditing."
3. Kamoun, F. (2017). A Roadmap for a Proactive Database Auditing System. Proceedings of the 19th International Conference on Enterprise Information Systems (ICEIS), 1, 499-506. https://doi.org/10.5220/0006361904990506. The paper discusses DAM architectures, noting that "DAM solutions can be either network-based or host-based" (Section 2.1, Paragraph 2).
Question 14
A. An annotated asset inventory is crucial documentation for prioritizing the recovery of critical systems and applications.
B. A flashlight is a fundamental and practical tool for safety and navigation during a disaster, which often involves power failures.
D. Documentation equipment, such as cameras, notepads, and pens, is necessary for assessing damage and tracking recovery activities.
---
1. National Institute of Standards and Technology (NIST) Special Publication 800-34 Rev. 1, "Contingency Planning Guide for Federal Information Systems."
Reference: Appendix G, "Disaster Recovery Kit," page G-1.
Content: This appendix provides a sample checklist for a disaster recovery kit. It explicitly lists items such as "Emergency supplies (first aid kit, flashlight, batteries, etc.)" and "Documentation supplies (pens, paper, etc.)." It also includes various forms of documentation like the contingency plan and contact lists, which aligns with the need for an asset inventory. The list does not include hard drives containing production data, underscoring that they are not a standard or recommended component due to security risks.
2. Cloud Security Alliance (CSA), "Security Guidance for Critical Areas of Focus in Cloud Computing v4.0."
Reference: Domain 5: Business Continuity and Disaster Recovery, Section 5.3.2 "Recovery," page 81.
Content: The guidance emphasizes restoring services from backups located in a secure, geographically separate location. It states, "The recovery process should include steps to restore the cloud-based services from backups..." This modern approach of using secure, remote backups for restoration is fundamentally at odds with the outdated and insecure practice of carrying physical hard drives in a portable recovery kit.
Question 15
A. Data breach alerting and reporting is a function of an incident response plan and security monitoring, not a configuration baseline itself.
B. A baseline is a technical configuration standard; it cannot cover all regulatory requirements, which also include administrative, procedural, and physical controls.
D. A process for version control is used to manage changes to the baseline document, but it is not the content or scope of the baseline itself.
---
1. National Institute of Standards and Technology (NIST) Special Publication 800-128, Guide for Security-Focused Configuration Management of Information Systems. Section 2.1, "Establish Configuration Baselines," states: "An organization establishes configuration baselines for its information systems and system components... Baselines are then applied to all systems of a given type across the organization." This directly supports the principle of broad application.
2. National Institute of Standards and Technology (NIST) Special Publication 800-70 Rev. 4, National Checklist Program for IT Products. Section 2.2, "Security Configuration Baselines," explains that baselines are used to "ensure that a consistent, secure configuration is used throughout the organization for its IT products." This reinforces the goal of consistency across the maximum number of systems.
3. Amazon Web Services (AWS) Well-Architected Framework, Security Pillar. In the design principles for securing compute resources (SEC 04), it advises to "Implement secure baselines" and explicitly states to "Apply these baselines to all your workloads." This vendor-neutral best practice from a major cloud provider underscores the importance of comprehensive coverage.
Question 16
A. Cloud auditor: This role is responsible for conducting independent assessments and audits of cloud services against standards and security controls, not for creating or developing the services.
B. Inter-cloud provider: This entity focuses on providing services for connectivity and interoperability between different cloud provider environments, not on the initial creation of the core service components.
C. Cloud service broker: This role acts as an intermediary that aggregates, integrates, or customizes existing cloud services from other providers; it does not typically create new services from scratch.
1. National Institute of Standards and Technology (NIST) Special Publication 500-292, Cloud Computing Reference Architecture.
Section 3.1, "Cloud Computing Roles": This document defines the primary actors in a cloud ecosystem. The activities described in the question (creating, testing, validating) are core functions of the Cloud Provider, the entity "responsible for making a service available." The Cloud Service Developer is the specific functional role within the Cloud Provider that performs these tasks. The document also defines the Cloud Auditor (p. 8) and Cloud Broker (p. 7) roles, clearly distinguishing their functions from development.
2. Garg, S. K., Versteeg, S., & Buyya, R. (2013). A framework for ranking of cloud computing services. Future Generation Computer Systems, 29(4), 1012-1023. https://doi.org/10.1016/j.future.2012.06.006
Section 2, "Cloud Computing": This academic paper discusses the cloud ecosystem and its actors. It implicitly defines the role of developers within Cloud Providers who are responsible for "developing and deploying applications" on the cloud infrastructure, which aligns with the creation and testing of services.
3. Erl, T., Puttini, R., & Mahmood, Z. (2013). Cloud Computing: Concepts, Technology & Architecture. Prentice Hall.
Chapter 4, "Fundamental Cloud Architecture," Section 4.3, "Advanced Cloud Architectures": While a commercial book, its concepts are foundational and taught in university curricula. It describes the roles within cloud environments, detailing how developers (or development teams) are responsible for creating the software, components, and services that are deployed and run on the cloud platform. This creation process includes testing and validation.
Question 17
A. Object: Object storage is designed for unstructured data, such as images, videos, and log files, storing them as self-contained objects with metadata, not in a relational schema.
B. Unstructured: This is a category of data, not a storage implementation. While a database might store unstructured data within a field (e.g., a BLOB), the database system itself imposes structure.
C. Volume: Volume (or block) storage provides raw storage blocks to a server's operating system. A database runs on top of this but logically organizes the data in a structured manner, not as raw blocks.
1. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50–58. In Section 3, "Classes of Utility Computing," the paper discusses storage services, implicitly differentiating between structured storage systems like Amazon's Relational Database Service and unstructured object storage.
2. Stanford University, CS 145 Introduction to Databases. Course materials frequently define a Database Management System (DBMS) as a system for managing collections of structured data. The relational model, which is the basis for most common databases, is defined by its structured nature (tables, attributes, tuples). (See general course descriptions for CS145 at https://cs.stanford.edu/courses/cs145).
3. National Institute of Standards and Technology (NIST). (2011). NIST Cloud Computing Reference Architecture (NIST Special Publication 500-292). In Section 5.3.2, "Cloud Storage Service Types," the document describes different storage abstractions. While not using the exact term "structured storage," it describes database services as distinct from block or object storage, highlighting their role in managing organized data sets.
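The contrast with object and volume storage is easiest to see in a few lines of code: a relational database imposes a schema (tables, typed columns, keys) on whatever it stores. The table and values below are hypothetical; any relational engine would illustrate the same point.

```python
import sqlite3

# Structured storage in miniature: the schema, not the application,
# defines how the data is organized and queried.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, tier TEXT)")
conn.execute("INSERT INTO customers (name, tier) VALUES (?, ?)", ("Acme Corp", "gold"))
print(conn.execute("SELECT name, tier FROM customers").fetchall())
# [('Acme Corp', 'gold')]
```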
Question 18
A. Data context: This is the responsibility of the data owner or data steward, who understands the business meaning and classification of the data.
B. Data content: The data owner is ultimately accountable for the accuracy, integrity, and quality of the data content itself.
D. Logging access and alerts: This is a specific function that falls under the custodian's broader duty of implementing security controls, making it an incomplete description of the role.
1. Cloud Security Alliance (CSA). (2017). Security Guidance for Critical Areas of Focus in Cloud Computing v4.0. Page 31, Section "Data Governance Roles". The document explicitly states, "The data custodian is responsible for the safe custody, transport, and storage of the data and implementation of business rules."
2. Johns Hopkins University. (n.d.). Roles and Responsibilities. Johns Hopkins Data Governance. Retrieved from https://datagovernance.jhu.edu/roles-and-responsibilities/. The university's official data governance documentation defines the Data Custodian's role as being "responsible for the safe custody, transport, and storage of the data, as well as the implementation and management of the business rules."
3. National Institute of Standards and Technology (NIST). (2006). Special Publication 800-18 Revision 1: Guide for Developing Security Plans for Federal Information Systems. Section 3.3, "Security Roles and Responsibilities," page 15. This guide distinguishes between the Information Owner, who establishes rules for data, and operational roles (akin to custodians) responsible for the technical implementation and maintenance of the systems housing the data.
Question 19
A. Identifying roles such as data owner, controller and processor: This is a significant legal and contractual challenge, as the shared responsibility model can create ambiguity that complicates legal discovery obligations.
B. Decentralization of data storage: This is a major challenge because data may be fragmented and replicated across multiple unknown geographic locations, making comprehensive identification and collection extremely difficult.
D. Complexities of International law: This is often considered the most significant barrier to cloud eDiscovery, as data may be subject to conflicting jurisdictional laws, privacy regulations, and data sovereignty requirements.
---
1. NISTIR 8006, NIST Cloud Computing Forensic Science Challenges.
Page 11, Section 4.2, "Legal Challenges": This section states, "The legal challenges are perhaps the most daunting of the three categories of challenges [Architectural, Legal, Technical]..." This supports the reasoning that legal issues (Options A, D) and their consequences (Option B) are the most difficult, making the technical process (Option C) comparatively less so.
2. Al-Nemrat, A., & Al-Aqrabi, H. (2021). A Comprehensive Survey on Cloud Forensics Challenges and Future Research Directions. IEEE Access, 9, 20636-20659.
Page 20641, Section III-A, "Legal Challenges": The paper emphasizes that "Legal issues are the most significant challenges in cloud forensics," highlighting problems with jurisdiction, SLAs, and data ownership. This reinforces that legal complexities are a greater challenge than the technical analysis process. (DOI: https://doi.org/10.1109/ACCESS.2021.3051972)
3. Ruan, K. (2013). Cybercrime and Cloud Forensics: Applications for Investigation. IGI Global.
Chapter 5, "Cloud Forensics": This chapter details the profound impact of data distribution and multi-jurisdictional issues on investigations. It notes that "the biggest challenge for law enforcement is the issue of jurisdiction," which directly relates to the complexities of international law and decentralized storage being major obstacles.
Question 20
What is the Cloud Security Alliance Cloud Controls Matrix (CCM)?
A. The CCM is a comprehensive framework covering many security aspects, not just the software development life cycle.
B. The security domains within the CCM are presented as separate, distinct categories rather than being organized in a formal hierarchy.
D. The CCM is a control framework and a guidance tool used to meet compliance, not a set of legally binding regulatory requirements.
1. Cloud Security Alliance. (2021). Cloud Controls Matrix v4.0. "The CSA Cloud Controls Matrix (CCM) is a cybersecurity control framework for cloud computing... The CCM is composed of 197 control objectives that are structured in 17 domains covering all key aspects of cloud technology." (Introduction, p. 4). This source confirms the CCM is an inventory of controls structured into separate domains.
2. Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing (NIST Special Publication 800-145). National Institute of Standards and Technology. While not defining the CCM, this foundational document establishes the context in which frameworks like the CCM operate. The CCM's structure as a set of controls aligns with the need to secure the service models defined by NIST, which requires a broad, domain-based approach rather than a single-focus or regulatory one.
3. Hashemi, S. Z., & Sharifi, A. (2019). A comprehensive review of security and privacy challenges in the cloud computing. Journal of Supercomputing, 75, 8194–8221. This academic review discusses various cloud security frameworks, describing the CSA CCM as a "set of controls to manage cloud-specific security risks" and noting its structure is based on "security domains," which supports the concept of separate, organized control groups. (Section 4.1). https://doi.org/10.1007/s11227-019-02985-x
Question 21
A. KPI (Key Performance Indicator) measures performance against strategic goals, not directly the level of risk exposure.
C. SOC (Security Operations Center) is an organizational function responsible for security monitoring, not a metric.
D. SLA (Service Level Agreement) is a contractual document defining service levels, not a risk management metric itself.
---
1. ISACA. (2018). COBIT 2019 Framework: Governance and Management Objectives. The management objective APO12, "Manage Risk," explicitly details the practice of collecting data and monitoring Key Risk Indicators (KRIs) to identify and report on risk in a timely manner (Practice APO12.04).
2. National Institute of Standards and Technology (NIST). (n.d.). Glossary. Computer Security Resource Center. Retrieved from https://csrc.nist.gov/glossary. This resource provides formal definitions for Service Level Agreement (SLA) as a "formal agreement" and Security Operations Center (SOC) as a "centralized function," clarifying that they are not metrics.
3. Olson, D. L., & Wu, D. D. (2017). Enterprise Risk Management (3rd ed.). Springer. Chapter 3, "Risk Identification," discusses the role of KRIs as tools for identifying and monitoring emerging risks within an enterprise risk management framework.
4. Fraser, J., & Simkins, B. J. (2016). The challenges of and solutions for implementing enterprise risk management. Business Horizons, 59(6), 689-698. https://doi.org/10.1016/j.bushor.2016.06.007. The article discusses the implementation of ERM and highlights the importance of KRIs as "measures that signal a change in the level of risk" (p. 693).
Question 22
A. Audit rights of subcontractors are a due diligence control mechanism, often implemented contractually, not a direct, foundational component of the PII regulation itself.
B. "Items that should be implemented" is overly vague and non-specific; it does not describe a distinct, key component of a regulatory framework for PII.
C. PCI DSS is a specific information security standard for protecting payment card data, not a general component found across all PII regulations.
1. National Institute of Standards and Technology (NIST) Special Publication 800-122, Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), Section 4.3, "Breach Notification Policies and Procedures," states, "Federal agencies are required to have and implement breach notification policies and procedures consistent with OMB Memorandum (M) 07-16." This establishes breach notification as a core requirement in a major regulatory context.
2. Regulation (EU) 2016/679 (General Data Protection Regulation - GDPR), Article 33, "Notification of a personal data breach to the supervisory authority," and Article 34, "Communication of a personal data breach to the data subject," explicitly mandate breach reporting as a legal obligation for controllers of PII.
3. U.S. Department of Health & Human Services, The HIPAA Breach Notification Rule, 45 CFR §§ 164.400-414. This rule requires HIPAA-covered entities and their business associates to provide notification following a breach of unsecured protected health information, demonstrating its centrality to this specific PII regulation.
4. Goel, S., & Shawky, D. (2020). Data Breach Harms. University of Illinois Journal of Law, Technology & Policy, 2020(1), 1-42. This academic article discusses the legal landscape of data breaches, stating, "In the United States, all fifty states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands have enacted data breach notification statutes that require private or governmental entities to notify individuals of security breaches of information involving personally identifiable information." (p. 4). This highlights the ubiquity of breach notification as a key regulatory component.
Question 23
Which of the following components are part of what a CCSP should review when looking at contracting with a cloud service provider?
A. Redundant uplink grafts: This is an overly specific technical detail. A CCSP reviews service level agreements (SLAs) for network availability, not the provider's proprietary hardware implementation choices.
C. The physical layout of the datacenter: This information is typically confidential and not shared with customers. Assurance of physical security is obtained through independent, third-party audit reports like SOC 2 or ISO 27001 certifications.
1. Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) v4.0.5:
For Answer B: Control ID HRS-02 (Human Resources Screening) requires verification of an individual's background at the time of application. This is a standard control that a customer should review as part of their due diligence on a provider's personnel security.
For Answer D: Control ID IVS-02 (Third-Party Service Provider (Sub-processor) Relationship Management) mandates a process to manage and monitor third-party service providers to ensure information security policies are followed. This directly addresses the need to review the use of subcontractors.
Source: Cloud Security Alliance. (2021). Cloud Controls Matrix v4.0.5. Sections: HRS (Human Resources Security) and IVS (Interoperability & Portability).
2. NIST Special Publication 800-144, Guidelines on Security and Privacy in Public Cloud Computing:
For Answer D: Section 6.3, "Managing Supply Chain Risks," explicitly discusses that a cloud provider's service may depend on other providers. It states that the "cloud consumer should be aware of the supply chain and the associated risks" and that these dependencies should be understood as part of the contracting and due diligence process.
Source: Jansen, W., & Grance, T. (2011). Guidelines on Security and Privacy in Public Cloud Computing (NIST SP 800-144). Page 26. https://doi.org/10.6028/NIST.SP.800-144
3. ISO/IEC 27017:2015, Code of practice for information security controls based on ISO/IEC 27002 for cloud services:
For Answer D: Clause 15.1, "Information security in supplier relationships," provides guidance on managing security risks associated with the ICT supply chain, which directly applies to a CSP's use of subcontractors. The customer has a responsibility to understand and manage these extended risks.
Question 24
A. Transferring: This is a valid risk management strategy where the financial impact of a risk is shifted to another entity, commonly through insurance or outsourcing contracts.
B. Accepting: This is a valid risk management strategy where an organization makes a conscious and documented decision to retain the risk without implementing further controls.
C. Mitigating: This is a primary risk management strategy that involves implementing safeguards and countermeasures to reduce the likelihood or impact of a risk to an acceptable level.
1. National Institute of Standards and Technology (NIST). (2018). Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (NIST Special Publication 800-37, Rev. 2). Section 2.5, "Risk Response," Page 17. The document states, "The appropriate risk response is determined based on the results of the risk assessment... Common risk responses include... accept, avoid, mitigate, share, or transfer."
2. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Rev. 1). Section 3.3, "Risk Response," Page 26. This guide specifies the four courses of action for risk response: "(i) risk acceptance; (ii) risk avoidance; (iii) risk mitigation; or (iv) risk sharing or transfer."
3. International Organization for Standardization (ISO). (2022). ISO/IEC 27005:2022 Information security, cybersecurity and privacy protection — Guidance on managing information security risks. Clause 8.5, "Information security risk treatment," lists the options for risk treatment as retaining (acceptance), avoiding, modifying (mitigation), and sharing (transfer).
4. Purdue University. (n.d.). CS 42600: Computer Security, Lecture 20 - Risk Management. Slide 13, "Risk Response Strategies." The lecture materials list the four primary risk response strategies as Avoidance, Transference, Mitigation, and Acceptance.
Question 25
A. eDiscovery: This is a legal process for which cloud forensic techniques are frequently used to identify, collect, and produce electronically stored information (ESI) from cloud environments.
B. Chain of custody: This is a fundamental and mandatory requirement in all forensic investigations, including cloud forensics, to document the handling of evidence and ensure its integrity and admissibility.
C. Analysis: This is a core and distinct phase in the digital forensics lifecycle where investigators interpret the collected data to draw conclusions and find evidence.
1. National Institute of Standards and Technology (NIST). (2006). Special Publication 800-86, Guide to Integrating Forensic Techniques into Incident Response. Section 3.1, "The Forensic Process," outlines the four main phases: Collection, Examination, Analysis, and Reporting. Section 3.2.2, "Documenting the Investigation," discusses the importance of the chain of custody.
2. Ruan, K. (Ed.). (2013). Cybercrime and Cloud Forensics: Applications for Investigation. IGI Global. Chapter 1, "An Overview of Cloud Forensics," discusses the standard forensic process and highlights the challenges of maintaining a chain of custody (p. 12) and the relationship with eDiscovery (p. 15) in the cloud. The term "plausibility" is not used to describe a formal part of the process.
3. Dykstra, J., & Sherman, A. T. (2012). Acquiring forensic evidence from infrastructure-as-a-service cloud computing: Exploring and evaluating tools, trust, and techniques. In Proceedings of the 2012 ACM workshop on Cloud computing security workshop (pp. 93-104). This paper discusses the forensic process in IaaS, including analysis and the complexities of the chain of custody, demonstrating their direct association with the field. (DOI: https://doi.org/10.1145/2381896.2381912)
Question 26
Which is the lowest level of the CSA STAR program?
A. Attestation: This is a Level 2 assurance activity, which involves a third-party audit (e.g., SOC 2) and is therefore a higher level of assurance than a self-assessment.
C. Hybridization: This term is not used to define a level of assurance within the official CSA STAR program framework.
D. Continuous monitoring: This is Level 3, the highest level of assurance in the STAR program, which requires automated, continuous reporting on security controls.
1. Cloud Security Alliance. (n.d.). CSA Security, Trust, Assurance and Risk (STAR) Program. Retrieved from https://cloudsecurityalliance.org/star/. The "Three Levels of Assurance" section explicitly defines Level 1 as "Self-Assessment," Level 2 as "Third-Party Audit," and Level 3 as "Continuous Monitoring."
2. Al-Issa, Y., Ottom, M. A., & Tamimi, A. A. (2019). A Comprehensive Study of Security and Privacy Frameworks for the Cloud Computing. IEEE Access, 7, 114476-114490. https://doi.org/10.1109/ACCESS.2019.2935584. In Section IV-A, "Cloud Security Alliance (CSA)," the paper describes the STAR program's three levels, identifying "Level 1: CSA STAR Self-Assessment" as the initial tier.
3. Carnegie Mellon University, Software Engineering Institute. (2019). Cloud Service Level Agreement (SLA) Metamodel and Lexicon (Report No. CMU/SEI-2019-TR-005). Retrieved from https://resources.sei.cmu.edu/assetfiles/TechnicalReport/2019005001545379.pdf. On page 11, the report references the CSA STAR program, noting that the "lowest level of assurance is a self-assessment."
Question 27
A. An SOC 3 report is too general to demonstrate specific HIPAA compliance, which requires a more detailed mapping of controls, often addressed in a SOC 2 + HIPAA report.
B. No audit or attestation engagement provides absolute assurance due to inherent limitations; they provide a high level of, but not absolute, reasonable assurance.
D. Compliance with the Payment Card Industry Data Security Standard (PCI/DSS) is demonstrated through its own specific attestation framework, such as a Report on Compliance (ROC).
1. American Institute of Certified Public Accountants (AICPA). "SOC 3® — SOC for Service Organizations: Trust Services Criteria." The AICPA, the body that created the standard, explicitly states, "A SOC 3 report is a general use report, and is a great marketing tool... Because they are general use reports, SOC 3® reports can be freely distributed or posted on a website with a seal." This directly supports the concept of a "seal of approval" for marketing and public trust. (Retrieved from the official AICPA website on SOC reports).
2. American Institute of Certified Public Accountants (AICPA). Statement on Standards for Attestation Engagements (SSAE) No. 18, Attestation Standards: Clarification and Recodification. Section AT-C 105, "Concepts Common to All Attestation Engagements." This standard defines a "general-use report" (like SOC 3) as one that is not restricted to specified parties, contrasting it with a "restricted-use report" (like SOC 2). This distinction underpins the public-facing, marketing purpose of the SOC 3 report.
3. Tysiac, K. (2017). "SOC 2® reports are gaining traction." Journal of Accountancy. In this publication by the AICPA, the article explains the different uses of SOC reports, clarifying that "A SOC 3® report is a general-use report that can be distributed freely, for instance, as a marketing tool on a service organization's website." This reinforces its primary purpose as a public-facing attestation.
Question 28
A. Financial services are incorrect because this industry is heavily regulated by laws like the Gramm-Leach-Bliley Act (GLBA) and the Sarbanes-Oxley Act (SOX) to protect financial data.
B. Healthcare is incorrect as it is governed by strict regulations such as the Health Insurance Portability and Accountability Act (HIPAA) to protect sensitive patient health information.
C. Public companies are incorrect because they must adhere to rigorous financial reporting and corporate governance laws, most notably the Sarbanes-Oxley Act (SOX).
1. U.S. Department of Health & Human Services (HHS). The HIPAA Privacy Rule. The rule establishes national standards to protect individuals' medical records and other individually identifiable health information. This document exemplifies the high level of regulation in healthcare. (Available at hhs.gov, specific section: "Summary of the HIPAA Privacy Rule").
2. U.S. Securities and Exchange Commission (SEC). The Laws That Govern the Securities Industry. This resource outlines regulations for public companies, including the Sarbanes-Oxley Act of 2002, which introduced major changes to the regulation of corporate governance and financial practice. (Available at sec.gov, specific section on the Sarbanes-Oxley Act).
3. Federal Trade Commission (FTC). Gramm-Leach-Bliley Act. This official resource details the requirements for financial institutions to explain their information-sharing practices to their customers and to safeguard sensitive data, demonstrating the regulatory burden in financial services. (Available at ftc.gov, specific section: "FTC's Privacy Rule and the GLB Act").
4. Jang-Jaccard, J., & Nepal, S. (2014). A survey of security and privacy issues in Cloud Computing. Journal of Network and Computer Applications, 40, 12-29. https://doi.org/10.1016/j.jnca.2013.11.015. This academic paper discusses compliance challenges in the cloud, frequently citing healthcare (HIPAA) and finance (PCI DSS, SOX) as examples of industries with stringent regulatory requirements for data handling (Section 4.2, "Data security and privacy issues").
Question 29
Which of the following methods of addressing risk is most associated with insurance?
A. Mitigation: This involves implementing controls to reduce the likelihood or impact of a risk, not shifting the financial burden to another entity.
C. Avoidance: This strategy involves ceasing or not starting an activity that would introduce the risk, which is fundamentally different from managing an existing risk via insurance.
D. Acceptance: This means the organization consciously decides to bear the full financial impact of a risk, which is the opposite of transferring it.
1. National Institute of Standards and Technology (NIST). (2018). Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (NIST Special Publication 800-37, Revision 2). Section 2.5, Step 5: Implement, Page 21. The document lists risk responses, including "sharing/transferring risk to other organizations."
2. International Organization for Standardization (ISO). (2018). ISO/IEC 27005:2018 Information technology — Security techniques — Information security risk management. Section 8.4.2, "Risk treatment options," describes "risk sharing" as a treatment option, which includes transferring risk to other parties through mechanisms like insurance policies.
3. University of California, Berkeley, School of Information. (2020). INFO 253: Cybersecurity Risk Management, Lecture 2: Risk Management Frameworks. Slide 21, "Risk Treatment," explicitly defines Risk Transfer as "Shifting risk to another party, e.g., by purchasing insurance."
Question 30
A. ISO 27001 is an international standard for information security management. It is a framework of best practices, not a law, although it can be used to help meet legal requirements.
B. PCI DSS is a contractual requirement and an industry standard for protecting payment card data. While non-compliance has severe penalties, it is not a government-enacted law.
C. NIST 800-53r4 is a catalog of security and privacy controls for U.S. federal information systems. It is a government standard/guideline, not a law applicable to all entities.
1. National Institute of Standards and Technology (NIST). (2018). Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (NIST Special Publication 800-37, Revision 2). Section 2.3, "Relationship Among Risk Management Framework, System Development Life Cycle, and Cybersecurity Framework," page 11. The document explains that the Risk Management Framework (RMF) process is driven by inputs from various sources, including "applicable U.S. laws, Executive Orders, directives, regulations, policies, [and] standards," which necessitates the implementation of controls to ensure compliance.
2. Jansen, W., & Grance, T. (2011). Guidelines on Security and Privacy in Public Cloud Computing (NIST Special Publication 800-144). Section 5.2.1, "Compliance," page 13. This section discusses the importance of "adhering to the external laws and regulations that a governmental entity is subject to," highlighting that compliance with legal and regulatory requirements is a key security and privacy concern in the cloud, which is addressed through controls.
3. Al-Nemrat, A., Al-Aqrabi, H., & Al-Zyoud, A. (2014). A survey on legal issues in cloud computing. In 2014 6th International Conference on Computer Science and Information Technology (CSIT) (pp. 111-117). IEEE. Section III, "Legal Issues in Cloud Computing," discusses how legal frameworks, such as data protection laws, impose requirements that must be met with specific controls by cloud providers and customers. (https://doi.org/10.1109/CSIT.2014.6805989)
Question 31
B. This describes a Cloud Provider (CSP), the entity that owns and operates the cloud infrastructure and makes services available.
C. This is too general. While a cloud carrier transports data, this option lacks the specific context of connecting cloud consumers and providers.
D. This describes an operational function of the Cloud Provider (CSP), who is responsible for service availability and maintenance.
1. National Institute of Standards and Technology (NIST). (September 2011). NIST Cloud Computing Reference Architecture (NIST Special Publication 500-292). Section 3.1, "Cloud Computing Actors," Page 9. The document states, "A cloud carrier is an intermediary that provides connectivity and transport of cloud services from Cloud Providers to Cloud Consumers."
2. Badger, L., Grance, T., Patt-Corner, R., & Voas, J. (July 2011). Cloud Computing Synopsis and Recommendations (NIST Special Publication 800-146). Section 3.1, "Actors," Page 5. This publication reinforces the roles, defining the cloud carrier as the provider of the "wire/transport" for data.
3. Carnegie Mellon University, School of Computer Science. (Fall 2020). 15-319/619 Cloud Computing Courseware, Lecture 2: Cloud Stack. Slide 13, "NIST Cloud Computing Reference Model." The course material explicitly presents and explains the NIST-defined actors, including the Cloud Carrier as the network transport provider.
Question 32
Gap analysis is performed for what reason?
Show Answer
B. This describes a financial audit, which is distinct from a security or compliance gap analysis focused on controls and frameworks.
C. Providing assurance is an outcome of successfully closing gaps and achieving compliance, not the direct purpose of the analysis itself.
D. This describes a control audit or assessment, which tests the operational effectiveness of existing controls, not the identification of missing ones.
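As a rough illustration of what a gap analysis produces, the Python sketch below compares the controls an organization currently has in place against the controls a chosen benchmark requires. The control identifiers are hypothetical placeholders, not taken from any specific framework.

```python
# Minimal gap-analysis sketch. Control IDs are hypothetical placeholders.
required_by_benchmark = {"AC-1", "AC-2", "CP-9", "IR-4", "SC-13"}   # target state (framework requirements)
currently_implemented = {"AC-1", "IR-4", "SC-13"}                    # current state (assessment results)

# The gap is whatever the benchmark requires that is not yet in place.
gap = sorted(required_by_benchmark - currently_implemented)

print(f"Required by benchmark : {len(required_by_benchmark)} controls")
print(f"Currently implemented : {len(currently_implemented)} controls")
print(f"Gap to remediate      : {gap}")
```

The point of the sketch is simply that a gap analysis benchmarks the current state against a target state and reports what is missing; closing those gaps is a separate, subsequent activity.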
1. ISACA. (2018). COBIT 2019 Implementation Guide. In Phase 4, "What needs to be done?", the guide explicitly describes conducting a gap analysis to identify the differences between the current and target state, which is then used to define projects to close those gaps. This process is a form of benchmarking against the COBIT framework.
2. National Institute of Standards and Technology (NIST). (2011). Special Publication 800-39, Managing Information Security Risk. The risk management process described, particularly the "Assess" step (Section 2.2, Figure 2-1), involves determining the current state of controls, which is then compared against the requirements established in the "Frame" step. This comparison is functionally a gap analysis used to benchmark against risk tolerance.
3. von Solms, S. H. (2005). Information Security Governance: COBIT. In Information Security: A Comprehensive Guide (pp. 1-13). Wiley. This academic text explains that a common starting point for implementing a framework like COBIT or ISO 17799 is a gap analysis to benchmark the organization's current security posture against the standard's requirements.
Question 33
Which of the following frameworks focuses specifically on design, implementation, and management?
Show Answer
B. ISO 27017: This is a code of practice providing implementation guidance for information security controls specific to cloud computing environments, not a general management framework.
C. NIST 800-92: This is a specific technical guide focused on computer security log management, detailing how to design and implement a logging infrastructure, not a broad management framework.
D. HIPAA: This is a United States law (Health Insurance Portability and Accountability Act) that mandates security and privacy rules for protected health information (PHI), not a framework.
1. International Organization for Standardization. (2009). ISO 31000:2009 Risk management – Principles and guidelines. "Clause 4: Framework" details the components for the design, implementation, monitoring and review, and continual improvement of the framework for managing risk. The entire clause is dedicated to this lifecycle.
2. International Organization for Standardization. (2015). ISO/IEC 27017:2015 Information technology – Security techniques – Code of practice for information security controls based on ISO/IEC 27002 for cloud services. The scope statement (Clause 1) specifies that it "gives guidelines for information security controls applicable to the provision and use of cloud services."
3. Kent, K., & Souppaya, M. (2006). Guide to Computer Security Log Management (NIST Special Publication 800-92). National Institute of Standards and Technology. The abstract states, "This publication provides guidance on developing and implementing a log management infrastructure."
4. U.S. Department of Health & Human Services. (n.d.). Health Information Privacy. HHS.gov. The official site describes HIPAA as a federal law that established national standards to protect individuals' medical records and other individually identifiable health information. This defines it as a regulation, not a management framework.
Question 34
Show Answer
A. SSAE 16: This was the attestation standard that established the SOC 1 report, but it is not the report itself. It was also superseded by SSAE 18 in 2017.
B. SOC 2: This report focuses on controls related to the Trust Services Criteria (Security, Availability, Processing Integrity, Confidentiality, and Privacy), which pertain to operational and security matters, not financial reporting.
D. SOC 3: This is a general-use, high-level summary of a SOC 2 report. It provides less detail and is not suitable for the specific needs of a financial audit.
1. American Institute of Certified Public Accountants (AICPA). (2022). SOC 1® – SOC for Service Organizations: ICFR. "These reports are specifically intended to meet the needs of entities that use service organizations (user entities) and the CPAs that audit the user entities' financial statements (user auditors), in evaluating the effect of the controls at the service organization on the user entities' financial statements." Retrieved from the official AICPA website on SOC for Service Organizations.
2. Carnegie Mellon University, Information Security Office. (n.d.). SOC Reports. "A SOC 1 report is designed to address internal controls over financial reporting while a SOC 2 report addresses a service organization's controls that are relevant to their operations and compliance." Retrieved from the CMU Information Security Office website.
3. Lin, C., & Wang, L. (2017). The Impact of Service Organization Control Reports on the User Auditor's Evaluation of the User Entity's Internal Controls. Auditing: A Journal of Practice & Theory, 36(4), 119-137. Section: "SOC 1 Reports", p. 121. "A SOC 1 report is prepared in accordance with AT Section 801 and focuses on a service organization's controls that are relevant to a user entity's internal control over financial reporting (ICFR)." https://doi.org/10.2308/ajpt-51749
Question 35
Show Answer
A. COBIT: Incorrect. COBIT is a widely adopted framework for IT governance and management that includes a dedicated process for managing risk (e.g., APO12 Manage Risk in COBIT 5).
C. ISO 31000:2009: Incorrect. This is the international standard for risk management, providing principles and guidelines. Although superseded by the 2018 version, it is a valid risk management framework.
D. NIST SP 800-37: Incorrect. This National Institute of Standards and Technology Special Publication explicitly defines the Risk Management Framework (RMF) for U.S. federal information systems and organizations.
1. National Institute of Standards and Technology (NIST). (2018). Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (Special Publication 800-37, Rev. 2). U.S. Department of Commerce. Page 1, Section 1.1, states, "This publication describes the Risk Management Framework (RMF)..."
2. International Organization for Standardization. (2018). ISO 31000:2018(en) Risk management – Guidelines. Foreword. "ISO 31000:2018 provides guidelines on managing risk faced by organizations."
3. ISACA. (2012). COBIT 5: A Business Framework for the Governance and Management of Enterprise IT. Page 39, Figure 16, lists the "APO12 Manage risk" process as a key part of the "Align, Plan and Organise" domain.
4. Fenz, S., & Ekelhart, A. (2011). Formalizing Information Security Knowledge. In Proceedings of the 4th International Conference on Theory and Practice of Electronic Governance (ICEGOV '10). Association for Computing Machinery, New York, NY, USA, 183โ192. https://doi.org/10.1145/1930321.1930356. (This academic paper discusses and compares various standards, including ISO 27005, NIST SP 800-30, and COBIT's risk-related processes, establishing them as valid frameworks).