Free CISSP Practice Test – 2025 Updated

Get ready for your CISSP exam with our free, accurate, and 2025-updated questions.

Cert Empire is committed to providing the best and latest exam questions for those preparing for the ISC2 CISSP exam. To assist students, we’ve made some of our CISSP exam prep resources free. You can get plenty of practice with our Free CISSP Practice Test.

Question 1

In a multi-tenant cloud environment, what approach will secure logical access to assets?
Options
A: Hybrid cloud
B: Transparency/Auditability of administrative access
C: Controlled configuration management (CM)
D: Virtual private cloud (VPC)
Correct Answer:
Virtual private cloud (VPC)
Explanation
A Virtual Private Cloud (VPC) is a fundamental security approach for achieving logical isolation in a multi-tenant cloud environment. It allows an organization to provision a logically segregated section of a public cloud, creating a private network space. Within this VPC, the organization can define its own IP address ranges, subnets, route tables, and network gateways. This effectively creates a virtual network boundary that isolates the tenant's assets from those of other tenants, even though they may reside on the same physical hardware. This logical segregation is the primary method for securing logical access and preventing cross-tenant data exposure in an Infrastructure as a Service (IaaS) model.
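As a concrete illustration, the short sketch below provisions a tenant-isolated VPC and a private subnet using the AWS SDK for Python (boto3); the region, CIDR blocks, and account context are assumptions made for this example only, not recommendations.

```python
# A minimal sketch, assuming valid AWS credentials; the region and
# CIDR ranges are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a logically isolated address space for this tenant.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve a private subnet out of the tenant-defined range.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

print(f"VPC {vpc_id} created with subnet {subnet['Subnet']['SubnetId']}")
```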
References

1. Cloud Security Alliance. (2017). Security Guidance for Critical Areas of Focus in Cloud Computing v4.0. Domain 7: Infrastructure Security, Section 7.2, p. 89. The document states, "The virtual network provides logical isolation... This allows customers to segment their resources, not just from other customers, but also from their own resources."

2. National Institute of Standards and Technology. (2011). NIST Special Publication 500-292: NIST Cloud Computing Reference Architecture. Section 5.3.1.2, "Resource Pooling & Multi-tenancy," p. 17. This section discusses how multi-tenancy requires logical isolation of shared resources, which is the problem that VPCs are designed to solve.

3. Amazon Web Services. (2023). What is Amazon VPC?. AWS Documentation. The official documentation defines a VPC as a service that "lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define."

4. Armbrust, M., et al. (2009). Above the Clouds: A Berkeley View of Cloud Computing. University of California, Berkeley, Technical Report No. UCB/EECS-2009-28. Section 4, "Top 10 Obstacles and Opportunities for Cloud Computing," p. 8. The report discusses the obstacle of "Data Confidentiality and Auditability," for which network and machine-level isolation (as provided by a VPC) is a key solution.

Question 2

A company hired an external vendor to perform a penetration test of a new payroll system. The company's internal test team had already performed an in-depth application and security test of the system and determined that it met security requirements. However, the external vendor uncovered significant security weaknesses where sensitive personal data was being sent unencrypted to the tax processing systems. What is the MOST likely cause of the security issues?
Options
A: Failure to perform interface testing
B: Failure to perform negative testing
C: Inadequate performance testing
D: Inadequate application level testing
Correct Answer:
Failure to perform interface testing
Explanation
The vulnerability was discovered in the data transmission between the new payroll system and the external tax processing system. This points to a failure in testing the communication link, or interface, between these two distinct systems. Interface testing is specifically designed to verify that data is exchanged correctly and securely between different software components or systems. The internal team likely focused on the application's internal functions and security, but overlooked the security of the data in transit to an external entity, which is the primary goal of interface testing.
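To make the distinction concrete, here is a minimal, hypothetical interface test in Python: it verifies that the link to the tax-processing endpoint negotiates TLS at all. The hostname and port are invented for illustration and are not from the scenario.

```python
# A hedged sketch of an interface security test; TAX_HOST and TAX_PORT
# are hypothetical stand-ins for the tax-processing interface.
import socket
import ssl

TAX_HOST = "tax-processor.example.com"
TAX_PORT = 443

def test_tax_interface_negotiates_tls():
    context = ssl.create_default_context()
    with socket.create_connection((TAX_HOST, TAX_PORT), timeout=5) as sock:
        # wrap_socket raises ssl.SSLError if the peer cannot complete a
        # TLS handshake, failing the test for an unencrypted interface.
        with context.wrap_socket(sock, server_hostname=TAX_HOST) as tls:
            assert tls.version() is not None
```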
References

1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-115, Technical Guide to Information Security Testing and Assessment.

Reference: Section 3.5, "Application Security Testing," discusses the need to test all components of an application, including its interfaces with other systems. It notes that security testing should "verify that the application properly enforces security for both valid and invalid operations" and that this includes how it communicates with other services. The described scenario is a failure in this specific area.

2. Saltzer, J. H., & Schroeder, M. D. (1975). The Protection of Information in Computer Systems. Proceedings of the IEEE, 63(9), 1278–1308.

Reference: Section I.A.3, "Principle of Least Privilege," and Section I.A.5, "Principle of Complete Mediation." While not a direct definition of interface testing, these foundational security principles, taught in university curricula, imply that every access and data exchange between systems (an interface) must be validated. The failure to encrypt data at the interface violates the principle of protecting data as it crosses trust boundaries. (DOI: https://doi.org/10.1109/PROC.1975.9939)

3. University of Toronto, Department of Computer Science. (2018). CSC301: Introduction to Software Engineering, Lecture 11 - Software Testing.

Reference: Slide 21, "Integration Testing." The lecture material defines integration testing as testing the interfaces between components. It distinguishes between "Big Bang" and incremental approaches. This academic source establishes that testing interfaces between system components is a distinct and critical phase of software testing. The scenario highlights a failure in this specific phase.

Question 3

Which of the following is the MOST effective method of detecting vulnerabilities in web-based applications early in the secure Software Development Life Cycle (SDLC)?
Options
A: Web application vulnerability scanning
B: Application fuzzing
C: Code review
D: Penetration testing
Correct Answer:
Code review
Explanation
Code review, which includes both manual inspection and automated Static Application Security Testing (SAST), is the most effective method for detecting vulnerabilities early in the SDLC. It is performed during the development/implementation phase directly on the source code before the application is compiled or deployed. This "shift left" approach allows developers to identify and remediate security flaws, such as injection vulnerabilities or improper error handling, at the earliest and least expensive point in the lifecycle. The other options are dynamic testing methods that require a running application, placing them later in the SDLC.
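For a flavor of what early static analysis catches, the sketch below walks a Python syntax tree and flags calls to eval(), the kind of well-defined insecure pattern SAST tools detect long before a running build exists. It is a toy, not a substitute for a real SAST product.

```python
# A minimal SAST-style check, assuming Python sources; real tools
# apply thousands of such rules plus data-flow analysis.
import ast

SOURCE = """
user_input = input()
result = eval(user_input)  # flaw: executes untrusted input
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and getattr(node.func, "id", "") == "eval":
        print(f"line {node.lineno}: eval() on untrusted input is unsafe")
```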
References

1. ISC2 CISSP Official Study Guide (9th ed.). (2021). Chapter 21: Secure Software Development. pp. 898-899. The text explicitly places code review and static code analysis within the "Software Development and Coding" phase, emphasizing its role in early detection before testing begins.

2. NIST Special Publication 800-218. (Feb 2022). Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities. Section 4, Practice PW.5. This practice, "Review All Code," states, "The software producer reviews all code to identify vulnerabilities and verify compliance with security requirements... This can be accomplished through manual and/or automated means." This is a core practice applied to the code artifact itself.

3. OWASP Foundation. (2021). OWASP Software Assurance Maturity Model (SAMM) v2.0. Design - Security Testing, Stream B: Application Testing. The model shows Static Application Security Testing (SAST), an automated form of code review, as a foundational activity that can be integrated directly into the CI/CD pipeline during the build process, far earlier than dynamic testing or penetration testing.

4. Kissel, R., Stine, K., et al. (Oct 2008). NIST Special Publication 800-115: Technical Guide to Information Security Testing and Assessment. Section 5-2. The document distinguishes between code review (a static analysis technique) and security testing techniques like penetration testing and vulnerability scanning, which require an operational system.

Question 4

A malicious user gains access to unprotected directories on a web server. Which of the following is MOST likely the cause for this information disclosure?
Options
A: Security misconfiguration
B: Cross-site request forgery (CSRF)
C: Structured Query Language injection (SQLi)
D: Broken authentication management
Correct Answer:
Security misconfiguration
Explanation
Security misconfiguration is the most likely cause. This vulnerability category encompasses failures to implement all appropriate security controls for a server or web application, or the incorrect configuration of those controls. An "unprotected directory" is a classic example, where the web server is misconfigured to allow directory listing or has improper file system permissions, leading to unauthorized access and information disclosure. This is a direct failure in securing the server's configuration, rather than a flaw in application logic or authentication mechanisms.
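A quick way to demonstrate the flaw is to probe a directory URL and look for an auto-generated index page, as in this hedged sketch; the URL is hypothetical, and the "Index of" marker matches common default listings (e.g., Apache), not every server.

```python
# A minimal probe for exposed directory listings; the target URL is
# hypothetical and the check is heuristic.
import urllib.error
import urllib.request

URL = "https://www.example.com/uploads/"

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode(errors="replace")
        if resp.status == 200 and "Index of" in body:
            print("directory listing enabled: likely security misconfiguration")
except urllib.error.HTTPError as err:
    print(f"server returned {err.code}; no listing exposed at this path")
```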
References

1. OWASP Foundation. (2021). OWASP Top 10:2021. A05:2021-Security Misconfiguration. The description explicitly includes "directory listing is not disabled on the server" as a common example of this vulnerability. (Reference: owasp.org/Top10/A052021-SecurityMisconfiguration/)

2. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (NIST Special Publication 800-53, Revision 5). Control CM-7 "Least Functionality" requires that the organization "configures the information system to provide only essential capabilities," which includes disabling functions like directory listing. A failure to do so is a configuration management failure. (Page 138, Control CM-7).

3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Pearson Education. Chapter 8, "Web Security," discusses how improper server configuration is a primary source of web vulnerabilities, distinct from injection attacks or authentication flaws. (Section 8.3, "Web Server Vulnerabilities").

Question 5

Which of the following security objectives for industrial control systems (ICS) can be adapted to securing any Internet of Things (IoT) system?
Options
A: Prevent unauthorized modification of data.
B: Restore the system after an incident.
C: Detect security events and incidents.
D: Protect individual components from exploitation.
Correct Answer:
Protect individual components from exploitation
Explanation
While all listed options are valid security objectives for both Industrial Control Systems (ICS) and Internet of Things (IoT) systems, protecting individual components is the most foundational and universally adaptable principle. The nature of IoT involves a massive number of distributed, often physically accessible, and resource-constrained devices. The security of the entire IoT ecosystem fundamentally relies on the security of each individual component (the "thing"). If a component is exploited, higher-level objectives like data integrity, system restoration, and event detection are compromised. This principle is directly inherited from ICS security, where protecting individual controllers (e.g., PLCs, RTUs) is a critical objective.
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-82 Rev. 2, Guide to Industrial Control Systems (ICS) Security. Section 3.2, "ICS Security Program Development," outlines recommended security controls. Control family System and Information Integrity (SI), specifically SI-7 "Software, Firmware, and Information Integrity," and the general principle of defense-in-depth emphasize protecting individual system components from unauthorized changes.

2. National Institute of Standards and Technology (NIST) Internal Report (NISTIR) 8259A, IoT Device Cybersecurity Capability Core Baseline. This document establishes a baseline of security capabilities for IoT devices. The capabilities listed, such as Device Identification (Section 3.1), Device Configuration (Section 3.2), and Software Update (Section 3.5), are all focused on securing and managing the individual component to protect it from exploitation.

3. Al-Garadi, M. A., Mohamed, A., Al-Ali, A. K., Du, X., Ali, I., & Guizani, M. (2020). A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security. IEEE Communications Surveys & Tutorials, 22(3), 1646-1685. DOI: 10.1109/COMST.2020.2988293. This survey discusses the convergence of security challenges in ICS and IoT, noting that "the first line of defense for IoT systems is to secure the IoT devices themselves" (Section II.A). This highlights the foundational importance of component-level protection.

Question 6

Wi-Fi Protected Access 2 (WPA2) provides users with a higher level of assurance that their data will remain protected by using which protocol?
Options
A: Secure Shell (SSH)
B: Internet Protocol Security (IPsec)
C: Secure Sockets Layer (SSL)
D: Extensible Authentication Protocol (EAP)
Correct Answer:
Extensible Authentication Protocol (EAP)
Explanation
Wi-Fi Protected Access 2 (WPA2), in its more secure Enterprise mode, implements the IEEE 802.1X standard for port-based network access control. This standard utilizes the Extensible Authentication Protocol (EAP) as its authentication framework. EAP provides a standardized transport for authentication messages between a client device (supplicant), the wireless access point (authenticator), and a central authentication server (e.g., RADIUS). This architecture allows for the use of various strong authentication methods, such as certificates (EAP-TLS) or credentials, providing a significantly higher level of assurance and centralized user management compared to the pre-shared key (PSK) model.
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-97, Establishing Wireless Robust Security Networks: A Guide to IEEE 802.11i, February 2007. Section 3.1, "IEEE 802.1X Port-Based Access Control," states, "IEEE 802.1X uses the Extensible Authentication Protocol (EAP) [RFC 3748] to exchange authentication messages between the supplicant and the authentication server."

2. IEEE Std 802.11™-2020, IEEE Standard for Information Technology--Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks--Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. Clause 12.7.2, "AKM suite selector definitions," defines Authentication and Key Management (AKM) suites, including those based on IEEE 802.1X, which is the mechanism that employs EAP.

3. Carnegie Mellon University, Software Engineering Institute (SEI), Securely Deploying 802.11 Wireless Networks with Microsoft Windows, January 2009. Page 11, Section 3.2.2, "WPA2-Enterprise," states, "WPA2-Enterprise uses 802.1X/EAP for authentication. With 802.1X/EAP, a user must authenticate to the network before being granted access."

Question 7

A software development company has a short timeline in which to deliver a software product. The software development team decides to use open-source software libraries to reduce the development time. What concept should software developers consider when using open-source software libraries?
Options
A: Open source libraries contain known vulnerabilities, and adversaries regularly exploit those vulnerabilities in the wild.
B: Open source libraries can be used by everyone, and there is a common understanding that the vulnerabilities in these libraries will not be exploited.
C: Open source libraries are constantly updated, making it unlikely that a vulnerability exists for an adversary to exploit.
D: Open source libraries contain unknown vulnerabilities, so they should not be used.
Correct Answer:
Open source libraries contain known vulnerabilities, and adversaries regularly exploit those vulnerabilities in the wild.
Explanation
The primary security concern when incorporating open-source software (OSS) is managing the risk of inherited vulnerabilities. OSS components, like any software, can contain flaws. Because these libraries are widely used, a single discovered vulnerability can affect thousands of applications, making them a high-value target for adversaries. Security frameworks like the OWASP Top 10 specifically highlight "Vulnerable and Outdated Components" as a critical risk. Therefore, developers must implement processes, such as Software Composition Analysis (SCA), to identify, track, and remediate known vulnerabilities in the third-party libraries they use.
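As a toy version of what SCA tooling does, the sketch below checks installed package versions against a small advisory table; the package names and vulnerable versions are invented for illustration, whereas real tools query feeds such as OSV or the NVD.

```python
# A minimal SCA-style check; KNOWN_VULNERABLE is a hypothetical
# advisory table, not real vulnerability data.
from importlib.metadata import PackageNotFoundError, version

KNOWN_VULNERABLE = {
    "examplelib": {"1.2.0", "1.2.1"},  # illustrative entries only
    "otherlib": {"0.9"},
}

for package, bad_versions in KNOWN_VULNERABLE.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        continue  # dependency not present in this environment
    if installed in bad_versions:
        print(f"{package}=={installed} matches a known advisory")
```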
References

1. National Institute of Standards and Technology (NIST). (2022). Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities (NIST Special Publication 800-218).

Section/Page: Practice PW.5, "Acquire and use only securely developed third-party components." Page 13 states, "A component with known vulnerabilities could be exploited by attackers to compromise the software, so it is important to know which components are used in the software and which vulnerabilities have been identified in those components."

2. OWASP Foundation. (2021). OWASP Top 10:2021.

Section/Page: A06:2021 – Vulnerable and Outdated Components. The document states, "You are likely vulnerable... If you do not know the versions of all components you use (both client-side and server-side). This includes components you directly use as well as nested dependencies... If you do not scan for vulnerabilities regularly and subscribe to security bulletins related to the components you use." This directly supports the idea that known vulnerabilities in components are a major risk.

3. Healy, J. C., & Mylopoulos, J. (2002). Requirements and Early-Phase Software Engineering. In van der Hoek, A. (Ed.), University of California, Irvine, Informatics 125 course materials.

Section/Page: In discussions on Non-Functional Requirements (NFRs) for security, course materials often reference the need to manage dependencies. The principle is that using third-party components, including open-source, means inheriting their security posture. The system's security is dependent on the security of its weakest component, which could be an unpatched open-source library. This is a foundational concept in secure software engineering taught in university curricula.

Question 8

According to the (ISC)² ethics canon "act honorably, honestly, justly, responsibly, and legally," which order should be used when resolving conflicts?
Options
A: Public safety and duties to principals, individuals, and the profession
B: Individuals, the profession, and public safety and duties to principals
C: Individuals, public safety and duties to principals, and the profession
D: The profession, public safety and duties to principals, and individuals
Correct Answer:
Public safety and duties to principals, individuals, and the profession
Explanation
The Preamble to the (ISC)² Code of Ethics establishes a clear order of priority for resolving conflicts among the four canons. The first and most important canon is to "Protect society, the common good, necessary public trust and confidence, and the infrastructure." This principle, broadly defined as public safety, takes precedence over all other obligations. Duties to principals (employers/clients) and the profession follow in priority. Therefore, when a conflict arises, the professional's primary duty is to the public, followed by their principal, and finally to the profession itself. Option A is the only choice that reflects this mandated hierarchy.
References

1. (ISC)². (2024). ISC2 Code of Ethics. Preamble. The document states, "The canons, in the order of their priority, are: 1. Protect society... 2. Act honorably... 3. Provide diligent and competent service to principals. 4. Advance and protect the profession." It further clarifies, "Therefore, any conflict between these canons should be resolved in the order of the canons."

2. Stewart, J. M., Chapple, M., & Gibson, D. (2021). Official (ISC)2 CISSP CBK Reference (6th ed.). Sybex. In Domain 1: Security and Risk Management, the section "Understand, Adhere to, and Promote Professional Ethics" explicitly discusses the hierarchy of the canons, emphasizing that the duty to protect society (the first canon) is paramount.

3. HHS.gov, Office for Human Research Protections. (n.d.). The Belmont Report. While not an (ISC)² source, this foundational U.S. government document on ethics in research establishes the principle of beneficence (do no harm, maximize benefits), which aligns with the CISSP ethic of prioritizing public safety above other concerns. This principle is a cornerstone of ethical frameworks taught in university-level programs. (Section C: Applications, Paragraph 1).

Question 9

When conducting a remote access session using Internet Protocol Security (IPSec), which Open Systems Interconnection (OSI) model layer does this connection use?
Options
A: Transport
B: Network
C: Data link
D: Presentation
Correct Answer:
Network
Explanation
Internet Protocol Security (IPSec) is a protocol suite designed to secure Internet Protocol (IP) communications. It operates at the Network Layer (Layer 3) of the Open Systems Interconnection (OSI) model, the same layer as IP itself. IPSec functions by authenticating and/or encrypting each IP packet in a data stream, adding its own security headers (Authentication Header - AH, or Encapsulating Security Payload - ESP) to the original IP packet. This process is transparent to the upper layers (Transport, Application), which are unaware that the underlying communication is being secured at the network level.
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-77, Guide to IPsec VPNs. Section 2.1, "IPsec Overview," states: "IPsec is a suite of protocols for securing IP communications at the network layer by authenticating and/or encrypting each IP packet in a data stream."

2. Internet Engineering Task Force (IETF) RFC 4301, Security Architecture for the Internet Protocol. Section 1.1, "Security Services," states: "IPsec is designed to provide security services at the IP layer, enabling it to protect a variety of higher-level protocols..." The IP layer corresponds to the Network Layer of the OSI model.

3. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Chapter 8, "Security in Computer Networks," explicitly categorizes IPSec as a network-layer security protocol in Section 8.7, "Network-Layer Security: IPsec and Virtual Private Networks." This is a standard textbook in university computer science curricula.

Question 10

Which of the following types of web-based attack is happening when an attacker is able to send a well-crafted, malicious request to an authenticated user without the user realizing it?
Options
A: Cross-Site Scripting (XSS)
B: Cross-Site Request Forgery (CSRF)
C: Cross injection
D: Broken Authentication And Session Management
Correct Answer:
Cross-Site Request Forgery (CSRF)
Explanation
Cross-Site Request Forgery (CSRF) is an attack that tricks an authenticated user's browser into submitting a forged, malicious request to a trusted website. The web application processes this request because it is accompanied by the user's valid session credentials (e.g., cookies), thus performing an action on behalf of the user without their consent or knowledge. The attack's success relies on the user having an active session with the vulnerable application, and the application's inability to distinguish a legitimate request from a forged one initiated by a different site.
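The standard countermeasure is a synchronizer token the attacker's page cannot read; the hedged sketch below shows the core comparison, with all framework wiring (cookies, form rendering) omitted.

```python
# A minimal sketch of synchronizer-token CSRF protection; the session
# dict stands in for real server-side session storage.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def is_request_legitimate(session: dict, submitted_token: str) -> bool:
    # A forged cross-site request rides on the victim's cookie but
    # cannot read the per-session token, so this comparison fails.
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted_token or "")

session: dict = {}
token = issue_csrf_token(session)
assert is_request_legitimate(session, token)
assert not is_request_legitimate(session, "attacker-guess")
```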
References

1. The Open Web Application Security Project (OWASP). (n.d.). Cross-Site Request Forgery (CSRF). OWASP Cheat Sheet Series. Retrieved from https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html. In the introduction, it defines CSRF as "an attack that forces an end user to execute unwanted actions on a web application in which they're currently authenticated."

2. Zeldovich, N., & Kaashoek, F. (2014). 6.858 Computer Systems Security, Fall 2014 - Lecture 16: Web security. MIT OpenCourseWare. Retrieved from https://ocw.mit.edu/courses/6-858-computer-systems-security-fall-2014/resources/mit6858f14lec16/. Slide 21 defines CSRF: "Malicious web site causes user's browser to send a request to an honest site, using the user's credentials (cookies) for that honest site."

3. Johns, M. (2008). Breaking the Web's Cookie Jar: Cross-Site Request Forgery and its mitigation. In Sicherheit 2008: Sicherheit, Schutz und Zuverlässigkeit. Lecture Notes in Informatics (LNI), P-128. Page 231. This academic paper states, "Cross-Site Request Forgery (CSRF) is a form of attack where a web site, email, or program causes a user's web browser to perform an unwanted action on a trusted site."

Question 11

When reviewing the security logs, the password shown for an administrative login event was ' OR '1'='1' --. This is an example of which of the following kinds of attack?
Options
A: Brute Force Attack
B: Structured Query Language (SQL) Injection
C: Cross-Site Scripting (XSS)
D: Rainbow Table Attack
Correct Answer:
Structured Query Language (SQL) Injection
Explanation
The string ' OR '1'='1' -- is a classic example of a tautology-based SQL Injection (SQLi) attack. This payload is crafted to manipulate a backend SQL query that validates user credentials. When injected into a password field, it alters the query's logic. The '1'='1' condition is a tautology (always true), and the OR operator makes the entire conditional statement true, regardless of the actual password. The -- sequence acts as a comment in SQL, nullifying the rest of the original query and preventing syntax errors. This effectively bypasses the authentication mechanism.
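The effect is easy to reproduce against a toy login query. In this sketch (illustrative schema and data, using Python's built-in sqlite3), the concatenated payload returns the admin row without the password, while the parameterized version treats it as inert data.

```python
# A self-contained demonstration of the tautology bypass and its fix.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

payload = "' OR '1'='1' --"

# Vulnerable: attacker input is spliced into the SQL text.
query = f"SELECT name FROM users WHERE name='admin' AND password='{payload}'"
print(db.execute(query).fetchall())  # [('admin',)]  authentication bypassed

# Fixed: placeholders keep the payload as data, never as SQL syntax.
safe = "SELECT name FROM users WHERE name=? AND password=?"
print(db.execute(safe, ("admin", payload)).fetchall())  # []
```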
References

1. Boneh, D., & Grossman, D. (2011). CS 155: Computer and Network Security, Lecture 5: Web Security. Stanford University. The lecture notes describe SQL injection, using the ' OR 1=1 -- payload as a canonical example of an attack that bypasses authentication by creating a tautology in the SQL WHERE clause. (See slides on "SQL Injection").

2. Halfond, W. G., Viegas, J., & Orso, A. (2006). A classification of SQL-injection attacks and countermeasures. Proceedings of the International Symposium on Secure Software Engineering. In Section 2.1, "Tautologies," the paper explicitly identifies payloads like ' or '1'='1 as a primary technique for bypassing authentication by making the where clause of a query always evaluate to true. DOI: https://doi.org/10.1109/ISSSE.2006.241671

3. Zeldovich, N., & Kaashoek, F. (2014). 6.858 Computer Systems Security, Lecture 10: Web Security. MIT OpenCourseWare. The lecture materials detail how user input can be misinterpreted as SQL commands, providing examples similar to ' OR '1'='1' to illustrate how an attacker can manipulate the query to bypass password checks.

Question 12

An organization's internal audit team performed a security audit on the company's system and reported that the manufacturing application is rarely updated along with other issues categorized as minor. Six months later, an external audit team reviewed the same system with the same scope, but identified severe weaknesses in the manufacturing application's security controls. What is MOST likely to be the root cause of the internal audit team's failure in detecting these security issues?
Options
A: Inadequate test coverage analysis
B: Inadequate security patch testing
C: Inadequate log reviews
D: Inadequate change control procedures
Correct Answer:
Inadequate test coverage analysis
Explanation
The significant difference in findings between the internal and external audits, despite an identical scope, strongly indicates a disparity in the thoroughness and comprehensiveness of the assessment methodologies. The internal audit identified a symptom (infrequent updates) but missed the "severe weaknesses." This suggests their testing did not cover the specific security controls or attack vectors that would have revealed these critical vulnerabilities. Inadequate test coverage means the audit plan and execution were not sufficient to provide a complete picture of the application's security posture, a failure the external audit's more rigorous approach corrected.
References

1. (ISC)² CISSP Certified Information Systems Security Professional Official Study Guide, 9th Edition. Chapter 17, "Conducting Security Control Assessments," emphasizes that the selection and development of assessment procedures must be sufficient to produce the evidence needed to determine control effectiveness. A failure to find severe weaknesses implies the procedures used lacked the necessary coverage.

2. NIST Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. Section 3.1, "Planning," states, "The planning phase is the most critical... It is during this phase that the rules of engagement are established, and the overall testing methodology is determined." This highlights that the effectiveness of an audit is contingent on a well-planned methodology that ensures comprehensive coverage.

3. NIST Special Publication 800-53A, Revision 4, Assessing Security and Privacy Controls in Federal Information Systems and Organizations. The introduction discusses the importance of selecting appropriate assessment methods and objects to obtain the required "depth and coverage" for a complete and accurate determination of control effectiveness. The scenario describes a clear failure in achieving adequate depth and coverage.

Question 13

Which audit type is MOST appropriate for evaluating the effectiveness of a security program?
Options
A: Threat
B: Assessment
C: Analysis
D: Validation
Correct Answer:
Assessment
Explanation
A security assessment is the most appropriate and comprehensive method for evaluating the effectiveness of a security program. It is a formal process to determine if security controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting security requirements. An assessment provides a holistic view of the organization's security posture by systematically testing and evaluating administrative, technical, and physical controls. This process directly measures the overall effectiveness of the program against its stated objectives and established standards.
References

1. NIST Special Publication 800-53A, Revision 5, Assessing Security and Privacy Controls in Information Systems and Organizations. (December 2020). Page 1, Section 1, "Introduction". The document states, "This publication provides a methodology and a set of procedures for conducting assessments of security and privacy controls... to determine if the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security and privacy requirements..."

2. NIST Special Publication 800-37, Revision 2, Risk Management Framework for Information Systems and Organizations. (December 2018). Page 10, Section 2.4, "Step 4: Assess". This section defines the purpose of the assess step as determining "if the controls selected for implementation are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security and privacy requirements for the system and the organization."

3. Carnegie Mellon University, Software Engineering Institute, CERT Resilience Management Model (CERT-RMM) v1.2. (May 2016). Page 13, Section 2.3, "Appraisal". The document describes an appraisal (an assessment) as a method to "determine the process and practice capabilities of an organization's operational resilience management system," which is analogous to evaluating a security program's effectiveness.

Question 14

The development team has been tasked with collecting data from biometric devices. The application will support a variety of collection data streams. During the testing phase, the team utilizes data from an old production database in a secure testing environment. What principle has the team taken into consideration?
Options
A: Biometric data cannot be changed.
B: Separate biometric data streams require increased security.
C: The biometric devices are unknown.
D: Biometric data must be protected from disclosure.
Correct Answer:
Biometric data must be protected from disclosure.
Explanation
Biometric data is considered highly sensitive Personally Identifiable Information (PII). Its compromise can have severe and permanent consequences for an individual. The team's decision to use a "secure testing environment" when handling even old production data demonstrates their adherence to the fundamental security principle of data protection. This action is a direct control implemented to uphold the confidentiality of the biometric data and prevent its unauthorized disclosure during the software development lifecycle (SDLC).
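One common protective measure, sketched below under the assumption that identifiers can be pseudonymized before load, is to transform production records so the test environment never holds the raw identifier. The field names are hypothetical, and a real pipeline would also substitute or encrypt the biometric templates themselves.

```python
# A minimal pseudonymization sketch for production-derived test data;
# the record layout is hypothetical.
import hashlib
import os

SALT = os.urandom(16)  # in practice, managed outside the test environment

def pseudonymize(subject_id: str) -> str:
    return hashlib.sha256(SALT + subject_id.encode()).hexdigest()[:16]

production_row = {"subject_id": "EMP-1042", "template": b"<biometric blob>"}
test_row = {**production_row, "subject_id": pseudonymize("EMP-1042")}
print(test_row["subject_id"])  # stable token, no direct identifier
```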
References

1. NIST Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations. Control SI-12, "Information Handling and Retention," mandates that organizations handle and protect information commensurate with its security category and sensitivity throughout its lifecycle, including in non-production environments.

2. ISO/IEC 27002:2022, Information security, cybersecurity and privacy protection โ€” Information security controls. Control 8.32, "Protection of test data," states, "Test data should be selected, protected and controlled carefully." It explicitly notes the risks of using operational data and the need for protective measures if its use is unavoidable.

3. (ISC)² CISSP Official Study Guide, 9th Edition. Domain 8: Software Development Security, discusses secure software testing. It emphasizes the significant risk of using production data in test environments and states that if it must be used, the test environment must have security controls equivalent to the production environment to prevent data disclosure. (Chapter 21, "Securing the Software Development Life Cycle").

4. Tipton, H. F., & Krause, M. (Eds.). (2007). Information Security Management Handbook, Sixth Edition. Auerbach Publications. In the chapter on "Application Security," the handbook discusses the sanitization of data for testing environments, highlighting that if production data is used, the environment must be secured to prevent the disclosure of sensitive information. (Part 5, Chapter 67).

Question 15

An attacker has intruded into the source code management system and is able to download but not modify the code. Which of the following aspects of the code theft has the HIGHEST security impact?
Options
A: The attacker could publicly share confidential comments found in the stolen code.
B: Competitors might be able to steal the organization's ideas by looking at the stolen code.
C: A competitor could run their own copy of the organization's website using the stolen code.
D: Administrative credentials or keys hard-coded within the stolen code could be used to access sensitive data.
Correct Answer:
Administrative credentials or keys hard-coded within the stolen code could be used to access sensitive data.
Explanation
The highest security impact stems from the discovery of hard-coded administrative credentials or cryptographic keys within the source code. This type of vulnerability provides an attacker with a direct and immediate path to escalate privileges, bypass authentication controls, and gain unauthorized access to sensitive production systems, databases, or cloud services. The consequence is a potentially catastrophic breach of confidentiality and integrity, far exceeding the impact of the other options.
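Secret-scanning tools look for exactly this exposure; the sketch below runs two illustrative patterns over a source line. Real scanners combine large rule sets with entropy analysis, and the AKIA prefix shown is just the familiar shape of an AWS access key ID.

```python
# A hedged secret-scanning sketch; the patterns are illustrative and
# deliberately incomplete.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|pwd)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

source_line = 'db_password = "hunter2"  # TODO remove before release'
for pattern in SECRET_PATTERNS:
    for match in pattern.finditer(source_line):
        print(f"possible hard-coded secret: {match.group(0)}")
```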
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations. Control CM-6 (Configuration Settings) and its supplemental guidance emphasize that embedding credentials in software components is a significant vulnerability. The control's discussion notes the importance of managing configuration settings, including secrets, separately from the code to prevent unauthorized access.

2. Meli, M., McNiece, M., & Reaves, B. (2019). How to Break a Production System with a Single Line of Code: A Study of Hard-coded Secrets in the Wild. In Proceedings of the Internet Measurement Conference (IMC '19). Association for Computing Machinery, New York, NY, USA, 17–23. This study empirically demonstrates the prevalence and high risk of hard-coded secrets, stating, "hard-coded secrets are a serious security risk, as they can provide attackers with a 'skeleton key' to a developer's entire infrastructure." (Section 1, Paragraph 2). DOI: https://doi.org/10.1145/3355369.3355579

3. University of California, Berkeley, CS 161: Computer Security Courseware. Lecture notes on "Web Security" frequently cover common vulnerabilities. The topic of insecure credential storage explicitly warns against hard-coding secrets (e.g., API keys, database passwords) in source code, classifying it as a critical flaw that can lead to complete system compromise. (Reference to typical content in such high-level university security courses).

Question 16

Which of the following statements BEST describes the least privilege principle in a cloud environment?
Options
A: Network segments remain private if unneeded to access the internet.
B: Internet traffic is inspected for all incoming and outgoing packets.
C: A single cloud administrator is configured to access core functions.
D: Routing configurations are regularly updated with the latest routes.
Correct Answer:
A single cloud administrator is configured to access core functions.
Explanation
The principle of least privilege dictates that a subject (such as a user, application, or service) should be granted only the minimum set of permissions required to perform its necessary tasks. Option C, configuring an administrator to access only core functions, is a direct implementation of this principle. It limits the administrator's access to a specific, required subset of functionalities, thereby reducing the potential attack surface and minimizing the damage that could result from a compromised account or insider threat.
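Stated as code, least privilege is simply "granted minus required should be empty"; this sketch uses hypothetical permission names to flag excess grants on an administrator account.

```python
# A minimal least-privilege audit; permission names are hypothetical.
required = {"vm:read", "vm:restart"}  # what the core role actually needs
granted = {"vm:read", "vm:restart", "vm:delete", "billing:admin"}

excess = granted - required
if excess:
    print(f"revoke privileges beyond core functions: {sorted(excess)}")
```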
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations. Control AC-6, "Least Privilege," states: "The organization employs the principle of least privilege, allowing only authorized accesses for users (or processes acting on behalf of users) which are necessary to accomplish assigned tasks in accordance with organizational missions and business functions." (Page 101).

2. Saltzer, J. H., & Schroeder, M. D. (1975). The Protection of Information in Computer Systems. In Proceedings of the IEEE, 63(9), 1278-1308. This foundational academic paper defines the principle: "Every program and every user of the system should operate using the least set of privileges necessary to complete the job." (Section I.A.3, Page 1281). DOI: https://doi.org/10.1109/PROC.1975.9939

3. National Institute of Standards and Technology (NIST) Special Publication 800-207, Zero Trust Architecture. Section 3.1.3, "Least Privilege," states: "The ZTA should also be designed to grant the least privilege needed to complete the task. This includes limiting the visibility of network resources to only those that the subject needs to perform its task." (Page 12).

Question 17

Which is the BEST control to meet the Statement on Standards for Attestation Engagements 18 (SSAE-18) confidentiality category?
Options
A: Data processing
B: Storage encryption
C: File hashing
D: Data retention policy
Correct Answer:
Storage encryption
Explanation
The Statement on Standards for Attestation Engagements 18 (SSAE-18) often utilizes the Trust Services Criteria (TSC) for System and Organization Controls (SOC) 2 reports. The TSC for Confidentiality focuses on protecting information designated as confidential from unauthorized disclosure. Storage encryption is the most direct and effective technical control to achieve this for data at rest. By rendering data unreadable without the proper decryption key, encryption directly enforces confidentiality and prevents unauthorized parties from accessing the information's content, thereby meeting the core objective of the confidentiality category.
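As a small illustration of encrypting data at rest, the sketch below uses the third-party `cryptography` package's Fernet construction; key generation is inlined only for brevity, since real deployments keep keys in a KMS or HSM.

```python
# A hedged storage-encryption sketch (pip install cryptography); key
# management is deliberately out of scope.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held in a KMS/HSM
fernet = Fernet(key)

record = b"confidential payroll record"
ciphertext = fernet.encrypt(record)           # what is written to storage
assert fernet.decrypt(ciphertext) == record   # readable only with the key
```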
References

1. AICPA. (2017). TSP Section 100, 2017 Trust Services Criteria for Security, Availability, Processing Integrity, Confidentiality, and Privacy. In the Confidentiality Principle, criterion C1.2 discusses controls for the disposal of confidential information, with points of focus mentioning "protective measures, such as encryption." More fundamentally, the common criteria for Security, which underpins Confidentiality, specifically CC6.6, states, "The entity protects information during transmission and at rest," with encryption being the primary mechanism.

2. Harris, S., & Maymi, F. (2021). CISSP All-in-One Exam Guide, Ninth Edition. McGraw-Hill. Chapter 5, "Cryptography," page 211, explicitly states, "The primary goal of cryptography is to keep data confidential." It details how encryption transforms plaintext into ciphertext to protect it from unauthorized disclosure.

3. Whitman, M. E., & Mattord, H. J. (2019). Principles of Information Security (6th ed.). Cengage Learning. Chapter 8, "Cryptography," page 318, identifies encryption as the "process of converting original messages into a form that is unreadable to unauthorized individuals," which is the definition of providing confidentiality.

Question 18

The initial security categorization should be done early in the system life cycle and should be reviewed periodically. Why is it important for this to be done correctly?
Options
A: It determines the security requirements.
B: It affects other steps in the certification and accreditation process.
C: It determines the functional and operational requirements.
D: The system engineering process works with selected security controls.
Correct Answer:
It determines the security requirements.
Explanation
The primary purpose of security categorization is to determine the potential impact (low, moderate, or high) on an organization should its information or information systems suffer a loss of confidentiality, integrity, or availability. This impact level directly determines the minimum security requirements for the system. Specifically, the categorization is used to select an initial baseline of security controls from a standardized catalog. These selected controls form the foundation of the system's security requirements, guiding all subsequent security efforts throughout the system development life cycle (SDLC).
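The FIPS 199 "high-water mark" makes this mechanical: the highest impact rating across confidentiality, integrity, and availability sets the baseline. The sketch below encodes that rule with illustrative ratings.

```python
# A minimal high-water-mark categorization per FIPS 199/200; the
# impact ratings are illustrative.
LEVELS = ["low", "moderate", "high"]
impact = {"confidentiality": "moderate", "integrity": "high", "availability": "low"}

category = max(impact.values(), key=LEVELS.index)
print(f"system category: {category} -> select the {category} control baseline")
```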
References

1. National Institute of Standards and Technology (NIST), Federal Information Processing Standards (FIPS) Publication 199, Standards for Security Categorization of Federal Information and Information Systems, February 2004.

Section 3, "Purpose," Page 2: "The security category of an information system will determine the minimum security requirements for that system as specified in FIPS Publication 200, Minimum Security Requirements for Federal Information and Information Systems."

2. National Institute of Standards and Technology (NIST), Special Publication (SP) 800-37 Revision 2, Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy, December 2018.

Section 2.3, "RMF Step 1: Categorize," Page 21: "The security categorization of the system and the information it processes, stores, and transmits is a key first step in the risk management process because the categorization results are used as input for the subsequent steps in the RMF, in particular for the selection of the baseline security controls in RMF Step 2 (Select)."

3. National Institute of Standards and Technology (NIST), Federal Information Processing Standards (FIPS) Publication 200, Minimum Security Requirements for Federal Information and Information Systems, March 2006.

Section 3, "Minimum Security Requirements," Page 2: "The minimum security requirements apply to each federal information system based on the security category of the information system, which is determined in accordance with FIPS 199."

Question 19

Which of the following vulnerabilities can be BEST detected using automated analysis?
Options
A: Valid cross-site request forgery (CSRF) vulnerabilities
B: Multi-step process attack vulnerabilities
C: Business logic flaw vulnerabilities
D: Typical source code vulnerabilities
Correct Answer:
Typical source code vulnerabilities
Explanation
Automated analysis tools, such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST), are most effective at identifying well-defined, pattern-based vulnerabilities within source code. These "typical" vulnerabilities include issues like buffer overflows, SQL injection, cross-site scripting (XSS), and the use of insecure cryptographic functions. These tools operate by parsing code for known insecure patterns or by sending malicious inputs to a running application to observe its response. They excel in this domain because these flaws often have clear signatures that can be detected without understanding the application's overall business purpose or complex, multi-step workflows.
References

1. (ISC)² CISSP Official Study Guide, 9th Edition. Domain 8: Software Development Security, Chapter 21, pp. 928-929. The text explains that Static Application Security Testing (SAST) is "very effective at finding common vulnerabilities, such as buffer overflows, SQL injection, and similar well-known flaws." It contrasts this with the difficulty automated tools have with business logic.

2. NIST Special Publication 800-218, "Secure Software Development Framework (SSDF) Version 1.1." Section 4, Practice PW.8: Test Executable Code. This document recommends using static and dynamic analysis tools to "look for common types of vulnerabilities." This supports the idea that these tools are best suited for detecting known, typical vulnerability classes rather than complex, context-dependent flaws.

3. McGraw, G. (2006). Software Security: Building Security In. Addison-Wesley. Chapter 6, "Architectural Risk Analysis," and Chapter 7, "Software Penetration Testing." The book distinguishes between implementation bugs (e.g., buffer overflows), which are amenable to automated tool detection, and design flaws (e.g., business logic errors), which are not. It emphasizes that tools are good at finding "the usual suspects" in code.

4. Ayewah, N., Hovemeyer, D., Pugh, W., & Morgenthaler, J. D. (2008). Using static analysis to find bugs. IEEE Security & Privacy, 6(5), 22-29. https://doi.org/10.1109/MSP.2008.131. This academic paper discusses the effectiveness of static analysis tools, noting their strength in finding specific, well-defined bug patterns (e.g., null pointer dereferences, race conditions, SQL injection) directly in source code, which aligns with "typical source code vulnerabilities."

Question 20

An organization wants to migrate to Session Initiation Protocol (SIP) to save on telephony expenses. Which of the following security related statements should be considered in the decision-making process?
Options
A: Cloud telephony is less secure and more expensive than digital telephony services.
B: SIP services are more secure when used with multi-layer security proxies.
C: H.323 media gateways must be used to ensure end-to-end security tunnels.
D: Given the behavior of SIP traffic, additional security controls would be required.
Correct Answer:
Given the behavior of SIP traffic, additional security controls would be required.
Explanation
Session Initiation Protocol (SIP) operates over standard IP networks, which fundamentally changes the security model compared to traditional circuit-switched telephony. The behavior of SIP traffic exposes voice communications to a wide range of IP-based threats, including eavesdropping, Denial-of-Service (DoS) attacks, toll fraud, and session hijacking. Consequently, a migration to SIP necessitates a thorough risk assessment and the implementation of additional security controls that are not required for legacy phone systems. These controls often include Session Border Controllers (SBCs), VoIP-aware firewalls, and encryption protocols like Transport Layer Security (TLS) for signaling and Secure Real-time Transport Protocol (SRTP) for media. Acknowledging this requirement is a critical security consideration in the decision-making process.
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-58, Security Considerations for Voice Over IP Systems. Section 3, "VoIP Vulnerabilities and Threats," details the new attack vectors introduced by VoIP protocols like SIP. The document states, "VoIP systems are vulnerable to the same threats as other network applications... In addition, VoIP has its own set of protocol-specific and implementation-specific vulnerabilities." This supports the need for additional controls.

2. Rosenberg, J., et al. (2002). RFC 3261: SIP: Session Initiation Protocol. The Internet Engineering Task Force (IETF). Section 26, "Security Considerations," extensively discusses the security issues inherent to SIP, such as registration hijacking, impersonating a server, and tampering with message bodies, and recommends mechanisms like TLS to mitigate them. This confirms that the protocol's behavior requires specific security measures.

3. Geneiatakis, D., Dagiouklas, A., & Katos, V. (2015). A Survey of SIP-Based VoIP Security Issues and Solutions. Information Security Journal: A Global Perspective, 24(4-6), 137-150. https://doi.org/10.1080/19393555.2015.1112911. The paper's abstract and introduction state that the adoption of SIP introduces significant security challenges, requiring solutions like firewalls, intrusion detection systems, and cryptographic methods, reinforcing that additional controls are a primary consideration.

Question 21

An organization's retail website provides its only source of revenue, so the disaster recovery plan (DRP) must document an estimated time for each step in the plan. Which of the following steps in the DRP will list the GREATEST duration of time for the service to be fully operational?
Options
A: Update the Network Address Translation (NAT) table.
B: Update Domain Name System (DNS) server addresses with domain registrar.
C: Update the Border Gateway Protocol (BGP) autonomous system number.
D: Update the web server network adapter configuration.
Correct Answer:
Update Domain Name System (DNS) server addresses with domain registrar.
Explanation
Updating the domain's authoritative name-server records at the registrar must propagate through the public DNS hierarchy and expire from resolver caches before users are sent to the new site. Because TTL values on the old records may be 24–48 hours (or longer), this step normally represents the longest elapsed time before the retail web service is fully reachable again. NAT table edits, NIC reconfiguration, or even new BGP advertisements affect only the organization's or upstream providers' routers and take seconds to minutes, whereas global DNS propagation is routinely measured in hours to days.
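For a sense of the delay, you can read the TTL still attached to the old records, as in this sketch using the third-party dnspython package (the domain is illustrative); the remaining TTL bounds how long resolvers may keep serving the stale answer.

```python
# A minimal TTL check (pip install dnspython); the domain is illustrative.
import dns.resolver

answer = dns.resolver.resolve("example.com", "A")
ttl_seconds = answer.rrset.ttl
print(f"resolvers may cache the old record for up to {ttl_seconds} s "
      f"(~{ttl_seconds / 3600:.1f} h)")
```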
References

1. NIST SP 800-81r2, "Secure Domain Name System (DNS) Deployment Guide," §6.5, p. 6-5: "Because of caching ... changes may not be visible for up to the previous TTL value, often 24 to 48 hours."

2. RFC 1034, "Domain Names - Concepts and Facilities," §4.3.4: discusses TTL and cache effects delaying visibility of updates.

3. NIST SP 800-34 Rev. 1, "Contingency Planning Guide for Federal Information Systems," §3.5.2, p. 3-12: emphasizes including worst-case propagation delays (e.g., DNS) when estimating recovery time.

4. Cisco Systems, "BGP Convergence in the Service Provider Core," White Paper, p. 2: typical convergence "within a few minutes."

5. Microsoft Docs, "Configure NAT for disaster recovery": rule updates are applied immediately once committed (no external propagation).

Question 22

Why is it important that senior management clearly communicates the formal Maximum Tolerable Downtime (MTD) decision?
Options
A: To provide each manager with precise direction on selecting an appropriate recovery alternative
B: To demonstrate to the regulatory bodies that the company takes business continuity seriously
C: To demonstrate to the board of directors that senior management is committed to continuity recovery efforts
D: To provide a formal declaration from senior management as required by internal audit to demonstrate sound business practices
Correct Answer:
To provide each manager with precise direction on selecting an appropriate recovery alternative
Explanation
Maximum Tolerable Downtime (MTD) expresses management's risk appetite for how long a mission-critical process or system may remain unavailable before causing unacceptable harm. Because every recovery strategy (e.g., hot site, warm site, cold site, cloud fail-over) must restore service within the MTD, operating managers cannot choose a technically or financially appropriate alternative until the MTD is formally and unambiguously communicated. Clear direction from senior management therefore aligns recovery-time objectives and budget decisions with enterprise risk tolerance and business priorities.
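The dependency is easy to see in miniature: with illustrative recovery times, only alternatives whose worst-case restoration fits inside the communicated MTD remain on the table.

```python
# A minimal viability filter; the MTD and recovery times are illustrative.
MTD_HOURS = 24  # formally set and communicated by senior management

alternatives = {"hot site": 2, "warm site": 12, "cold site": 72}
viable = {name: hrs for name, hrs in alternatives.items() if hrs <= MTD_HOURS}
print(viable)  # {'hot site': 2, 'warm site': 12}; the cold site misses the MTD
```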
References

1. NIST Special Publication 800-34 Rev. 1, "Contingency Planning Guide for Federal Information Systems," §3.2.1, p. 20: "The MTD ... is the primary factor used to determine the system recovery strategy."

2. NIST SP 800-34 Rev.1, Appendix C (Glossary), p.C-2: Definition of Maximum Tolerable Downtime and its role in selecting recovery alternatives.

3. ISO/IEC 22301:2019, Clause 8.4.3 a): Top management shall define maximum acceptable outage to guide selection of business continuity strategies.

4. MIT OpenCourseWare, Course 15.974 "Business Continuity," Lecture 4 notes, slide 8: "Senior management must communicate MTD so that unit managers can choose cost-effective recovery options meeting that limit."

Question 23

Which of the following activities should a forensic examiner perform FIRST when determining the priority of digital evidence collection at a crime scene?
Options
A: Gather physical evidence.
B: Establish order of volatility.
C: Assign responsibilities to personnel on the scene.
D: Establish a list of files to examine.
Correct Answer:
Establish order of volatility.
Explanation
The first and most critical activity in prioritizing digital evidence collection is to establish the order of volatility. This principle dictates that evidence should be collected from the most transient to the most persistent sources. Data in CPU registers, cache, and system memory (RAM) can be lost with a simple power cycle or system shutdown. In contrast, data on hard drives or other persistent storage is less volatile. By prioritizing collection based on volatility, the examiner ensures that the most fragile evidence is captured before it is irretrievably lost, which is a foundational concept in digital forensics and incident response.
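The ordering itself is small enough to write down directly; this sketch encodes the RFC 3227 sequence from most to least volatile, which is the collection priority the examiner establishes first.

```python
# Order of volatility per RFC 3227; collect from the top of the list down.
ORDER_OF_VOLATILITY = [
    "CPU registers and cache",
    "RAM (running processes, live memory)",
    "network state (connections, ARP cache)",
    "temporary file systems",
    "disk storage",
    "remote logging and archival media",
]

for rank, source in enumerate(ORDER_OF_VOLATILITY, start=1):
    print(f"{rank}. {source}")
```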
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-86, Guide to Integrating Forensic Techniques into Incident Response. Section 3.1.2, "Collecting Evidence," states, "Collect evidence in order from most volatile to least volatile." It provides a detailed list starting with registers and cache, followed by RAM, network state, and finally persistent storage.

2. Internet Engineering Task Force (IETF) RFC 3227, Guidelines for Evidence Collection and Archiving. Section 3.2, "Order of Volatility," explicitly advises, "In general, when collecting evidence, you should proceed from the volatile to the less volatile. For example, memory is more volatile than disk."

3. Casey, E. (2011). Digital Evidence and Computer Crime: Forensic Science, Computers, and the Internet (3rd ed.). Academic Press. Chapter 7, "Data Acquisition," discusses the order of volatility as a primary consideration for live data acquisition, emphasizing the collection of memory and network information before imaging non-volatile storage. (Peer-reviewed academic textbook).

Question 24

When assessing web vulnerabilities, how can navigating the dark web add value to a penetration test?
Options
A: The actual origin and tools used for the test can be hidden.
B: Information may be found on related breaches and hacking.
C: Vulnerabilities can be tested without impact on the tested environment.
D: Information may be found on hidden vendor patches.
Show Answer
Correct Answer:
Information may be found on related breaches and hacking.
Explanation
Navigating the dark web adds significant value to the reconnaissance phase of a penetration test. It serves as a critical source for threat intelligence. Penetration testers can search dark web forums and marketplaces for evidence of prior breaches related to the target organization, such as leaked credentials, compromised databases, or intellectual property for sale. This information provides a realistic view of the organization's current threat landscape and exposure. Discovering such information indicates existing weaknesses and allows the penetration test to simulate attacks based on intelligence about what malicious actors already know or are actively exploiting, making the assessment more targeted and effective.
References

1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. Section 3.2, "Information Gathering," describes the process of collecting information from various sources to understand the target system's posture. The dark web is a modern, albeit illicit, source for this phase.

2. Chertoff, M., & Simon, T. (2015). The impact of the dark web on internet governance and cyber security. Centre for International Governance Innovation. Paper No. 8, page 6, discusses how the dark web facilitates "markets for malware, botnets, and stolen data," which is precisely the type of information a penetration tester would seek during reconnaissance to add value to the test.

3. Broadhurst, R., & Trivedi, H. (2020). Darknet Markets, Crime and Penology. In: The Palgrave Handbook of International Cybercrime and Cyberdeviance. Palgrave Macmillan, Cham. (DOI: https://doi.org/10.1007/978-3-319-90307-1_78-1). This chapter details the types of illicit goods and services available, including "stolen personal and financial information" and "hacking services," confirming the dark web as a source for intelligence on breaches and hacking activities.

Question 25

Which of the following is the top barrier for companies to adopt cloud technology?
Options
A: Migration period
B: Data integrity
C: Cost
D: Security
Show Answer
Correct Answer:
Security
Explanation
Security is consistently cited as the primary barrier to cloud adoption. Organizations are concerned with the loss of direct control over their data and infrastructure, which introduces risks related to data confidentiality, privacy, and regulatory compliance. Issues such as multi-tenancy vulnerabilities, data breaches at the provider level, and ensuring data sovereignty are significant hurdles. While other factors are considerations, the overarching concern for the protection of sensitive assets in a third-party environment makes security the top barrier for most enterprises.
References

1. National Institute of Standards and Technology (NIST). (2011). NIST Special Publication 800-144: Guidelines on Security and Privacy in Public Cloud Computing.

Reference: Section 5, "High-Level Security and Privacy Concerns," pp. 11-15. This section details numerous security-related challenges, including governance, compliance, trust, and architecture, which collectively represent the primary concerns for organizations considering public cloud adoption.

2. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2009). Above the Clouds: A Berkeley View of Cloud Computing. University of California, Berkeley.

Reference: Section 5, "Obstacles and Opportunities," p. 11. The report explicitly lists "Data Confidentiality and Auditability" as a top obstacle, stating, "Perhaps the largest obstacle to the adoption of Cloud Computing is the security of data... companies are worried about the loss of data or data theft."

3. Subashini, S., & Kavitha, V. (2011). A survey on security issues in service delivery models of cloud computing. Journal of Network and Computer Applications, 34(1), 1-11.

Reference: Section 1, "Introduction," Paragraph 2. The paper states, "Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market." (https://doi.org/10.1016/j.jnca.2010.07.006)

Question 26

In which of the following scenarios is locking server cabinets and limiting access to keys preferable to locking the server room to prevent unauthorized access?
Options
A: Server cabinets are located in an unshared workspace.
B: Server cabinets are located in an isolated server farm.
C: Server hardware is located in a remote area.
D: Server cabinets share workspace with multiple projects.
Show Answer
Correct Answer:
Server cabinets share workspace with multiple projects.
Explanation
This scenario describes a shared or co-location environment where multiple teams, with different projects and access requirements, use the same physical room. Relying solely on locking the main room door would grant all personnel access to all server cabinets, violating the principle of least privilege. Therefore, locking individual server cabinets and implementing strict key management is the preferable and necessary control. This approach provides granular access control, ensuring that authorized individuals can only access the specific hardware relevant to their project, which is a fundamental aspect of a layered physical security strategy.
References

1. (ISC)² CISSP CBK Reference, 6th Edition. Domain 3: Security Architecture and Engineering. The section on designing and implementing physical security discusses layered defense models. It explains that in environments with varying trust levels (like a shared workspace), inner layers of defense, such as locked racks and cages, are required to enforce access control policies that cannot be managed at the perimeter alone. (Specific reference: Chapter 11, "Understand and Apply Physical Security," section on "Site and Facility Design Considerations").

2. NIST Special Publication 800-53, Revision 5, Security and Privacy Controls for Information Systems and Organizations. Control family: Physical and Environmental Protection (PE). Control PE-3, "Physical Access Control," emphasizes managing physical access at both the facility entry points and "within the facility." This supports the need for internal controls like cabinet locks when a facility is shared by groups with different authorizations.

3. Stallings, W., & Brown, L. (2018). Computer Security: Principles and Practice (4th ed.). Pearson. In Chapter 16, "Physical and Infrastructure Security," the text describes the necessity of internal physical controls within a data center. It notes that cages and locked cabinets are used to segregate equipment for different clients or departments in a shared space, reinforcing that room-level access is insufficient in such scenarios. (Specific reference: Chapter 16.2, "Physical Security Threats and Measures").

Question 27

Which of the following criteria ensures information is protected relative to its importance to the organization?
Options
A: The value of the data to the organization's senior management
B: Legal requirements, value, criticality, and sensitivity to unauthorized disclosure or modification
C: Legal requirements determined by the organization headquarters' location
D: Organizational stakeholders, with classification approved by the management board
Show Answer
Correct Answer:
Legal requirements, value, criticality, and sensitivity to unauthorized disclosure or modification
Explanation
Information-classification guidelines must incorporate all factors that determine how vital the information is to the enterprise. International and U.S. federal standards (e.g., ISO/IEC 27002 and NIST SP 800-60) state that data should be classified according to 1) legal or regulatory obligations, 2) business value, 3) criticality to operations, and 4) sensitivity to unauthorized disclosure or modification. Using these combined criteria ensures protection measures are commensurate with organizational importance. Option B lists exactly these four required elements and is therefore the only fully correct choice.
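As a rough illustration of how the four criteria combine, the sketch below maps them to a classification label. The 0-3 scoring and the label names are invented for this example and are not taken from any standard.

```python
# Hypothetical sketch: deriving a classification label from the four criteria
# listed above. The scoring scheme and labels are illustrative only.
def classify(legal_impact, business_value, criticality, sensitivity):
    """Each factor is scored 0 (none) to 3 (severe); the label follows the
    highest single factor, so protection matches worst-case importance."""
    score = max(legal_impact, business_value, criticality, sensitivity)
    return {0: "public", 1: "internal", 2: "confidential", 3: "restricted"}[score]

# Example: payroll data with moderate legal exposure and high sensitivity.
print(classify(legal_impact=2, business_value=2, criticality=1, sensitivity=3))
# -> "restricted"
```

Taking the maximum rather than an average is a common conservative choice: a single severe factor is enough to warrant the stronger protection level.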
References

1. ISO/IEC 27002:2022, Clause 5.12 "Information classification," Note 1 – factors include legal requirements, value, criticality, and sensitivity.

2. NIST SP 800-60 Vol. 1 Rev. 1, §2.1 & §3.2 – recommends classification by confidentiality, integrity, and availability impact, driven by legal/regulatory obligations, value, and operational criticality.

3. NIST SP 800-53 Rev. 5, Control MP-3 "Media Marking," Discussion – protection level is based on sensitivity and criticality.

4. MIT OpenCourseWare 6.858 Computer Systems Security, Lecture 5 notes (2020), slide "Data Classification" – lists value, legal/regulatory obligations, business criticality, sensitivity.

Question 28

What is the FIRST step for an organization to take before allowing personnel to access social media from a corporate device or user account?
Options
A: Publish a social media guidelines document.
B: Publish an acceptable usage policy.
C: Document a procedure for accessing social media sites.
D: Deliver security awareness training.
Show Answer
Correct Answer:
Publish an acceptable usage policy.
Explanation
The foundational step in governing employee behavior and the use of organizational assets is to establish a formal policy. An Acceptable Use Policy (AUP) is the high-level governance document that defines the rules and management's intent regarding how corporate resources, including devices, networks, and accounts, may be used. Before specific guidelines, procedures, or training can be developed, the organization must first formally state its position and set the rules through a policy. The AUP provides the authority and framework for all subsequent actions related to social media access.
References

1. ISC2 CISSP Official Study Guide, 9th Edition: Chapter 1, "Security and Risk Management," explains the hierarchy of governance documents. It states, "Policies are high-level documents that are signed by a person of significant authority... Policies are the first and highest level of documentation." An AUP is a type of policy that must be established before other elements. (p. 28).

2. NIST Special Publication 800-12 Rev. 1, An Introduction to Information Security: Section 4.2, "Policies, Procedures, Standards, and Guidelines," clarifies the document hierarchy. It states, "Policies are the documents that record those decisions... Procedures, standards, and guidelines are then developed to support policies." This confirms that policy creation is the initial step. (p. 31).

3. Tipton, H. F., & Krause, M. (Eds.). (2007). Information Security Management Handbook, 6th Edition. Auerbach Publications. Chapter 5, "Information Security Policy," details that policies are the cornerstone of a security program. "A policy is a formal statement... It is the foundation on which the entire security structure is built." This establishes policy as the first and most critical step. (p. 61).

Question 29

Which of the following is an indicator that a company's new user security awareness training module has been effective?
Options
A: There are more secure connections to the internal database servers.
B: More incidents of phishing attempts are being reported.
C: There are more secure connections to internal e-mail servers.
D: Fewer incidents of phishing attempts are being reported.
Show Answer
Correct Answer:
More incidents of phishing attempts are being reported.
Explanation
Effective security awareness training empowers employees to recognize potential threats, such as phishing emails, that they might have previously ignored or fallen for. A primary indicator of the training's success is an increase in the number of suspicious incidents reported by users to the security team. This demonstrates a positive change in user behavior and heightened vigilance. Users are now actively participating in the organization's defense, functioning as a human firewall. This metric directly measures the impact of the training on user actions.
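One simple way to operationalize the metric is to compare average monthly phishing reports before and after training, as in the hypothetical calculation below (the counts are made up).

```python
# Illustrative metric: phishing reports per month before and after training.
# Rising report volume suggests users are spotting what they once missed.
reports = {"before": [3, 4, 2], "after": [11, 14, 12]}

def avg(xs):
    return sum(xs) / len(xs)

change = (avg(reports["after"]) - avg(reports["before"])) / avg(reports["before"])
print(f"Reporting volume changed by {change:+.0%} after training")  # -> +311%
```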
References

1. National Institute of Standards and Technology (NIST). (2003). Special Publication 800-50, Building an Information Technology Security Awareness and Training Program.

Reference: Section 5.4, "Effectiveness Measurement," states that metrics for program effectiveness can include the "number of reported security incidents." An increase in this number following training indicates the program is working as intended.

2. Alshantti, M., & Al-Ammary, J. (2018). Measuring the Effectiveness of Information Security Awareness. International Journal of Computer Science and Network Security (IJCSNS), 18(1), 138-146.

Reference: Page 141, Table 1, Metric ID M1, "Number of security incidents reported by employees," is listed as a key performance indicator for measuring the effectiveness of an awareness program.

3. Parsons, K., McCormac, A., Butavicius, M., & Ferguson, L. (2014). Human factors and information security: Individual, social and organisational perspectives. In Proceedings of the 12th Australian Information Security Management Conference.

Reference: This academic work discusses how security awareness programs aim to change behavior. It supports the principle that a measurable change, such as an increase in user reporting of suspicious activities, is a direct indicator of a program's success. The shift from passive victim to active reporter is a key goal.

Question 30

An access control list (ACL) on a router is a feature MOST similar to which type of firewall?
Options
A: Packet filtering firewall
B: Application gateway firewall
C: Heuristic firewall
D: Stateful firewall
Show Answer
Correct Answer:
Packet filtering firewall
Explanation
An Access Control List (ACL) on a router functions by examining the headers of individual packets in isolation. It makes decisions to permit or deny traffic based on a static set of rules that match criteria such as source/destination IP addresses, protocol, and source/destination port numbers (Layers 3 and 4). This stateless inspection of each packet, without regard to any existing connection or session state, is the defining characteristic of a packet-filtering firewall. Therefore, a router ACL is the core mechanism of, and most similar to, a packet-filtering firewall.
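The stateless behavior is easy to model. In the sketch below (the rule format and addresses are illustrative, not actual router syntax), each packet is checked against an ordered rule list on header fields alone, with an implicit deny at the end and no memory of prior packets.

```python
# Minimal sketch of stateless, ACL-style packet filtering: each packet is
# matched against ordered rules on header fields alone, with no session state.
RULES = [
    # (action, protocol, src_ip, dst_ip, dst_port); "*" matches anything
    ("permit", "tcp", "*", "10.0.0.5", 443),
    ("permit", "udp", "*", "10.0.0.9", 53),
    ("deny",   "*",   "*", "*",        "*"),  # implicit deny-all at the end
]

def filter_packet(packet):
    """Return the first matching rule's action, like a router ACL."""
    for action, proto, src, dst, port in RULES:
        fields = [(proto, packet["proto"]), (src, packet["src"]),
                  (dst, packet["dst"]), (port, packet["dport"])]
        if all(rule_val in ("*", pkt_val) for rule_val, pkt_val in fields):
            return action
    return "deny"

print(filter_packet({"proto": "tcp", "src": "203.0.113.7",
                     "dst": "10.0.0.5", "dport": 443}))  # -> permit
```

A stateful firewall would additionally track whether the packet belongs to an established connection; nothing in the rule match above depends on any such state.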
References

1. (ISC)² CISSP Official Study Guide, 9th Edition: In Chapter 21, "Securing Network Communications," the section on Firewalls states, "Packet-filtering firewalls work by examining the header of every packet... This is typically done using a set of rules known as an access control list (ACL)." This directly equates the function of an ACL with that of a packet-filtering firewall.

2. NIST Special Publication 800-41 Revision 1, Guidelines on Firewalls and Firewall Policy: Section 2.1.1, "Packet Filtering Firewalls," defines this type of firewall: "A packet filtering firewall is a router... that has been configured to screen (i.e., filter) packets based on rules in an access control list (ACL)."

3. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. Chapter 8, "Security in Computer Networks," describes traditional packet filters as operating on a per-packet basis, examining fields in the IP and transport-layer headers, which is the precise function of a router ACL. It contrasts this with stateful filters that track TCP connections.

Question 31

Which of the following is the BEST way to protect privileged accounts?
Options
A: Quarterly user access rights audits
B: Role-based access control (RBAC)
C: Written supervisory approval
D: Multi-factor authentication (MFA)
Show Answer
Correct Answer:
Multi-factor authentication (MFA)
Explanation
Multi-factor authentication (MFA) is the best technical and preventive control among the options for protecting privileged accounts. It adds a critical layer of security by requiring two or more verification methods to gain access. Even if an attacker compromises an account's password (the first factor), they would still be blocked without the additional factor(s) (e.g., a hardware token, biometric data). This directly mitigates the primary threat of credential theft, which is especially critical for high-impact privileged accounts. While other options are important components of a comprehensive access control program, MFA provides the most direct and robust protection for the authentication process itself.
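To illustrate the second factor, here is a minimal sketch of verifying a time-based one-time password (TOTP, RFC 6238) alongside the password check. The shared secret and the verify_login wrapper are illustrative, not a production design.

```python
# Sketch of the "something you have" factor: a TOTP code (RFC 6238) computed
# from a shared secret and checked in addition to the password.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", int(time.time()) // step)   # time-based counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = b"demo-shared-secret"  # in practice, provisioned per user/device

def verify_login(password_ok: bool, submitted_code: str) -> bool:
    # Both factors must pass; a stolen password alone is not enough.
    return password_ok and hmac.compare_digest(submitted_code, totp(SECRET))
```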
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-53, Revision 5, Security and Privacy Controls for Information Systems and Organizations.

Control: IA-2 (1) | IDENTIFICATION AND AUTHENTICATION | NETWORK ACCESS TO PRIVILEGED ACCOUNTS.

Reference: Page 178. The control enhancement explicitly states: "Require multifactor authentication to establish a nonlocal maintenance session to a privileged account..." This underscores MFA as a required standard for protecting privileged access.

2. National Institute of Standards and Technology (NIST) Special Publication 800-171, Revision 2, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations.

Control: 3.5.3.

Reference: Page 17. The requirement states: "Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts." This establishes MFA as a fundamental requirement for securing privileged accounts.

3. Purdue University, Privileged Account Management Standard.

Section: 3.0 Standard.

Reference: Item 3 states: "Multi-factor authentication (MFA) must be used for all interactive logins to Privileged Accounts and/or Privileged Access Workstations (PAWs)." This is a direct implementation of best practices in an academic institutional standard.

Question 32

Which of the following is the FIRST step for defining Service Level Requirements (SLR)?
Options
A: Creating a prototype to confirm or refine the customer requirements
B: Drafting requirements for the service level agreement (SLA)
C: Discussing technology and solution requirements with the customer
D: Capturing and documenting the requirements of the customer
Show Answer
Correct Answer:
Capturing and documenting the requirements of the customer
Explanation
The initial and most critical step in defining Service Level Requirements (SLR) is to understand and document the customer's needs and expectations. This foundational activity, known as requirements elicitation, involves engaging with stakeholders to capture their business objectives and desired service outcomes. All subsequent activities, such as designing solutions, creating prototypes, or drafting a formal Service Level Agreement (SLA), are dependent on this initial set of documented requirements. Starting with any other step would be based on assumptions rather than the customer's actual needs, leading to a potential misalignment between the service provided and the business expectations.
References

1. ITIL® Service Design (2011 Edition), AXELOS. In Section 4.3, "Service Level Management," the process description explicitly states that the first stage is to identify and document the customer's requirements. Section 4.3.4.1, "Designing SLA structures," notes, "The first stage of the SLM process is to identify, document and agree the requirements for services with the business..." This establishes capturing customer requirements as the primary step.

2. Nuseibeh, B., & Easterbrook, S. (2000). Requirements Engineering: A Roadmap. Proceedings of the Conference on the Future of Software Engineering, 35-46. https://doi.org/10.1145/336512.336523. This foundational paper on requirements engineering outlines the process, which begins with requirements elicitationโ€”the activity of "discovering the requirements for a system by communicating with clients, customers, and other stakeholders" (Section 3.1, "Requirements Elicitation"). This principle is directly applicable to defining service level requirements.

3. MIT OpenCourseWare, 6.170 Software Studio, Spring 2013. Lecture 2: Requirements and Specifications. The course materials emphasize that the software development lifecycle begins with understanding the problem and eliciting requirements from the client. This involves interviews and observation to capture what the customer needs before any design or specification document (analogous to an SLA) is created.

Question 33

Which software defined networking (SDN) architectural component is responsible for translating network requirements?
Options
A: SDN Application
B: SDN Data path
C: SDN Controller
D: SDN Northbound Interfaces
Show Answer
Correct Answer:
SDN Controller
Explanation
The Software-Defined Networking (SDN) Controller is the centralized "brain" of the network. It resides in the control plane and is responsible for translating high-level, abstract network requirements received from the SDN applications (via northbound interfaces) into low-level, specific flow rules and configurations. The controller then communicates these instructions to the network infrastructure devices (e.g., switches, routers) in the data plane (via southbound interfaces), thereby dictating how traffic is handled. This function of translation is central to the SDN paradigm of separating the control logic from the data forwarding hardware.
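The sketch below models the translation step conceptually: an abstract intent arrives over the northbound interface and is compiled into per-switch match/action rules for the southbound interface. The intent format, rule fields, and switch names are invented for illustration and do not correspond to OpenFlow or any particular controller's API.

```python
# Conceptual sketch of the controller's translation role: high-level intent
# in, low-level flow rules out. Formats here are illustrative only.
def translate_intent(intent, switches):
    """Compile an abstract requirement into concrete match/action flow rules."""
    rules = []
    if intent["type"] == "block-host":
        for switch in switches:
            rules.append({"switch": switch,
                          "match": {"ip_src": intent["host"]},
                          "action": "drop",
                          "priority": 100})
    return rules

for rule in translate_intent({"type": "block-host", "host": "198.51.100.23"},
                             switches=["sw1", "sw2"]):
    print(rule)  # a real controller would push these via a southbound API
```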
References

1. Open Networking Foundation (ONF). (2014). SDN Architecture, Issue 1. TR-502. "The SDN Controller is a logically centralized entity that translates the requirements from the SDN Application layer down to the SDN Datapaths and provides the SDN Applications with an abstract view of the network (which may include statistics and events)." (Section 6.2, Page 10).

2. Nunes, B. A. A., Mendonca, M., Nguyen, X. N., Obraczka, K., & Turletti, T. (2014). A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks. IEEE Communications Surveys & Tutorials, 16(1), 299-336. "The control plane is implemented in a centralized controller, which acts as the brain of the network. The controller has a global view of the network and is responsible for translating high-level policies, defined by network operators at the application plane, into low-level flow rules..." (Section III.A, Page 303). DOI: https://doi.org/10.1109/SURV.2013.012213.00180

3. Kreutz, D., Ramos, F. M. V., Veríssimo, P. E., Rothenberg, C. E., Azodolmolky, S., & Uhlig, S. (2015). Software-Defined Networking: A Comprehensive Survey. Proceedings of the IEEE, 103(1), 14-76. "The SDN controller... translates these requirements into low-level commands understandable by the underlying forwarding elements." (Section III.A, Page 22). DOI: https://doi.org/10.1109/JPROC.2014.2371999

Question 34

When MUST an organization's information security strategic plan be reviewed?
Options
A: Quarterly, when the organization's strategic plan is updated
B: Whenever there are significant changes to a major application
C: Every three years, when the organization's strategic plan is updated
D: Whenever there are major changes to the business
Show Answer
Correct Answer:
Whenever there are major changes to the business
Explanation
An information security strategic plan's primary purpose is to support and align with the organization's overall business strategy, goals, and objectives. Therefore, a review of the security plan is mandatory whenever there are major changes to the business itself. Such changes, including mergers, acquisitions, divestitures, entry into new markets, or fundamental shifts in the business model, alter the organization's risk landscape, compliance obligations, and strategic priorities. The security strategy must be re-evaluated and adjusted to ensure it remains relevant, effective, and continues to enable the new business direction.
References

1. National Institute of Standards and Technology (NIST). (2011). Special Publication (SP) 800-39, Managing Information Security Risk: Organization, Mission, and Information System View.

Page 9, Section 2.2, "Risk Framing": This section emphasizes that the risk management strategy must be consistent with the organization's overall objectives and strategic goals. It states, "The risk frame establishes the context for risk-based decisions." A major change to the business fundamentally alters this context, thus necessitating a review of the risk frame and the associated security strategy.

2. Fenz, S., & Ekelhart, A. (2011). Formalizing Information Security Knowledge. Proceedings of the 44th Hawaii International Conference on System Sciences.

Page 4, Section 3.2, "Strategic Layer": The paper discusses how the strategic layer of an information security knowledge base is derived from business assets and goals. It states, "The strategic layer represents the organization's business goals... security goals are derived that support the achievement of the business goals." This direct linkage implies that a change in business goals must trigger a re-derivation and review of security goals and strategy.

DOI: https://doi.org/10.1109/HICSS.2011.139

3. University of California. (2023). Information Security Policy (IS-3).

Section III, "Policy Text", Subsection 6.0, "Risk Assessment": The policy mandates that a risk assessment must be performed "whenever there are significant changes to the Location's business or IT environment." Since the information security strategic plan is designed to manage risk in alignment with business objectives, a significant business change that triggers a risk assessment would also necessitate a review of the overarching strategy.

Question 35

A large human resources organization wants to integrate their identity management with a trusted partner organization. The human resources organization wants to maintain the creation and management of the identities and may want to share with other partners in the future. Which of the following options BEST serves their needs?
Options
A: Federated identity
B: Cloud Active Directory (AD)
C: Security Assertion Markup Language (SAML)
D: Single sign-on (SSO)
Show Answer
Correct Answer:
Federated identity
Explanation
Federated identity is an architectural model that establishes a trust relationship between two or more organizations, known as a federation. In this model, one organization acts as the Identity Provider (IdP), in this case the human resources organization, which is responsible for creating, managing, and authenticating user identities. The partner organizations act as Service Providers (SPs) or Relying Parties (RPs), trusting the authentication performed by the IdP. This directly addresses the scenario's requirements for the HR organization to maintain control over its identities while providing access to partners in a scalable manner.
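The trust relationship can be sketched as follows: the IdP signs an assertion about a user, and each SP verifies the signature against a pre-established trust anchor instead of keeping its own account store. Real federations use SAML or OpenID Connect with PKI-based signatures; the HMAC-signed JSON below is a deliberately simplified stand-in.

```python
# Simplified federation sketch: the HR organization (IdP) issues a signed
# assertion; a partner (SP) verifies it against the shared trust anchor.
import hashlib
import hmac
import json
import time

TRUST_KEY = b"idp-sp-federation-key"  # established once, at partner onboarding

def idp_issue_assertion(user_id: str) -> dict:
    claims = {"sub": user_id, "iss": "hr-org-idp", "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(TRUST_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def sp_accept(assertion: dict) -> bool:
    payload = json.dumps(assertion["claims"], sort_keys=True).encode()
    expected = hmac.new(TRUST_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(assertion["signature"], expected)

token = idp_issue_assertion("alice@hr-org.example")
print(sp_accept(token))  # -> True: the partner trusts the IdP's authentication
```

Adding a new partner means establishing one more trust anchor, not duplicating the identity store, which is why federation scales to future partners as the scenario requires.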
References

1. National Institute of Standards and Technology (NIST). (2017). NIST Special Publication 800-63-3: Digital Identity Guidelines.

Reference: Section 4.3, "Federation," page 11. The document defines federation as a process where a Credential Service Provider (CSP), acting as an Identity Provider (IdP), provides authentication and attributes to a separate Relying Party (RP). This directly describes the relationship between the HR organization and its partner.

2. Paci, F., & Sbodio, M. L. (2012). An Overview of Identity Management Systems. IBM Research Report.

Reference: Section 3, "Federated Identity Management," page 3. The report states, "Federated Identity Management (FIM) allows users from one security domain to securely access resources in another domain without needing a separate account in the target domain... This is often achieved using standards like SAML or OpenID Connect to enable Single Sign-On (SSO)." This reference clearly distinguishes federation as the model from SAML (the protocol) and SSO (the outcome).

3. University of California, Berkeley, Information Security Office. (n.d.). Identity and Access Management Definitions.

Reference: "Federation/Federated Identity" section. The definition explains that federation is a trust relationship between organizations that allows them to share identity information, enabling users from one organization to access resources at another. This aligns with the question's scenario of cross-organizational identity sharing.
