ISC2 CISSP Exam Questions 2025

Our CISSP Exam Questions deliver authentic, up-to-date content for the ISC2 Certified Information Systems Security Professional (CISSP) certification. Each question is reviewed by cybersecurity experts and includes verified answers with clear explanations to strengthen your understanding across all eight CISSP domains—from security and risk management to software development security. With access to our exam simulator, you can practice under real exam conditions and confidently prepare to pass on your first attempt.

Exam Questions

Question 1

In a multi-tenant cloud environment, what approach will secure logical access to assets?
Options
A: Hybrid cloud
B: Transparency/Auditability of administrative access
C: Controlled configuration management (CM)
D: Virtual private cloud (VPC)
Correct Answer:
Virtual private cloud (VPC)
Explanation
A Virtual Private Cloud (VPC) is a fundamental security approach for achieving logical isolation in a multi-tenant cloud environment. It allows an organization to provision a logically segregated section of a public cloud, creating a private network space. Within this VPC, the organization can define its own IP address ranges, subnets, route tables, and network gateways. This effectively creates a virtual network boundary that isolates the tenant's assets from those of other tenants, even though they may reside on the same physical hardware. This logical segregation is the primary method for securing logical access and preventing cross-tenant data exposure in an Infrastructure as a Service (IaaS) model.
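For illustration, here is a minimal sketch, assuming an AWS environment and the boto3 SDK (the region and CIDR ranges below are hypothetical), of how a tenant provisions its own logically isolated address space inside the shared cloud:

```python
# Minimal sketch (hypothetical region and CIDR ranges): provisioning a
# logically isolated VPC and a private subnet with the AWS boto3 SDK,
# assuming AWS credentials are already configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create the logically isolated network boundary for this tenant.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out a private subnet inside the tenant-defined address space.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

print(f"Created VPC {vpc_id} with subnet {subnet['Subnet']['SubnetId']}")
```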
References

1. Cloud Security Alliance. (2017). Security Guidance for Critical Areas of Focus in Cloud Computing v4.0. Domain 7: Infrastructure Security, Section 7.2, p. 89. The document states, "The virtual network provides logical isolation... This allows customers to segment their resources, not just from other customers, but also from their own resources."

2. National Institute of Standards and Technology. (2011). NIST Special Publication 500-292: NIST Cloud Computing Reference Architecture. Section 5.3.1.2, "Resource Pooling & Multi-tenancy," p. 17. This section discusses how multi-tenancy requires logical isolation of shared resources, which is the problem that VPCs are designed to solve.

3. Amazon Web Services. (2023). What is Amazon VPC?. AWS Documentation. The official documentation defines a VPC as a service that "lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define."

4. Armbrust, M., et al. (2009). Above the Clouds: A Berkeley View of Cloud Computing. University of California, Berkeley, Technical Report No. UCB/EECS-2009-28. Section 4, "Top 10 Obstacles and Opportunities for Cloud Computing," p. 8. The report discusses the obstacle of "Data Confidentiality and Auditability," for which network and machine-level isolation (as provided by a VPC) is a key solution.

Question 2

A company hired an external vendor to perform a penetration test of a new payroll system. The company’s internal test team had already performed an in-depth application and security test of the system and determined that it met security requirements. However, the external vendor uncovered significant security weaknesses where sensitive personal data was being sent unencrypted to the tax processing systems. What is the MOST likely cause of the security issues?
Options
A: Failure to perform interface testing
B: Failure to perform negative testing
C: Inadequate performance testing
D: Inadequate application level testing
Correct Answer:
Failure to perform interface testing
Explanation
The vulnerability was discovered in the data transmission between the new payroll system and the external tax processing system. This points to a failure in testing the communication link, or interface, between these two distinct systems. Interface testing is specifically designed to verify that data is exchanged correctly and securely between different software components or systems. The internal team likely focused on the application's internal functions and security, but overlooked the security of the data in transit to an external entity, which is the primary goal of interface testing.
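As a rough sketch of what such an interface test could look like (the tax-processor endpoint below is hypothetical, and real interface testing would cover far more than transport encryption), a check like this would have flagged a plaintext link before any sensitive records crossed it:

```python
# Minimal interface-test sketch (hypothetical endpoint): verify that the
# payroll system's outbound link to the tax processor is encrypted.
import socket
import ssl
from urllib.parse import urlparse

TAX_PROCESSOR_URL = "https://tax-processor.example.com/submit"  # assumed endpoint

def test_tax_interface_uses_tls():
    parsed = urlparse(TAX_PROCESSOR_URL)
    # The interface must not be configured for plaintext HTTP.
    assert parsed.scheme == "https", "tax interface must use HTTPS"

    # Confirm a TLS session can actually be negotiated with the peer.
    context = ssl.create_default_context()
    with socket.create_connection((parsed.hostname, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
            assert tls.version() is not None  # e.g., "TLSv1.3"

if __name__ == "__main__":
    test_tax_interface_uses_tls()
```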
References

1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-115, Technical Guide to Information Security Testing and Assessment.

Reference: Section 3.5, "Application Security Testing," discusses the need to test all components of an application, including its interfaces with other systems. It notes that security testing should "verify that the application properly enforces security for both valid and invalid operations" and that this includes how it communicates with other services. The described scenario is a failure in this specific area.

2. Saltzer, J. H., & Schroeder, M. D. (1975). The Protection of Information in Computer Systems. Proceedings of the IEEE, 63(9), 1278–1308.

Reference: Section I.A.3, "Principle of Least Privilege," and Section I.A.5, "Principle of Complete Mediation." While not a direct definition of interface testing, these foundational security principles, taught in university curricula, imply that every access and data exchange between systems (an interface) must be validated. The failure to encrypt data at the interface violates the principle of protecting data as it crosses trust boundaries. (DOI: https://doi.org/10.1109/PROC.1975.9939)

3. University of Toronto, Department of Computer Science. (2018). CSC301: Introduction to Software Engineering, Lecture 11 - Software Testing.

Reference: Slide 21, "Integration Testing." The lecture material defines integration testing as testing the interfaces between components. It distinguishes between "Big Bang" and incremental approaches. This academic source establishes that testing interfaces between system components is a distinct and critical phase of software testing. The scenario highlights a failure in this specific phase.

Question 3

Which of the following is the MOST effective method of detecting vulnerabilities in web-based applications early in the secure Software Development Life Cycle (SDLC)?
Options
A: Web application vulnerability scanning
B: Application fuzzing
C: Code review
D: Penetration testing
Correct Answer:
Code review
Explanation
Code review, which includes both manual inspection and automated Static Application Security Testing (SAST), is the most effective method for detecting vulnerabilities early in the SDLC. It is performed during the development/implementation phase directly on the source code before the application is compiled or deployed. This "shift left" approach allows developers to identify and remediate security flaws, such as injection vulnerabilities or improper error handling, at the earliest and least expensive point in the lifecycle. The other options are dynamic testing methods that require a running application, placing them later in the SDLC.
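To make the "shift left" idea concrete, here is a minimal static-analysis sketch in the spirit of automated code review: it inspects source code without ever running the application, flagging calls such as eval() and exec() for reviewer attention. The file path is supplied by the caller and the rule set is deliberately tiny; real SAST tools apply far richer rules.

```python
# Minimal static-analysis sketch: walk a source file's AST and flag calls to
# eval()/exec(), the kind of finding a code review or SAST pass surfaces
# before the application is ever deployed.
import ast
import sys

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(path):
    """Return (line, name) pairs for risky built-in calls found in the file."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    for lineno, name in find_risky_calls(sys.argv[1]):
        print(f"line {lineno}: call to {name}() should be reviewed")
```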
References

1. ISC2 CISSP Official Study Guide (9th ed.). (2021). Chapter 21: Secure Software Development. pp. 898-899. The text explicitly places code review and static code analysis within the "Software Development and Coding" phase, emphasizing its role in early detection before testing begins.

2. NIST Special Publication 800-218. (Feb 2022). Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities. Section 4, Practice PW.5. This practice, "Review All Code," states, "The software producer reviews all code to identify vulnerabilities and verify compliance with security requirements... This can be accomplished through manual and/or automated means." This is a core practice applied to the code artifact itself.

3. OWASP Foundation. (2021). OWASP Software Assurance Maturity Model (SAMM) v2.0. Design - Security Testing, Stream B: Application Testing. The model shows Static Application Security Testing (SAST), an automated form of code review, as a foundational activity that can be integrated directly into the CI/CD pipeline during the build process, far earlier than dynamic testing or penetration testing.

4. Kissel, R., Stine, K., et al. (Oct 2008). NIST Special Publication 800-115: Technical Guide to Information Security Testing and Assessment. Section 5-2. The document distinguishes between code review (a static analysis technique) and security testing techniques like penetration testing and vulnerability scanning, which require an operational system.

Question 4

A malicious user gains access to unprotected directories on a web server. Which of the following is MOST likely the cause for this information disclosure?
Options
A: Security misconfiguration
B: Cross-site request forgery (CSRF)
C: Structured Query Language injection (SQLi)
D: Broken authentication management
Correct Answer:
Security misconfiguration
Explanation
Security misconfiguration is the most likely cause. This vulnerability category encompasses failures to implement all appropriate security controls for a server or web application, or the incorrect configuration of those controls. An "unprotected directory" is a classic example, where the web server is misconfigured to allow directory listing or has improper file system permissions, leading to unauthorized access and information disclosure. This is a direct failure in securing the server's configuration, rather than a flaw in application logic or authentication mechanisms.
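As a simple illustration (the target URL is hypothetical, and a real assessment would use a proper scanner with authorization), the following sketch probes a directory path and looks for the telltale auto-generated index page that indicates directory listing was left enabled:

```python
# Minimal sketch (hypothetical target URL): probe a directory path and look
# for signatures of an auto-generated index page, which would indicate that
# directory listing was left enabled on the web server.
import urllib.request

TARGET = "https://intranet.example.com/uploads/"  # assumed path to check

def directory_listing_enabled(url: str) -> bool:
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read(4096).decode("utf-8", errors="ignore")
    # Common signatures emitted by Apache/nginx auto-index pages.
    return "Index of /" in body or "<title>Directory listing" in body

if __name__ == "__main__":
    if directory_listing_enabled(TARGET):
        print("Directory listing appears enabled - review server configuration.")
```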
References

1. OWASP Foundation. (2021). OWASP Top 10:2021. A05:2021-Security Misconfiguration. The description explicitly includes "directory listing is not disabled on the server" as a common example of this vulnerability. (Reference: owasp.org/Top10/A052021-SecurityMisconfiguration/)

2. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (NIST Special Publication 800-53, Revision 5). Control CM-7 "Least Functionality" requires that the organization "configures the information system to provide only essential capabilities," which includes disabling functions like directory listing. A failure to do so is a configuration management failure. (Page 138, Control CM-7).

3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Pearson Education. Chapter 8, "Web Security," discusses how improper server configuration is a primary source of web vulnerabilities, distinct from injection attacks or authentication flaws. (Section 8.3, "Web Server Vulnerabilities").

Question 5

Which of the following security objectives for industrial control systems (ICS) can be adapted to securing any Internet of Things (IoT) system?
Options
A: Prevent unauthorized modification of data.
B: Restore the system after an incident.
C: Detect security events and incidents.
D: Protect individual components from exploitation.
Correct Answer:
Protect individual components from exploitation
Explanation
While all listed options are valid security objectives for both Industrial Control Systems (ICS) and Internet of Things (IoT) systems, protecting individual components is the most foundational and universally adaptable principle. The nature of IoT involves a massive number of distributed, often physically accessible, and resource-constrained devices. The security of the entire IoT ecosystem fundamentally relies on the security of each individual component (the "thing"). If a component is exploited, higher-level objectives like data integrity, system restoration, and event detection are compromised. This principle is directly inherited from ICS security, where protecting individual controllers (e.g., PLCs, RTUs) is a critical objective.
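One concrete component-level control inherited from ICS practice (in the spirit of the SI-7 integrity control cited below) is verifying a firmware image before it is installed on a device. The sketch below is illustrative only; the file name and expected digest are placeholders, and real deployments typically rely on vendor-signed images rather than a hand-maintained hash.

```python
# Minimal sketch of a component-level integrity check: verify a firmware
# image's SHA-256 digest against a vendor-published value before it is
# flashed to a device. File name and digest are hypothetical placeholders.
import hashlib

EXPECTED_SHA256 = "0" * 64  # placeholder; use the vendor-published digest

def firmware_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = firmware_digest("plc_firmware_v2.bin")  # hypothetical image
    if digest != EXPECTED_SHA256:
        raise SystemExit("Firmware image failed integrity check - do not install.")
    print("Firmware image verified.")
```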
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-82 Rev. 2, Guide to Industrial Control Systems (ICS) Security. Section 3.2, "ICS Security Program Development," outlines recommended security controls. Control family System and Information Integrity (SI), specifically SI-7 "Software, Firmware, and Information Integrity," and the general principle of defense-in-depth emphasize protecting individual system components from unauthorized changes.

2. National Institute of Standards and Technology (NIST) Internal Report (NISTIR) 8259A, IoT Device Cybersecurity Capability Core Baseline. This document establishes a baseline of security capabilities for IoT devices. The capabilities listed, such as Device Identification (Section 3.1), Device Configuration (Section 3.2), and Software Update (Section 3.5), are all focused on securing and managing the individual component to protect it from exploitation.

3. Al-Garadi, M. A., Mohamed, A., Al-Ali, A. K., Du, X., Ali, I., & Guizani, M. (2020). A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security. IEEE Communications Surveys & Tutorials, 22(3), 1646-1685. DOI: 10.1109/COMST.2020.2988293. This survey discusses the convergence of security challenges in ICS and IoT, noting that "the first line of defense for IoT systems is to secure the IoT devices themselves" (Section II.A). This highlights the foundational importance of component-level protection.

Question 6

Wi-Fi Protected Access 2 (WPA2) provides users with a higher level of assurance that their data will remain protected by using which protocol?
Options
A: Secure Shell (SSH)
B: Internet Protocol Security (IPsec)
C: Secure Sockets Layer (SSL)
D: Extensible Authentication Protocol (EAP)
Correct Answer:
Extensible Authentication Protocol (EAP)
Explanation
Wi-Fi Protected Access 2 (WPA2), in its more secure Enterprise mode, implements the IEEE 802.1X standard for port-based network access control. This standard utilizes the Extensible Authentication Protocol (EAP) as its authentication framework. EAP provides a standardized transport for authentication messages between a client device (supplicant), the wireless access point (authenticator), and a central authentication server (e.g., RADIUS). This architecture allows for the use of various strong authentication methods, such as certificates (EAP-TLS) or credentials, providing a significantly higher level of assurance and centralized user management compared to the pre-shared key (PSK) model.
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-97, Establishing Wireless Robust Security Networks: A Guide to IEEE 802.11i, February 2007. Section 3.1, "IEEE 802.1X Port-Based Access Control," states, "IEEE 802.1X uses the Extensible Authentication Protocol (EAP) [RFC 3748] to exchange authentication messages between the supplicant and the authentication server."

2. IEEE Std 802.11™-2020, IEEE Standard for Information Technology--Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks--Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. Clause 12.7.2, "AKM suite selector definitions," defines Authentication and Key Management (AKM) suites, including those based on IEEE 802.1X, which is the mechanism that employs EAP.

3. Carnegie Mellon University, Software Engineering Institute (SEI), Securely Deploying 802.11 Wireless Networks with Microsoft Windows, January 2009. Page 11, Section 3.2.2, "WPA2-Enterprise," states, "WPA2-Enterprise uses 802.1X/EAP for authentication. With 802.1X/EAP, a user must authenticate to the network before being granted access."

Question 7

A software development company has a short timeline in which to deliver a software product. The software development team decides to use open-source software libraries to reduce the development time. What concept should software developers consider when using open-source software libraries?
Options
A: Open source libraries contain known vulnerabilities, and adversaries regularly exploit those vulnerabilities in the wild.
B: Open source libraries can be used by everyone, and there is a common understanding that the vulnerabilities in these libraries will not be exploited.
C: Open source libraries are constantly updated, making it unlikely that a vulnerability exists for an adversary to exploit.
D: Open source libraries contain unknown vulnerabilities, so they should not be used.
Correct Answer:
Open source libraries contain known vulnerabilities, and adversaries regularly exploit those vulnerabilities in the wild.
Explanation
The primary security concern when incorporating open-source software (OSS) is managing the risk of inherited vulnerabilities. OSS components, like any software, can contain flaws. Because these libraries are widely used, a single discovered vulnerability can affect thousands of applications, making them a high-value target for adversaries. Security frameworks like the OWASP Top 10 specifically highlight "Vulnerable and Outdated Components" as a critical risk. Therefore, developers must implement processes, such as Software Composition Analysis (SCA), to identify, track, and remediate known vulnerabilities in the third-party libraries they use.
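A bare-bones sketch of the SCA idea follows. The advisory data and package names are entirely hypothetical; in practice, tooling such as pip-audit or a commercial SCA platform queries live vulnerability databases rather than a hard-coded list.

```python
# Minimal SCA-style sketch: compare pinned dependencies against a list of
# known-vulnerable versions. The advisory data below is hypothetical; real
# tooling queries live vulnerability databases.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-2024-0001 - remote code execution",
}

REQUIREMENTS = [
    ("examplelib", "1.2.0"),
    ("otherlib", "3.4.1"),
]

def audit(requirements):
    """Return a human-readable finding for each pinned, known-vulnerable package."""
    findings = []
    for name, version in requirements:
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings

if __name__ == "__main__":
    for finding in audit(REQUIREMENTS):
        print("VULNERABLE:", finding)
```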
References

1. National Institute of Standards and Technology (NIST). (2022). Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities (NIST Special Publication 800-218).

Section/Page: Practice PW.5, "Acquire and use only securely developed third-party components." Page 13 states, "A component with known vulnerabilities could be exploited by attackers to compromise the software, so it is important to know which components are used in the software and which vulnerabilities have been identified in those components."

2. OWASP Foundation. (2021). OWASP Top 10:2021.

Section/Page: A06:2021 – Vulnerable and Outdated Components. The document states, "You are likely vulnerable... If you do not know the versions of all components you use (both client-side and server-side). This includes components you directly use as well as nested dependencies... If you do not scan for vulnerabilities regularly and subscribe to security bulletins related to the components you use." This directly supports the idea that known vulnerabilities in components are a major risk.

3. Healy, J. C., & Mylopoulos, J. (2002). Requirements and Early-Phase Software Engineering. In van der Hoek, A. (Ed.), University of California, Irvine, Informatics 125 course materials.

Section/Page: In discussions on Non-Functional Requirements (NFRs) for security, course materials often reference the need to manage dependencies. The principle is that using third-party components, including open-source, means inheriting their security posture. The system's security is dependent on the security of its weakest component, which could be an unpatched open-source library. This is a foundational concept in secure software engineering taught in university curricula.

Question 8

According to the (ISC)² ethics canon “act honorably, honestly, justly, responsibly, and legally," which order should be used when resolving conflicts?
Options
A: Public safety and duties to principals, individuals, and the profession
B: Individuals, the profession, and public safety and duties to principals
C: Individuals, public safety and duties to principals, and the profession
D: The profession, public safety and duties to principals, and individuals
Correct Answer:
Public safety and duties to principals, individuals, and the profession
Explanation
The Preamble to the (ISC)² Code of Ethics establishes a clear order of priority for resolving conflicts among the four canons. The first and most important canon is to "Protect society, the common good, necessary public trust and confidence, and the infrastructure." This principle, broadly defined as public safety, takes precedence over all other obligations. Duties to principals (employers/clients) and the profession follow in priority. Therefore, when a conflict arises, the professional's primary duty is to the public, followed by their principal, and finally to the profession itself. Option A is the only choice that reflects this mandated hierarchy.
References

1. (ISC)². (2024). ISC2 Code of Ethics. Preamble. The document states, "The canons, in the order of their priority, are: 1. Protect society... 2. Act honorably... 3. Provide diligent and competent service to principals. 4. Advance and protect the profession." It further clarifies, "Therefore, any conflict between these canons should be resolved in the order of the canons."

2. Stewart, J. M., Chapple, M., & Gibson, D. (2021). Official (ISC)2 CISSP CBK Reference (6th ed.). Sybex. In Domain 1: Security and Risk Management, the section "Understand, Adhere to, and Promote Professional Ethics" explicitly discusses the hierarchy of the canons, emphasizing that the duty to protect society (the first canon) is paramount.

3. HHS.gov, Office for Human Research Protections. (n.d.). The Belmont Report. While not an (ISC)² source, this foundational U.S. government document on ethics in research establishes the principle of beneficence (do no harm, maximize benefits), which aligns with the CISSP ethic of prioritizing public safety above other concerns. This principle is a cornerstone of ethical frameworks taught in university-level programs. (Section C: Applications, Paragraph 1).

Question 9

When conducting a remote access session using Internet Protocol Security (IPSec), which Open Systems Interconnection (OSI) model layer does this connection use?
Options
A: Transport
B: Network
C: Data link
D: Presentation
Correct Answer:
Network
Explanation
Internet Protocol Security (IPSec) is a protocol suite designed to secure Internet Protocol (IP) communications. It operates at the Network Layer (Layer 3) of the Open Systems Interconnection (OSI) model, the same layer as IP itself. IPSec functions by authenticating and/or encrypting each IP packet in a data stream, adding its own security headers (Authentication Header - AH, or Encapsulating Security Payload - ESP) to the original IP packet. This process is transparent to the upper layers (Transport, Application), which are unaware that the underlying communication is being secured at the network level.
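The conceptual sketch below (labels only, no real cryptography; field contents are illustrative) shows the ESP tunnel-mode packet layout, which makes the layering argument visible: the entire original IP packet, including its transport segment, is wrapped inside new Layer 3 structures.

```python
# Conceptual sketch only (no real encryption): the field ordering of an ESP
# tunnel-mode packet, showing that IPsec wraps the *entire* original IP
# packet - i.e., it operates at the network layer, beneath transport data.
from dataclasses import dataclass

@dataclass
class EspTunnelPacket:
    outer_ip_header: bytes         # new IP header added by the IPsec gateway
    esp_header: bytes              # SPI + sequence number
    encrypted_inner_packet: bytes  # original IP header + TCP/UDP segment + ESP trailer
    integrity_check_value: bytes   # ICV protecting the ESP header and payload

    def to_wire(self) -> bytes:
        return (self.outer_ip_header + self.esp_header
                + self.encrypted_inner_packet + self.integrity_check_value)

# Illustrative placeholder contents; real values come from the SA and cipher.
packet = EspTunnelPacket(b"OUTER_IP", b"ESP_HDR", b"ENC(INNER_IP+TCP+DATA)", b"ICV")
print(len(packet.to_wire()), "bytes on the wire")
```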
References

1. National Institute of Standards and Technology (NIST) Special Publication 800-77, Guide to IPsec VPNs. Section 2.1, "IPsec Overview," states: "IPsec is a suite of protocols for securing IP communications at the network layer by authenticating and/or encrypting each IP packet in a data stream."

2. Internet Engineering Task Force (IETF) RFC 4301, Security Architecture for the Internet Protocol. Section 1.1, "Security Services," states: "IPsec is designed to provide security services at the IP layer, enabling it to protect a variety of higher-level protocols..." The IP layer corresponds to the Network Layer of the OSI model.

3. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Chapter 8, "Security in Computer Networks," explicitly categorizes IPSec as a network-layer security protocol in Section 8.7, "Network-Layer Security: IPsec and Virtual Private Networks." This is a standard textbook in university computer science curricula.

Question 10

Which of the following types of web-based attack is happening when an attacker is able to send a well-crafted, malicious request to an authenticated user without the user realizing it?
Options
A: Cross-Site Scripting (XSS)
B: Cross-Site Request Forgery (CSRF)
C: Cross injection
D: Broken Authentication and Session Management
Correct Answer:
Cross-Site Request Forgery (CSRF)
Explanation
Cross-Site Request Forgery (CSRF) is an attack that tricks an authenticated user's browser into submitting a forged, malicious request to a trusted website. The web application processes this request because it is accompanied by the user's valid session credentials (e.g., cookies), thus performing an action on behalf of the user without their consent or knowledge. The attack's success relies on the user having an active session with the vulnerable application, and the application's inability to distinguish a legitimate request from a forged one initiated by a different site.
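A standard countermeasure is the synchronizer-token pattern: the server issues a secret token per session, embeds it in legitimate forms, and rejects any state-changing request that does not echo it back. The Flask sketch below is a minimal, hypothetical illustration (route and field names are assumptions); production applications normally rely on framework middleware such as Flask-WTF rather than hand-rolled checks.

```python
# Minimal sketch of the synchronizer-token CSRF defense in Flask. Route and
# field names are hypothetical; production apps normally use framework
# middleware (e.g., Flask-WTF) rather than hand-rolled checks.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)  # regenerated per run; fine for a demo

@app.get("/transfer")
def transfer_form():
    # Issue a per-session token and embed it in the legitimate form.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return (
        '<form method="post" action="/transfer">'
        f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'
        '<button>Send</button></form>'
    )

@app.post("/transfer")
def do_transfer():
    # A forged cross-site request carries the victim's cookies but cannot
    # read or guess this token, so the comparison fails and we reject it.
    submitted = request.form.get("csrf_token", "")
    if not secrets.compare_digest(submitted, session.get("csrf_token", "")):
        abort(403)
    return "Transfer accepted"
```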
References

1. The Open Web Application Security Project (OWASP). (n.d.). Cross-Site Request Forgery (CSRF). OWASP Cheat Sheet Series. Retrieved from https://cheatsheetseries.owasp.org/cheatsheets/Cross-SiteRequestForgeryPreventionCheatSheet.html. In the introduction, it defines CSRF as "an attack that forces an end user to execute unwanted actions on a web application in which they’re currently authenticated."

2. Zeldovich, N., & Kaashoek, F. (2014). 6.858 Computer Systems Security, Fall 2014 - Lecture 16: Web security. MIT OpenCourseWare. Retrieved from https://ocw.mit.edu/courses/6-858-computer-systems-security-fall-2014/resources/mit6858f14lec16/. Slide 21 defines CSRF: "Malicious web site causes user’s browser to send a request to an honest site, using the user’s credentials (cookies) for that honest site."

3. Johns, M. (2008). Breaking the Web's Cookie Jar: Cross-Site Request Forgery and its mitigation. In Sicherheit 2008: Sicherheit, Schutz und Zuverlässigkeit. Lecture Notes in Informatics (LNI), P-128. Page 231. This academic paper states, "Cross-Site Request Forgery (CSRF) is a form of attack where a web site, email, or program causes a user’s web browser to perform an unwanted action on a trusted site."

Total Questions: 1,486
Last Update Check: September 25, 2025
Online Simulator & PDF Downloads
50,000+ Students Helped So Far
Price: $30.00 (regular $65.00, 54% off)
Rated 4.89 out of 5 (56 reviews)

Instant Download & Simulator Access

Secure SSL Encrypted Checkout

100% Money Back Guarantee

What Users Are Saying:

Rated 5 out of 5

“The practice questions were spot on. Felt like I had already seen half the exam. Passed on my first try!”

Sarah J. (Verified Buyer)

Download Free Demo PDF | Free CISSP Practice Test