Free Practice Test

Free CC Exam Questions

Get ready for your Certified in Cybersecurity exam with our free, accurate, and 2025-updated questions.

Cert Empire is committed to providing the best and latest exam questions for those preparing for the ISC2 CC exam. To assist students, we’ve made some of our CC exam prep resources free. You can get plenty of practice with our Free CC Practice Test.

Question 1

Which of the following system hardening techniques involves reducing the attack surface by removing unnecessary software and services?

Options
A: Security configuration management
B: Least privilege principle
C: Patch management
D: Reducing the number of elements of a system

Show Answer
Correct Answer:
D. Reducing the number of elements of a system
Explanation
System hardening aims to secure a system by reducing its vulnerability. A primary method for achieving this is by minimizing the attack surface, which is the sum of all potential entry points for an attacker. The technique of "reducing the number of elements of a system" directly accomplishes this by removing any software, services, user accounts, or open network ports that are not essential for the system's function. Each removed element eliminates a potential vector for attack, thereby simplifying security management and strengthening the system's overall defensive posture. This principle is also known as providing the "least functionality."
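For a concrete feel of what "reducing the number of elements" looks like in practice, here is a minimal sketch, assuming a systemd-based Linux host and a hypothetical allowlist of required services; anything enabled beyond that list is a candidate for removal or disabling.

```python
import subprocess

# Hypothetical allowlist: the only services this server is meant to run.
REQUIRED_SERVICES = {"sshd.service", "nginx.service"}

# Ask systemd which service units are enabled to start at boot.
result = subprocess.run(
    ["systemctl", "list-unit-files", "--type=service", "--state=enabled", "--no-legend"],
    capture_output=True, text=True, check=True,
)
enabled = {line.split()[0] for line in result.stdout.splitlines() if line.strip()}

# Every enabled-but-unneeded service is an element that enlarges the attack surface.
for service in sorted(enabled - REQUIRED_SERVICES):
    print(f"Candidate for removal or disabling: {service}")
```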
Why Incorrect Options are Wrong

A. Security configuration management is the overall process of establishing and maintaining secure settings, which includes reducing elements, but it is not the specific technique itself.

B. The least privilege principle is an access control concept that grants users or processes only the minimum permissions necessary; it does not involve removing system components.

C. Patch management is the process of applying updates to fix vulnerabilities in existing software, rather than removing the software or services.

References

1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-123: Guide to General Server Security. Section 3.2, "Server Hardening," Paragraph 1. "One of the primary principles of server hardening is to provide only the minimum necessary functionality... This involves removing all unneeded software, services, and utilities from the server."

2. National Institute of Standards and Technology (NIST). (2020). Special Publication 800-53 Revision 5: Security and Privacy Controls for Information Systems and Organizations. Control Family: Configuration Management, Control ID: CM-7, "Least Functionality." The control requires organizations to "[configure] the system to provide only essential capabilities" and "[prohibit] or [restrict] the use of... functions, ports, protocols, and/or services."

3. Saltzer, J. H., & Schroeder, M. D. (1975). The Protection of Information in Computer Systems. Proceedings of the IEEE, 63(9), 1278–1308. https://doi.org/10.1109/PROC.1975.9939. This foundational paper discusses the principle of "Economy of mechanism," which supports keeping system design as simple and small as possible, aligning with the concept of reducing elements to improve security.

Question 2

Which of the following principles states that individuals should be held to a standard of doing what a reasonable person would do under similar circumstances?
Options
A: Separation of duties
B: Due diligence
C: Due care
D: Least privilege
Show Answer
Correct Answer:
Due care
Explanation
Due care is the legal and ethical principle that describes the standard of conduct expected of a reasonable person under specific circumstances. In information security, it means taking the necessary, ongoing actions to protect assets and mitigate risks. This standard requires individuals and organizations to act prudently and responsibly to avoid causing harm or loss, which directly aligns with the "reasonable person" test mentioned in the question.
Why Incorrect Options are Wrong

A. Separation of duties is a security control that divides a critical task among multiple individuals to prevent fraud or error, not a standard of conduct.

B. Due diligence refers to the preparatory investigation and research conducted before taking an action to identify potential risks and liabilities.

D. Least privilege is an access control principle that ensures users are only granted the minimum level of access necessary to perform their job functions.

References

1. Cornell Law School, Legal Information Institute (LII). "Due Care." The LII, a reputable academic source, defines due care as: "The degree of care that a reasonable person would exercise under the same or similar circumstances." This provides the foundational legal definition.

Source: https://www.law.cornell.edu/wex/duecare

2. NIST Special Publication 800-161 Revision 1, Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations. This official publication distinguishes between the two key concepts.

Section 2.3.2, "Due Diligence and Due Care," states: "Due care is the prudent and responsible execution of the duties and responsibilities associated with a given role or position." This directly supports the concept of ongoing, reasonable action.

2. (ISC)². Official (ISC)² Guide to the CISSP CBK. 6th Edition. CRC Press, 2022. This is an official vendor document for a foundational cybersecurity certification whose concepts are shared with the CC.

Chapter 3, "Security Governance Principles," defines due care as "the standard of care that a reasonable person is expected to exercise in all activities that could potentially harm others." It explicitly contrasts this with due diligence, which is defined as the "process of investigation."

Question 3

What is the primary objective of a Business Continuity Plan (BCP) in the context of incident response, business continuity, and disaster recovery concepts?

Options
A: To ensure the organization can continue to operate during and after a disaster or major incident
B: To focus solely on preventing incidents from occurring
C: To avoid implementing any recovery strategies
D: To disregard the need for a coordinated response to a major incident

Show Answer
Correct Answer:
A. To ensure the organization can continue to operate during and after a disaster or major incident
Explanation
The primary objective of a Business Continuity Plan (BCP) is to ensure that an organization's critical business functions can be maintained or restored in a timely manner during and after a disruptive event. The BCP outlines the procedures and instructions an organization must follow to continue operating. It focuses on the business processes and how to keep them running, distinguishing it from a Disaster Recovery Plan (DRP), which focuses more on restoring IT infrastructure and data after a disaster. The ultimate goal is to minimize operational downtime and the financial impact of the disruption.
Why Incorrect Options are Wrong

B. Focusing solely on prevention is the domain of risk management and security controls, not business continuity, which plans for events that have already occurred.

C. A BCP is fundamentally composed of recovery strategies for critical business processes; avoiding them would defeat its entire purpose.

D. A BCP is a core component of a coordinated response, providing the framework and procedures needed to manage a major incident effectively.

References

1. National Institute of Standards and Technology (NIST). (2010). Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Section 2.2, Business Continuity Plan (BCP), Page 7. "The BCP focuses on sustaining an organization's mission/business processes during and after a disruption."

2. International Organization for Standardization. (2019). ISO 22301:2019 Security and resilience – Business continuity management systems – Requirements. Section 1, Scope. The standard specifies requirements to "plan, establish, implement, operate, monitor, review, maintain and continually improve a documented management system to protect against, reduce the likelihood of occurrence, prepare for, respond to, and recover from disruptive incidents when they arise."

3. Whitman, M. E., & Mattord, H. J. (2019). Principles of Information Security (6th ed.). Cengage Learning. Chapter 5, "Planning for Contingencies." The text defines business continuity planning as the process that "ensures that critical business functions can continue if a disaster occurs."

Question 4

What type of factor is a callback to a mobile phone?
Options
A: Somewhere you are
B: Something you are
C: Something you have
D: Something you know
Show Answer
Correct Answer:
Something you have
Explanation
Authentication factors are categorized based on how they prove an identity. A callback to a mobile phone is a method that verifies the user is in possession of a specific, pre-registered device. The mobile phone is a physical object that the user possesses. Therefore, this method falls under the "Something you have" category. This is a form of out-of-band authentication where the possession of the communication device (the phone) is the factor being validated.
Why Incorrect Options are Wrong

A. Somewhere you are: This is incorrect because it refers to authentication based on the user's physical location (geolocation), not their possession of an object.

B. Something you are: This is incorrect as it pertains to inherent biological traits (biometrics) like a fingerprint or iris scan, not a physical device.

D. Something you know: This is incorrect because it refers to secret information like a password or PIN, not a tangible item that the user possesses.


References

1. National Institute of Standards and Technology (NIST). (June 2017). Special Publication (SP) 800-63B: Digital Identity Guidelines: Authentication and Lifecycle Management.

Section 4.2.3, "Out-of-Band Authenticators," Page 21: This section describes authenticators that use a communication channel separate from the primary one (e.g., a phone call). It explicitly states, "The out-of-band device is a 'something you have' factor." A mobile phone used for a callback is a classic example of such a device.

2. Ometov, A., et al. (2018). "A Survey on Multi-Factor Authentication for the Internet of Things." Sensors, 18(1), 175.

Section 2.1, "Authentication Factors," Paragraph 3: The authors define the possession factor: "The possession factor (something you have) implies that a user has a certain item in his/her possession, e.g., a smart card, a mobile phone, or a physical key." This peer-reviewed article directly classifies a mobile phone as a "something you have" factor.

DOI: https://doi.org/10.3390/s18010175

3. University of California, Berkeley. (Fall 2020). CS 161: Computer Security, Lecture 10: Authentication.

Slide 10, "Factors of Authentication": The course material categorizes authentication factors and provides examples. Under the "Something you have" category, it lists "Physical key," "Smartcard," and "Cell phone (for 2FA)," confirming that a mobile phone used in an authentication process is considered a possession factor.

Question 5

Which of the following documents establishes context and sets out strategic direction and priorities?

Options
A: Regulations
B: Standards
C: Procedures
D: Policies

Show Answer
Correct Answer:
D. Policies
Explanation
Policies are high-level, formal documents that establish management's intent, expectations, and strategic direction for security within an organization. They define the scope of the security program, assign responsibilities, and state the organization's position on specific issues. By setting these overarching goals and principles, policies provide the necessary context and authority for the creation of more detailed standards, procedures, and guidelines. They answer the "what" and "why" of security, thereby setting the strategic priorities for the entire enterprise.
Why Incorrect Options are Wrong

A. Regulations: These are mandatory requirements imposed by external governmental or legal bodies, not an organization's internally developed strategic direction.

B. Standards: These are mandatory, specific requirements for technology or processes that support policies; they are tactical, not strategic.

C. Procedures: These are detailed, step-by-step instructions for performing a task; they are operational and represent the lowest level of documentation.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-12 Revision 1, An Introduction to Information Security. Section 4.1, "Policy, Standards, and Practices," states: "Policies are the high-level documents that set the strategic direction, course, and tone for an organization's security program." (Page 27, Paragraph 2).

2. National Institute of Standards and Technology (NIST) Special Publication 800-100, Information Security Handbook: A Guide for Managers. Section 2.2, "Security Policy," describes policy as the "foundation of a security program" and notes that it "sets the strategic direction for security." (Page 10, Paragraph 1).

3. University of California, Berkeley, Information Security Office, Policy Program. The documentation on "Policy, Standard, Guideline, and Procedure Definitions" states: "A policy is a statement of intent and is implemented as a procedure or protocol. Policies are the 'what' and the 'why'." This aligns with the strategic, context-setting role of policies.

Question 6

Which of the following security measures is most effective in protecting PII stored on a laptop in case of theft?
Options
A: Regularly updating antivirus software
B: Using strong passwords
C: Enabling a firewall
D: Full-disk encryption
Show Answer
Correct Answer:
Full-disk encryption
Explanation
Full-disk encryption (FDE) is the most effective security measure for protecting Personally Identifiable Information (PII) on a stolen laptop. FDE encrypts the entire contents of the storage drive, rendering the data unreadable to anyone without the correct authentication key (e.g., a password or PIN). In the event of theft, an attacker with physical possession of the laptop cannot bypass the operating system's login screen or remove the hard drive to access the files on another machine. The PII remains confidential and inaccessible because it is cryptographically protected at rest.
Why Incorrect Options are Wrong

A. Regularly updating antivirus software: Antivirus software protects against malware infections while the system is running; it offers no protection against data access if the device is stolen and the drive is accessed directly.

B. Using strong passwords: A strong OS password can be bypassed by an attacker with physical access, for example, by booting from an external drive or by removing the storage drive and mounting it in another computer.

C. Enabling a firewall: A firewall protects a device from unauthorized network traffic. It is irrelevant to protecting data stored locally on a device that has been physically stolen.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-111, Guide to Storage Encryption Technologies for End User Devices. Section 2.1, "Threats," explicitly lists "Loss or theft of the device" as a primary threat. The document states, "If an unencrypted device is lost or stolen, the data on it is completely accessible to whomever has the device." Section 3.1, "Full Disk Encryption," is presented as the primary solution to this threat.

2. Microsoft Documentation, BitLocker overview. The official documentation for Microsoft's FDE solution states, "BitLocker helps mitigate unauthorized data access by enhancing file and system protections. BitLocker helps render data inaccessible when BitLocker-protected computers are decommissioned or recycled." This directly addresses the scenario where the device is no longer in the authorized user's possession, such as in a theft.

3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). This is a standard textbook used in university computer science curricula. Chapter 5.4, "Encryption," discusses its use in protecting stored data: "Encryption is a primary way to protect data in storage... Full disk encryption... means that the entire disk, including all system and user files, is encrypted. A user must enter a password to boot the computer, and that password decrypts the disk." This highlights its effectiveness against physical access threats.

Question 7

What is the cloud computing model where customers share computing infrastructure without knowing each other's identity?

Options
A: Community cloud
B: Private cloud
C: Shared cloud
D: Public cloud

Show Answer
Correct Answer:
D. Public cloud
Explanation
The public cloud model is defined by its multi-tenant architecture, where a cloud service provider makes computing resources available to the general public over the internet. The underlying physical infrastructure is owned and operated by the provider and is shared among numerous customers, known as tenants. These tenants are logically isolated from one another, operate independently, and are unaware of the other organizations or individuals sharing the same hardware. This model leverages economies of scale to offer services on a pay-as-you-go basis.
Why Incorrect Options are Wrong

A. Community cloud: This model is shared by a specific group of organizations with common goals, so tenants are known within the community.

B. Private cloud: This infrastructure is dedicated to a single organization, so there is no sharing with external, unknown customers.

C. Shared cloud: This is a general descriptive term, not one of the four standard deployment models (Public, Private, Community, Hybrid) defined by NIST.

References

1. Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing (Special Publication 800-145). National Institute of Standards and Technology. Retrieved from https://doi.org/10.6028/NIST.SP.800-145.

Page 3, Section 2, "Deployment Models": "Public cloud: The cloud infrastructure is provisioned for open use by the general public... It exists on the premises of the cloud provider." This definition underpins the concept of shared infrastructure among unknown parties.

2. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2009). A View of Cloud Computing (Technical Report No. UCB/EECS-2009-28). EECS Department, University of California, Berkeley.

Page 2, Section 2, "Defining Cloud Computing": Describes a Public Cloud as a resource available to the general public on a pay-as-you-go basis, contrasting it with a Private Cloud which is internal to an organization. This highlights the "general public" aspect where tenants are not pre-associated.

3. Carnegie Mellon University, School of Computer Science. (n.d.). 15-319/15-619 Cloud Computing, Lecture 2: Cloud Models and Architectures.

In course materials covering cloud deployment models, the Public Cloud is consistently defined as a multi-tenant environment where resources are shared by a diverse and anonymous customer base, managed by a third-party provider.

Question 8

Which type of network attack involves an attacker sending specially crafted malicious data to an application or system, causing it to crash or become unresponsive?
Options
A: SQL Injection Attack
B: On Path Attack
C: Distributed Denial-of-Service Attack
D: Buffer Overflow Attack
Show Answer
Correct Answer:
Buffer Overflow Attack
Explanation
A buffer overflow attack is a specific type of software vulnerability exploitation where an attacker sends more data to a memory buffer than it is designed to handle. This excess data overwrites adjacent memory regions, which can corrupt data, crash the program, or create an opening for executing malicious code. The attack relies on sending "specially crafted malicious data" (input that is intentionally too large) to cause the target application to become unstable or unresponsive, which directly aligns with the question's description.
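The mechanics can be mimicked from Python with ctypes; the sketch below is illustrative only (it deliberately performs an out-of-bounds write, which is undefined behaviour and will typically crash the interpreter), and the buffer size and payload are arbitrary choices.

```python
import ctypes

BUFFER_SIZE = 8
buf = ctypes.create_string_buffer(BUFFER_SIZE)  # fixed-size 8-byte buffer
payload = b"A" * 64                             # oversized, attacker-supplied input

# A robust program would validate the length first, e.g.
#     if len(payload) > BUFFER_SIZE: reject the input
# Copying without that check writes 56 bytes past the end of `buf`,
# corrupting adjacent memory -- undefined behaviour that commonly
# ends in a segmentation fault (the crash described in the question).
ctypes.memmove(buf, payload, len(payload))
```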
Why Incorrect Options are Wrong

A. SQL Injection Attack: This attack targets the back-end database by inserting malicious SQL statements into an entry field, aiming for data theft or manipulation, not crashing the application with malformed data.

B. On Path Attack: This involves intercepting and potentially altering communications between two parties to eavesdrop or impersonate, not directly attacking an application to make it crash.

C. Distributed Denial-of-Service Attack: This attack uses a high volume of traffic from multiple sources to overwhelm a system's resources (like bandwidth or CPU), not a single piece of crafted data to exploit a software flaw.

References

1. Kuperman, B. A., et al. (2005). A Taxonomy of Buffer Overflows. University of Virginia, Department of Computer Science. Technical Report CS-2005-14. In Section 2, "Background," the report states, "When the buffer is overfilled, the excess data 'spills over' into adjacent memory, overwriting whatever data had been there... At a minimum, this memory corruption can cause the program to crash."

2. MITRE. (2023). CWE-120: Buffer Copy without Checking Size of Input ('Classic Buffer Overflow'). Common Weakness Enumeration. The "Consequences" section notes that a primary technical impact is "Availability: The application may crash or be in a state where it is not usable."

3. Erickson, J. (2008). Hacking: The Art of Exploitation, 2nd Edition. No Starch Press. Chapter 3, "Exploitation," Section "Stack-Based Buffer Overflows," pp. 86-87, describes how writing past a buffer's boundaries can overwrite critical program data on the stack, leading to a segmentation fault and causing the program to crash. (Note: While a commercial book, its author is a recognized academic and it is used as courseware in many universities).

4. Aleph One. (1996). Smashing The Stack For Fun And Profit. Phrack Magazine, Volume 7, Issue 49. This foundational paper on the topic explains in Section 4, "Stack-based buffer overruns," how overflowing a buffer corrupts the stack, which typically results in a "Segmentation violation" error, terminating the program. It is a seminal publication in the security community.

Question 9

What is the term for the random value added to a password to prevent rainbow table attacks?

Options
A: Salt
B: Extender
C: MD5
D: Hash

Show Answer
Correct Answer:
A. Salt
Explanation
A salt is a cryptographically random value concatenated with a password before hashing. Because each password's salt is unique and stored with the hash, pre-computed lookup tables (rainbow tables) cannot be reused: an attacker would have to build a new table for every possible salt, rendering the attack impractical.
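A minimal sketch of salting using only the Python standard library (the iteration count and salt length are illustrative): each password gets its own random salt, the salt is stored alongside the derived hash, and verification recomputes the hash with that stored salt, which is exactly why a precomputed rainbow table cannot be reused.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per password; stored, not secret
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```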
Why Incorrect Options are Wrong

B. Extender – Not a security term in password hashing; no role against rainbow tables.

C. MD5 – A hash algorithm, not the random value added; MD5 itself can be used with or without salting.

D. Hash – The fixed-length output produced after hashing; it is the result, not the random value added.

References

1. NIST Special Publication 800-63B, "Digital Identity Guidelines: Authentication and Lifecycle Management," §5.1.1.2, p. 18 (June 2017).

2. Philip Oechslin, "Making a Faster Cryptanalytic Time-Memory Trade-Off," Advances in Cryptology – CRYPTO 2003, LNCS 2775, pp. 617–630 (2003). DOI: 10.1007/978-3-540-45146-4_36

3. MIT OpenCourseWare, 6.857 "Network and Computer Security," Lecture 5 slides, "Password Hashing and Salting," slides 7–9 (Spring 2014).

4. National Research Council, "Cryptography's Role in Securing the Information Society," Chapter 5, §"Password File Protection," p. 110 (1996).

Question 10

A security analyst discovers a vulnerability in a client's system but decides to withhold the information, fearing negative publicity for the client. Which ISC2 Code of Ethics Canon has the analyst potentially violated?
Options
A: Advance and protect the profession
B: Act honorably, honestly, justly, responsibly, and legally
C: Protect society, the common good, necessary public trust and confidence, and the infrastructure
D: Provide diligent and competent service to principals
Show Answer
Correct Answer:
Provide diligent and competent service to principals
Explanation
The analyst has a direct professional obligation to their client, who is the "principal" in this context. The primary role of a security analyst is to identify and report security weaknesses to enable remediation. By intentionally withholding information about a discovered vulnerability, the analyst is failing to provide the diligent and competent service for which they were engaged. This action directly undermines the client's security and represents a fundamental breach of the analyst's professional duty to their principal.
Why Incorrect Options are Wrong

A. The primary harm is to the client's security, not a direct action against the reputation of the security profession itself.

B. While the action is dishonest, Canon III is more specific as it directly addresses the failure in professional service owed to a client.

C. The immediate duty violated is to the client (principal), not directly to society, although an exploited vulnerability could eventually harm the public.

References

1. (ISC)². (2023). ISC2 Code of Ethics. Retrieved from https://www.isc2.org/Ethics. The canon states, "Provide diligent and competent service to principals." Withholding critical security information is a direct failure to meet this standard.

2. Chapple, M., Seidl, D., & St. Germain, J. (2023). Official (ISC)² Certified in Cybersecurity (CC) Study Guide. Wiley. In Chapter 1, "Security Principles," the explanation for Canon III emphasizes providing high-quality work for employers and clients (principals). The text states, "This means you should always strive to provide high-quality work for your employers and clients" (p. 13). Concealing a vulnerability is the antithesis of providing high-quality, diligent service.

Question 11

Which of the following is a key component of a Business Continuity Plan (BCP)?

Options
A: Avoiding the use of offsite facilities or alternative work locations
B: Focusing solely on the prevention of incidents, rather than maintaining operations during a disruption
C: Ignoring the need for an incident response plan
D: Developing strategies to maintain essential operations during and after a major incident

Show Answer
Correct Answer:
D. Developing strategies to maintain essential operations during and after a major incident
Explanation
A Business Continuity Plan (BCP) is a strategic document that outlines the procedures an organization will follow to maintain its essential functions during and after a significant disruption. The core purpose of a BCP is to ensure operational resilience. This involves identifying critical business processes, the resources required to support them, and developing strategies to continue operations with minimal downtime and impact. The plan is proactive, focusing on how to sustain the business's mission-critical activities rather than just responding to the immediate incident or preventing it from occurring.
Why Incorrect Options are Wrong

A. BCPs frequently mandate the use of offsite facilities or alternative work locations as a key strategy for maintaining operations when a primary site is unavailable.

B. While prevention is part of a broader risk management strategy, a BCP's specific focus is on continuity and recovery during and after an incident has occurred.

C. An Incident Response Plan (IRP) is a critical, complementary plan that a BCP relies upon; a BCP does not ignore it but works in conjunction with it.

References

1. National Institute of Standards and Technology (NIST). (2010). Special Publication 800-34 Rev. 1: Contingency Planning Guide for Federal Information Systems. Section 2.2, "Contingency Planning," states, "The BCP focuses on sustaining an organization's mission/business processes during and after a disruption." (p. 7).

2. Wilson, M., & Hash, J. (2011). Defining and Using a Business Continuity Strategy (CMU/SEI-2011-TN-020). Carnegie Mellon University. Section 2, "What Is a Business Continuity Strategy?," defines it as "an approach by an organization that will ensure that its functions can be recovered and resumed in the event of a disruption." (p. 2).

3. Herbane, B. (2010). The evolution of business continuity management: A historical review of practices and drivers. Business History, 52(6), 978-1002. The article defines BCP as a process "to ensure that critical business functions can continue during and after a disaster." (p. 980). https://doi.org/10.1080/00076791.2010.511185

Question 12

What is the PRIMARY difference between a threat and a vulnerability?
Options
A: A threat is a weakness in a system, while a vulnerability is a potential source of harm
B: A threat is an attack method, while a vulnerability is a type of threat actor
C: A threat is a type of attacker, while a vulnerability is an attack method
D: A threat is a potential source of harm, while a vulnerability is a weakness in a system
Show Answer
Correct Answer:
A threat is a potential source of harm, while a vulnerability is a weakness in a system
Explanation
In cybersecurity, a threat is any circumstance or event with the potential to cause harm to an information system or organization. It represents the potential danger. A vulnerability is a weakness or flaw in a system's design, implementation, or controls that could be exploited by a threat. The relationship is that a threat exploits a vulnerability to cause an adverse impact. For example, a malicious actor (threat source) represents a threat, and they might exploit an unpatched software flaw (vulnerability) to gain unauthorized access.
Why Incorrect Options are Wrong

A. This option incorrectly reverses the standard definitions of a threat and a vulnerability.

B. This option is incorrect; an attack method is a technique used to exploit a vulnerability, and a threat actor is a source of a threat.

C. This option is incorrect; a threat is the potential for harm, not the attacker itself, and a vulnerability is a weakness, not an attack method.

References

1. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Revision 1).

Threat: Defined as "Any circumstance or event with the potential to adversely impact organizational operations and assets, individuals, other organizations, or the Nation through an information system..." (Section 2.2.1, Page 7).

Vulnerability: Defined as a "Weakness in an information system, system security procedures, internal controls, or implementation that could be exploited by a threat source" (Section 2.2.3, Page 9).

2. Zeldovich, N., & Kaashoek, F. (2014). 6.858 Computer Systems Security, Fall 2014. Massachusetts Institute of Technology: MIT OpenCourseWare.

The course materials distinguish these terms, defining a threat as a "potential violation of security" and a vulnerability as a "flaw in a system that allows a threat to be realized" (Lecture 1: Introduction, Slide 11).

Question 13

How does a Business Impact Analysis (BIA) contribute to the disaster recovery planning process?

Options
A: By avoiding the consideration of potential impacts on the organization
B: By identifying the critical systems and processes that must be prioritized
C: By disregarding the need for a coordinated response to a disaster
D: By focusing solely on preventing disasters from occurring

Show Answer
Correct Answer:
B. By identifying the critical systems and processes that must be prioritized
Explanation
A Business Impact Analysis (BIA) is a foundational component of business continuity and disaster recovery planning. Its primary function is to identify the organization's mission-critical functions and the systems, processes, and resources that support them. By quantifying the potential operational and financial impacts of a disruption over time, the BIA enables the organization to prioritize its recovery efforts. This prioritization is crucial for developing an effective disaster recovery plan (DRP), as it dictates the sequence of restoration and helps establish key metrics like Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for the most vital assets.
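As a worked illustration of how BIA output drives prioritization, the sketch below sorts hypothetical critical processes by their Recovery Time Objective; the process names and figures are invented for the example.

```python
# Hypothetical BIA results: critical processes and their maximum tolerable downtime (RTO).
bia_results = [
    {"process": "order processing", "rto_hours": 4},
    {"process": "payroll", "rto_hours": 24},
    {"process": "intranet wiki", "rto_hours": 72},
]

# Shorter RTO = more critical = restored first in the disaster recovery plan.
for entry in sorted(bia_results, key=lambda p: p["rto_hours"]):
    print(f"Restore {entry['process']} within {entry['rto_hours']} hours")
```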
Why Incorrect Options are Wrong

A. A BIA's central purpose is to consider and quantify potential impacts on the organization, not avoid them.

C. The BIA provides the essential data needed to create a structured and coordinated disaster response, not disregard it.

D. A BIA analyzes the impacts of a disaster after it occurs, whereas disaster prevention focuses on stopping it from happening.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-34, Rev. 1, Contingency Planning Guide for Federal Information Systems.

Section 3.2, Business Impact Analysis, Page 15: "The BIA is a key step in the contingency planning process... The BIA helps to identify and prioritize information systems and components critical to supporting the organization's mission/business processes. The BIA results are used to guide the selection of appropriate contingency strategies..." This directly supports that the BIA's role is to identify and prioritize critical systems.

2. International Organization for Standardization (ISO) 22301:2019, Security and resilience – Business continuity management systems – Requirements.

Clause 8.2.2, Business impact analysis and risk assessment: The standard requires that the organization shall "identify the impacts that a disruption would have on the organization" and use this information to "determine priorities for its business continuity strategy and solutions." This confirms the BIA's role in prioritization for recovery.

3. Herbane, B. (2010). The evolution of business continuity management: A historical review of practices and drivers. Business History, 52(6), 978-1002.

Page 986: "The BIA is a management level analysis that identifies the impacts of losing business resources... The BIA provides the data from which the appropriate business continuity strategies can be determined and implemented. It is the key to defining what is critical and what is not." This academic source reinforces that the BIA's core contribution is identifying and prioritizing critical functions. (DOI: https://doi.org/10.1080/00076791.2010.511185)

Question 14

Which of the following is a key component of a Disaster Recovery Plan (DRP)?
Options
A: Focusing solely on the prevention of disasters, rather than recovery efforts
B: Establishing clear roles and responsibilities for personnel during disaster recovery efforts
C: Ignoring the need for data backups and redundancy
D: Avoiding the use of offsite backup facilities or cloud-based services
Show Answer
Correct Answer:
Establishing clear roles and responsibilities for personnel during disaster recovery efforts
Explanation
A Disaster Recovery Plan (DRP) is a documented, structured approach that describes how an organization can quickly resume work after an unplanned incident. A fundamental component of any effective DRP is the clear assignment of roles and responsibilities. This ensures that during a crisis, personnel understand their specific duties, the chain of command, and communication protocols. This structure prevents confusion and chaos, enabling a coordinated and efficient response to restore critical IT functions. Without clearly defined roles, recovery efforts would be significantly delayed and disorganized, undermining the entire purpose of the plan.
Why Incorrect Options are Wrong

A. A DRP's primary focus is on the recovery from a disaster, not solely on prevention, which is a component of a broader business continuity strategy.

C. Data backups and redundancy are the absolute cornerstones of a DRP; ignoring them would render recovery impossible.

D. Using offsite or cloud-based facilities is a best practice and a critical strategy for a robust DRP to protect data from localized disasters.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Appendix C, "Contingency Plan Template," Section C-3.3, "Roles and Responsibilities," explicitly states the need to "Identify and document the roles and responsibilities of all personnel and teams who will be involved in the execution of this plan." (Page C-3).

2. National Institute of Standards and Technology (NIST) Special Publication 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations. The control CP-2, "Contingency Plan," requires the plan to identify "key personnel and their roles and responsibilities." This highlights that defining roles is a mandatory part of the standard for contingency planning. (Page 198).

3. Carnegie Mellon University, Disaster Recovery Plan Template. This university resource provides a template for creating a DRP. Section 2.0, "Disaster Recovery Team," is dedicated to defining the "roles and responsibilities of each team member," emphasizing its importance in the planning process. (Section 2.0).

Question 15

Which of the following types of information is considered PII?
Options
A: A network topology diagram
B: A corporate policy document
C: A public blog post
D: A user's date of birth
Show Answer
Correct Answer:
A user's date of birth
Explanation
Personally Identifiable Information (PII) is any data that can be used to distinguish or trace an individual's identity, either alone or when combined with other personal or identifying information. A user's date of birth is a classic example of PII because it is a specific attribute directly linked to a unique individual. This piece of information, especially when combined with a name, is frequently used for identification and authentication purposes and is protected under various privacy regulations. The other options represent corporate or public information, not data tied to a specific person's identity.
Why Incorrect Options are Wrong

A network topology diagram details the structure of a computer network; it is corporate intellectual property, not personal data.

A corporate policy document contains organizational rules and guidelines and does not contain information that identifies a specific individual.

A public blog post is information intentionally made available to the public and is not considered a form of PII itself.

References

1. National Institute of Standards and Technology (NIST). (2010). Special Publication 800-122: Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). Section 2.1, Definition of PII, Page 7. The document states, "...information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth...".

2. The European Parliament and the Council of the European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Article 4(1). This article defines 'personal data' as any information relating to an identifiable natural person, which includes demographic information like a date of birth as a factor specific to that person's identity.

3. University of California, Berkeley. (n.d.). What is PII and personal data? Berkeley Security. Retrieved from https://security.berkeley.edu/resources/what-pii-and-personal-data. The resource explicitly lists "Date of birth" under the category of "Personally Identifiable Information (PII) Examples."

Question 16

What term is used to describe phishing attacks that specifically target company administrators?
Options
A: Piranha attacks
B: Whaling attacks
C: Shark attacks
D: Barracuda attacks
Show Answer
Correct Answer:
Whaling attacks
Explanation
Whaling is a specific and highly targeted form of phishing attack aimed at senior executives, administrators, or other high-profile individuals within an organization. These individuals are considered "whales" or high-value targets due to their privileged access to sensitive data, financial systems, and corporate resources. The attacks are typically more sophisticated and personalized than standard phishing campaigns, often using social engineering techniques to craft a believable scenario that tricks the target into compromising security, authorizing fraudulent payments, or revealing confidential information.
Why Incorrect Options are Wrong

A. Piranha attacks: This is not a recognized or standard term in cybersecurity literature for a type of phishing attack.

C. Shark attacks: This is not a standard cybersecurity term used to describe a specific category of phishing.

D. Barracuda attacks: This is not a standard term for a type of attack; Barracuda is the name of a prominent cybersecurity company.

References

1. University of Washington, UW-IT Connect. (n.d.). Phishing examples. Retrieved from https://itconnect.uw.edu/security/personal-safety/phishing-examples/. In the section "Types of phishing," the document defines Whaling: "Whaling is a phishing attack that targets a high-profile employee, such as a CEO or CFO. The content of a whaling email is often written as a legal subpoena, customer complaint, or executive issue."

2. Al-Suwaidi, N., Al-Balushi, T., Al-Marridi, A., & Al-Shehhi, A. (2022). Phishing attacks and their countermeasures: a survey. International Journal of Information Technology, 14(6), 3069–3081. https://doi.org/10.1007/s41870-022-01021-z. In Section 3, "Types of Phishing Attacks," the paper states, "Whaling is a type of phishing attack that targets high-profile employees, such as CEOs and CFOs, to steal sensitive information from a company, as these employees have complete access to sensitive information." (p. 3071).

3. Carnegie Mellon University, Information Security Office. (n.d.). Phishing. Retrieved from https://www.cmu.edu/iso/aware/phishing/index.html. The page describes various phishing types, noting, "Whaling is a type of phishing that targets high-level executives. The goal is to trick an individual into revealing sensitive information, such as login credentials, or performing a secondary action, such as a wire transfer."

Question 17

Which of the following is an example of a measure to protect confidentiality?
Options
A: Access controls and encryption
B: Backup systems and fault tolerance
C: Digital signatures and checksums
D: Routers and VLANs (Virtual Local Area Network)
Show Answer
Correct Answer:
Access controls and encryption
Explanation
Confidentiality is the security principle that ensures information is not disclosed to unauthorized individuals, entities, or processes. Access controls, such as user IDs, passwords, and permissions, directly enforce this by restricting who can view or interact with data. Encryption renders data unreadable (ciphertext) to anyone who does not possess the correct decryption key, thereby protecting its confidentiality even if the data is intercepted or stolen. Together, these are fundamental and primary measures for maintaining the confidentiality of information.
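To make the pairing concrete, here is a small sketch combining a permission check with symmetric encryption. It assumes the third-party cryptography package (pip install cryptography); the resource name, users, and data are purely illustrative.

```python
from cryptography.fernet import Fernet

# Illustrative access-control list: which principals may read which resource.
ACL = {"payroll_report": {"alice", "hr_admin"}}

def can_read(user: str, resource: str) -> bool:
    return user in ACL.get(resource, set())

key = Fernet.generate_key()                  # secret key, e.g. held in a key manager
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"salary data")  # unreadable at rest without the key

if can_read("alice", "payroll_report"):
    print(cipher.decrypt(ciphertext))        # authorized user recovers the plaintext
else:
    print("Access denied")                   # everyone else sees only opaque ciphertext
```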
Why Incorrect Options are Wrong

B. Backup systems and fault tolerance are primary measures for ensuring Availability, which guarantees that systems and data are accessible when needed.

C. Digital signatures and checksums are primary measures for ensuring Integrity and non-repudiation, verifying that data has not been altered and confirming its origin.

D. Routers and VLANs are networking technologies. While they can be configured to help enforce security policies (e.g., segmentation), they are not, by themselves, direct confidentiality measures like encryption.

References

1. Stallings, W., & Brown, L. (2018). Computer Security: Principles and Practice (4th ed.). Pearson. In Chapter 1, Section 1.2, "Key Security Concepts," confidentiality is defined as "Preserving authorized restrictions on information access and disclosure...". The text explicitly lists encryption and access control as mechanisms to achieve this.

2. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (NIST Special Publication 800-53, Revision 5). In Appendix D, "Security Control Baselines," the controls selected for the "Confidentiality" baseline heavily feature the Access Control (AC) and System and Communications Protection (SC) families, the latter of which specifies cryptographic protections (encryption).

3. Saltzer, J. H., & Schroeder, M. D. (1975). The Protection of Information in Computer Systems. Communications of the ACM, 18(7), 387–408. This foundational paper on computer security outlines design principles for protecting information. Principle 4, "Least Privilege," and the extensive discussion on access control mechanisms (Section III.A, "Access Control Mechanisms") directly address the methods for preventing unauthorized information disclosure (confidentiality). https://doi.org/10.1145/361011.361062

Question 18

Which of the following options is NOT an access control layer?
Options
A: Technical
B: Physical
C: Administrative
D: Policy
Show Answer
Correct Answer:
Policy
Explanation
The standard framework for classifying security controls, including access controls, categorizes them into three primary types or layers: Administrative, Technical, and Physical. Administrative controls are the policies, procedures, and guidelines that govern security. Technical (or logical) controls are implemented in software and hardware, such as firewalls and access control lists. Physical controls are tangible measures like locks, fences, and security guards. A "Policy" is a crucial element within the Administrative control layer; it is not a separate, parallel layer itself. Therefore, it is the option that is NOT a distinct access control layer.
Why Incorrect Options are Wrong

A. Technical: This is a primary layer of access control, representing the logical controls implemented through technology to enforce access rights.

B. Physical: This is a fundamental layer of access control, consisting of tangible barriers and mechanisms to protect physical assets and infrastructure.

C. Administrative: This is a core layer of access control that establishes the framework of policies, procedures, and guidelines for managing security.

References

1. Cornell University. (2023). Policy 5.10, Information Security. Section 3.0, Definitions. This document defines the three control types: "Administrative Controls: Policies, standards, processes, procedures, and guidelines...," "Physical Controls," and "Technical Controls." This explicitly places policies within the administrative category.

2. University of California, Berkeley, Information Security Office. (2023). Security Controls Standard. This standard outlines the campus's approach to security, defining and providing examples for the three control categories: Administrative, Physical, and Technical.

3. National Institute of Standards and Technology (NIST). (1995). SP 800-12, An Introduction to Computer Security: The NIST Handbook. Chapter 3, "Security Controls," discusses the different classes of controls, which are broadly grouped into management (administrative), operational, and technical categories, forming the basis for the three-layer model.

4. Whitman, M. E., & Mattord, H. J. (2019). Principles of Information Security (6th ed.). Cengage Learning. Chapter 5, "Planning for Security," details the classification of security controls into policies (part of the administrative framework), technical controls, and physical controls.

Question 19

What is the PRIMARY identity and access management function you use when providing a user ID and password?

Options
A: Validation
B: Authorization
C: Login
D: Authentication

Show Answer
Correct Answer:
D. Authentication
Explanation
Authentication is the process of verifying a claimed identity. When a user provides a user ID, they are making a claim about who they are. The password serves as a secret piece of information (a knowledge factor) to prove that claim. The system then compares these credentials against its stored records to confirm the user's identity. This verification process is the fundamental definition of authentication within the Identity and Access Management (IAM) framework.
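A hedged sketch of where authentication sits in the IAM flow; the in-memory stores and permissions below are illustrative only, and a real system would keep salted hashes (see Question 9) rather than plaintext passwords.

```python
# Illustrative stores only -- real systems keep salted password hashes, not plaintext.
CREDENTIALS = {"jsmith": "s3cret-passphrase"}
PERMISSIONS = {"jsmith": {"read:reports"}}

def authenticate(user_id: str, password: str) -> bool:
    # Authentication: verify the claimed identity using something the user knows.
    return CREDENTIALS.get(user_id) == password

def authorize(user_id: str, action: str) -> bool:
    # Authorization: decide what the already-verified identity is allowed to do.
    return action in PERMISSIONS.get(user_id, set())

if authenticate("jsmith", "s3cret-passphrase"):  # the function the question describes
    print(authorize("jsmith", "read:reports"))   # True -- granted only after authentication
```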
Why Incorrect Options are Wrong

A. Validation is a generic term for checking data correctness; it is not the specific security function for verifying a user's identity.

B. Authorization is the process of granting permissions to resources, which occurs after a user has been successfully authenticated.

C. Login is a user-facing term for the overall process of gaining access, but the specific technical function being performed is authentication.

References

1. National Institute of Standards and Technology (NIST). (2017). Special Publication 800-63-3: Digital Identity Guidelines. Section 1.2, "Introduction to Digital Identity". The document states, "Authentication is the process of verifying a claimant's identity."

2. Saltzer, J. H., & Schroeder, M. D. (1975). The Protection of Information in Computer Systems. Communications of the ACM, 18(7), 377–408. Section I.A.2, "Authentication". This foundational paper defines authentication as the mechanism for "associating a unique system-identifier with that principal." The use of a password is a primary example. (DOI: https://doi.org/10.1145/361011.361062)

3. Massachusetts Institute of Technology (MIT) OpenCourseWare. (2014). 6.857 Computer and Network Security, Lecture 2: Control of Information. The lecture notes clearly distinguish authentication ("Who are you?") from authorization ("What are you allowed to do?").

Question 20

Which of the following is an example of a threat vector?
Options
A: A software bug that allows unauthorized access to a system
B: A criminal hacking group targeting a specific organization
C: A phishing email that tricks users into revealing their passwords
D: A natural disaster that could damage a data center
Show Answer
Correct Answer:
A phishing email that tricks users into revealing their passwords
Explanation
A threat (or attack) vector is the specific path or mechanism an adversary uses to reach and exploit a target. A phishing e-mail delivers malicious content directly to users and, if the user complies, provides the adversary a route into the environment; therefore, it is itself the vector. The other options describe a vulnerability, an actor, and a non-human threat source, none of which constitute the access path.
Why Incorrect Options are Wrong

A. A software bug is a vulnerability the adversary can exploit, not the path used to reach the system.

B. A criminal hacking group is the threat actor that uses a vector; it is not the vector itself.

D. A natural disaster is an environmental threat source causing damage, not a cyber-access path or mechanism.

References

1. NIST IR 7298 Rev. 3, "Glossary of Key Information Security Terms," definition of "attack vector," p. 10.

2. NIST SP 800-30 Rev. 1, "Guide for Conducting Risk Assessments," §2.3.2 and Figure 3 (illustrates "attack vector" as the delivery mechanism), pp. 19–20.

3. Hutchins, E., Cloppert, M., & Amin, R. (2011). "Intelligence-Driven Computer Network Defense…," Proc. 6th Int'l Conf. on Information Warfare, p. 6 (phishing e-mail cited as a delivery/attack vector). DOI: 10.13140/RG.2.1.1058.8089

4. Cisco Talos, "Threat Vectors Explained" white paper, Section 2, para. 1 (phishing identified as a common threat vector), 2020.

Question 21

In the context of risk management, what is the purpose of risk mitigation?
Options
A: To avoid the need for a risk management process
B: To implement controls and countermeasures that reduce the likelihood or impact of identified risks
C: To disregard potential risks and their impacts
D: To focus solely on reactive measures
Show Answer
Correct Answer:
To implement controls and countermeasures that reduce the likelihood or impact of identified risks
Explanation
Risk mitigation is a fundamental risk response strategy within the risk management process. Its purpose is to reduce the level of risk to an acceptable level by decreasing the probability of a threat occurring or minimizing the resulting adverse impact. This is achieved through the selection and implementation of appropriate security controls, safeguards, and countermeasures. Mitigation is a proactive approach to managing identified risks, rather than ignoring them or waiting for an incident to occur before acting.
Why Incorrect Options are Wrong

A. Risk mitigation is a core component of the risk management process; it does not eliminate the need for the overall process.

C. Disregarding risks is the opposite of risk management. This approach is often termed risk ignorance and is not a valid strategy.

D. Risk mitigation includes proactive (preventive) controls to reduce risk likelihood, not just reactive measures taken after an incident.

References

1. National Institute of Standards and Technology (NIST). (2018). Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (NIST Special Publication 800-37, Rev. 2). Section 2.5, Risk Response, Page 15. "Common risk responses include... risk mitigation (i.e., applying safeguards and countermeasures to reduce the risk to a level that is acceptable to the organization)."

2. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Rev. 1). Appendix G, Section G.3, Page G-2. "Mitigating risk by implementing new controls or strengthening existing controls to reduce the likelihood of a threat event occurring and/or the adverse impact of such an event."

3. Carbert, A. (2010). OCTAVE Allegro: Information Security Risk Assessment (CMU/SEI-2010-TR-012). Carnegie Mellon University, Software Engineering Institute. Step 7, Page 53. "Mitigate – Take actions to reduce the likelihood of the threat occurring or the impact to the critical asset."

Question 22

What is the primary goal of an Advanced Persistent Threat (APT) attack?
Options
A: To disrupt network services and cause downtime
B: To exploit vulnerabilities in web applications for financial gain
C: To gain unauthorized access to sensitive data and maintain a long-term presence in the target network
D: To spread quickly and infect as many systems as possible
Show Answer
Correct Answer:
To gain unauthorized access to sensitive data and maintain a long-term presence in the target network
Explanation
An Advanced Persistent Threat (APT) is a sophisticated, targeted cyberattack where an intruder establishes an undetected, long-term presence within a network. The primary objective is not immediate disruption but to continuously monitor and exfiltrate sensitive data over an extended period. The "persistent" aspect is key; attackers aim to maintain access for future operations, espionage, or strategic advantage. This contrasts with attacks focused on short-term goals like causing downtime or rapid, widespread infection. The targeted and stealthy nature of an APT is designed specifically to achieve this long-term access and data theft.
Why Incorrect Options are Wrong

A. This describes the primary goal of a Denial-of-Service (DoS) or Distributed Denial-of-Service (DDoS) attack, which focuses on disruption rather than stealth and data exfiltration.

B. While some attacks seek financial gain, the defining characteristic of an APT is its long-term, persistent nature for espionage or strategic purposes, which is more specific than general financial exploitation.

D. This describes the behavior of malware like worms or viruses, which are designed for rapid, indiscriminate propagation, the opposite of a targeted and stealthy APT attack.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-39, Managing Information Security Risk: Organization, Mission, and Information System View, March 2011. In Appendix C, Glossary, an APT is defined as: "An adversary that possesses sophisticated levels of expertise and significant resources which allow it to create opportunities to achieve its objectives... These objectives typically include establishing and extending footholds within the information technology infrastructure of the targeted organizations for purposes of exfiltrating information..." (Page C-1).

2. Chen, P., Desmet, L., & Huygens, C. (2014). A Study on Advanced Persistent Threats. In Communications and Multimedia Security: 15th IFIP TC 6/TC 11 International Conference, CMS 2014. The paper states, "The primary goal of an APT is to maintain a long-term presence in the target's network, and to exfiltrate valuable data in a stealthy way." (Section 2.1, Paragraph 1). DOI: https://doi.org/10.1007/978-3-662-44885-45

3. MITRE. ATT&CK for Enterprise, Groups. MITRE's documentation on APT groups (e.g., APT28, APT29) consistently describes their objectives in terms of long-term intelligence collection, espionage, and data theft from government, military, and other high-value targets, reinforcing the goal of persistent access for data exfiltration. (e.g., see descriptions for groups G0007 and G0016).

Question 23

Defense in depth is a strategy that:
Options
A: relies on a single layer of security measures for protection
B: focuses on incident response rather than prevention
C: employs multiple layers of security measures for comprehensive protection
D: emphasizes physical security over digital security
Show Answer
Correct Answer:
employs multiple layers of security measures for comprehensive protection
Explanation
Defense in depth is a foundational cybersecurity strategy that involves implementing multiple, redundant layers of security controls throughout an organization's technology infrastructure. The core principle is that if one security measure fails or is bypassed by an attacker, other layers are in place to detect, prevent, or slow down the attack, thereby protecting critical assets. This layered approach provides comprehensive protection by integrating administrative, technical, and physical security controls, ensuring there is no single point of failure.
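As a purely illustrative aid (not part of any standard), the short Python sketch below models layered controls: a request must clear a hypothetical firewall, authentication, and authorization check in turn, so defeating a single layer does not by itself grant access.

# Minimal sketch: a request must pass several independent control layers.
# All names and values here are hypothetical and purely illustrative.

def network_firewall(request):       # layer 1: network control
    return request.get("port") in {443}

def authentication(request):         # layer 2: identity control
    return request.get("token") == "valid-token"

def authorization(request):          # layer 3: access control
    return request.get("role") in {"admin", "analyst"}

layers = [network_firewall, authentication, authorization]

def handle(request):
    # An attacker who defeats one layer still faces the remaining ones.
    for control in layers:
        if not control(request):
            return "blocked"
    return "allowed"

print(handle({"port": 443, "token": "valid-token", "role": "analyst"}))  # allowed
print(handle({"port": 443, "token": "stolen?", "role": "analyst"}))      # blocked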
Why Incorrect Options are Wrong

A: This is the opposite of defense in depth. Relying on a single layer creates a single point of failure, which this strategy is designed to avoid.

B: Defense in depth is primarily a preventative strategy. While it aids incident response by slowing attackers, its main focus is on preventing breaches from occurring in the first place.

D: This strategy is holistic, applying to physical, technical (digital), and administrative controls. It does not prioritize one domain over another but rather integrates them for robust security.

References

1. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (NIST Special Publication 800-53, Revision 5). Page 411, Appendix F, "Security and Privacy Control Baselines". The document defines defense-in-depth as an "Information security strategy that integrates people, technology, and operations capabilities to establish variable barriers across multiple layers and dimensions of the organization."

2. National Institute of Standards and Technology (NIST). (2018). Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems (NIST Special Publication 800-160, Volume 1). Page 43, Section 3.2.3, "Security Design Principles". It states, "Defense in depth: Layering of security mechanisms is intended to provide redundancy in the event a security mechanism fails or is circumvented."

3. Ross, R., McEvilley, M., & Oren, J. (2018). Cybersecurity Framework Profile for High Value Assets (HVAs) (NIST Interagency Report 8170). Page 10, Section 3.1. The report describes the strategy: "Defense-in-depth is the application of multiple countermeasures in a layered or tiered approach to security."

4. Perrig, A., & Tygar, J. D. (2017). CS 161: Computer Security, Lecture 1: Introduction. University of California, Berkeley. Slide 43. The course material explains the concept: "Defense in Depth: The idea is to layer defenses, so that a failure in one defense is not a total failure."

Question 24

In the context of the risk management process, what does the term 'residual risk' refer to?
Options
A: The risk that remains after all possible controls and countermeasures have been applied
B: The total elimination of risk within an organization
C: The risks that are considered irrelevant or insignificant
D: The risk associated with an organization's assets before any controls are implemented
Show Answer
Correct Answer:
The risk that remains after all possible controls and countermeasures have been applied
Explanation
Residual risk is a fundamental concept in risk management, representing the risk that persists after an organization has implemented security controls and countermeasures to mitigate identified threats. The process begins with identifying inherent risk (the risk level without any controls). Controls are then applied to reduce this risk. The remaining, unmitigated risk is the residual risk. Organizations must then decide whether this level of risk is acceptable (risk acceptance) or if further action is required. The goal is not to eliminate all risk, which is typically impossible, but to reduce it to a level that aligns with the organization's risk appetite.
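As a worked illustration only, the Python sketch below uses a common simplification in which residual risk is the inherent risk (likelihood times impact) scaled down by an estimated control effectiveness; all figures are invented for the example.

# Illustrative simplification: residual risk as inherent risk reduced by
# estimated control effectiveness. Values are made up for the example.

likelihood = 0.6             # probability of the threat event (before controls)
impact = 100_000             # estimated loss in dollars if it occurs
inherent_risk = likelihood * impact          # 60,000

control_effectiveness = 0.75                 # controls reduce exposure by ~75%
residual_risk = inherent_risk * (1 - control_effectiveness)

print(f"Inherent risk: {inherent_risk:,.0f}")    # 60,000
print(f"Residual risk: {residual_risk:,.0f}")    # 15,000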
Why Incorrect Options are Wrong

B. Total risk elimination is an ideal state, often referred to as 'zero risk,' which is practically unattainable in most complex systems.

C. This describes a category of risks that an organization might choose to accept, but it is not the definition of residual risk itself.

D. This is the definition of 'inherent risk' or 'gross risk,' which is the level of risk present before any controls are implemented.

References

1. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Rev. 1).

Section 2.2.5, Page 11: "Residual risk is the risk remaining after risk responses have been implemented." This document further clarifies that inherent risk is the risk before controls are applied, directly contrasting with residual risk.

2. International Organization for Standardization (ISO). (2022). ISO/IEC 27005:2022 Information security, cybersecurity and privacy protection โ€” Guidance on managing information security risks.

Clause 3.10: Defines residual risk as the "risk remaining after risk treatment." Risk treatment is the process of selecting and implementing controls.

3. Bauer, L. (2019). Introduction to Information Security (Course 18-730). Carnegie Mellon University.

Lecture 2: Risk Management, Slide 15: Defines residual risk as the "risk that remains after we implement controls," and inherent risk as the "risk that exists before we implement any controls." This clearly distinguishes between the concepts in options A and D.

Question 25

Which of the following controls safeguards an organization during a blackout power outage?
Options
A: Redundant servers
B: RAID
C: Uninterruptible power supply (UPS)
D: Generator
Show Answer
Correct Answer:
Uninterruptible power supply (UPS)
Explanation
An Uninterruptible Power Supply (UPS) is a physical control designed to provide immediate, short-term battery power to critical equipment during a power outage (blackout). Its primary function is to prevent data loss or hardware damage from an abrupt loss of power. This allows sufficient time for systems to be shut down gracefully or for a secondary, long-term power source, such as a generator, to activate and take over the electrical load. The UPS is the essential first line of defense against the instantaneous effects of a blackout.
Why Incorrect Options are Wrong

A. Redundant servers: This provides availability against individual server hardware failure, not a facility-wide power outage. All servers would lose power simultaneously.

B. RAID: Redundant Array of Independent Disks (RAID) is a data storage virtualization technology that protects against hard disk failures, not power failures.

D. Generator: A generator provides long-term power but requires a startup period. It cannot supply the instantaneous power needed to prevent systems from shutting down during a blackout.

References

1. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (Special Publication 800-53, Rev. 5). U.S. Department of Commerce. In control PE-11 (Alternate Power Supply), the discussion notes that a short-term power supply, such as a UPS, "can be used to facilitate an orderly shutdown of the system... or transition to a long-term alternate power supply, such as a generator."

2. Patterson, D. A., & Hennessy, J. L. (2014). Computer Organization and Design: The Hardware/Software Interface (5th ed.). Morgan Kaufmann. In Section 6.7, "Dependability, Reliability, and Availability," the text discusses challenges to dependability, including power failures. It explains that data centers use UPS systems to bridge the gap between a power failure and the startup of a generator.

3. Kumar, D., & Singh, S. P. (2016). Reliability analysis of uninterruptible power supply (UPS) systems for data centers. 2016 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 1-5. This paper states, "In case of utility failure, the UPS system provides backup power from the batteries until the standby generator is started and synchronized." (p. 1). https://doi.org/10.1109/ICRITO.2016.7784954

Question 26

During which phase of the incident response process would it be most appropriate to implement long-term fixes to prevent similar incidents in the future?
Options
A: Analysis
B: Mitigation
C: Detection
D: Recovery
Show Answer
Correct Answer:
Recovery
Explanation
The Recovery phase is the most appropriate time to implement long-term fixes. According to the National Institute of Standards and Technology (NIST), this phase involves restoring systems to normal operation after an incident has been contained and eradicated. A critical part of this process is implementing improvements to prevent the recurrence of the incident. This includes applying patches, hardening systems by changing configurations, and implementing other security controls. These actions serve as the long-term fixes to prevent similar incidents from happening in the future on the recovered systems.
Why Incorrect Options are Wrong

A. Analysis: This phase is focused on confirming an incident, determining its scope, and understanding its impact, not on implementing corrective measures.

B. Mitigation: This phase, more commonly known as Containment and Eradication, focuses on immediate actions to stop the incident and remove the threat, not on long-term strategic prevention.

C. Detection: This is the initial phase where a potential incident is identified. No fixes are implemented at this stage.

References

1. Cichonski, P., Millar, T., Grance, T., & Scarfone, K. (2012). Computer Security Incident Handling Guide (NIST Special Publication 800-61 Rev. 2). National Institute of Standards and Technology. Section 3.3.3, "Recovery," p. 28. The document states, "During recovery, organizations should take the opportunity to implement improvements that can prevent the recurrence of the incident."

2. Carnegie Mellon University, Software Engineering Institute. (2017). Defining the Building Blocks of a CSIRT. Section "Reactive Services," p. 11. This document outlines the incident response process, where the recovery step includes actions to "restore the affected systems and services" and "implement measures to prevent the incident from recurring."

3. Kim, D., & Solomon, M. G. (2016). Fundamentals of Information Systems Security (3rd ed.). Jones & Bartlett Learning. Chapter 11, "Security Operations," p. 429. The text describes the recovery phase as the point to "rebuild systems and restore data... This is also the time to apply any new security controls or patches."

Question 27

Which attacks involve an attacker using a list of pre-computed hashes to find a matching hash value for a user's password?
Options
A: Rainbow Table Attack
B: Spoofing Attack
C: Brute Force Attack
D: Dictionary Attack
Show Answer
Correct Answer:
Rainbow Table Attack
Explanation
A rainbow table attack is a cryptanalytic method that uses pre-computed tables of hash values to crack password hashes. An attacker generates a large set of potential passwords and their corresponding hashes, which are then stored in a specialized, space-efficient lookup table called a rainbow table. When the attacker obtains a target hash, they can search this pre-computed table to find a match and thereby discover the original plaintext password. This technique offers a time-memory trade-off, making it significantly faster than brute-forcing passwords at the time of the attack.
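The Python sketch below illustrates the core idea of a pre-computed lookup attack; a real rainbow table adds hash chains and reduction functions to save space, which this simplified dictionary omits.

import hashlib

# Simplified sketch of a pre-computed lookup attack. A real rainbow table
# trades memory for time using hash chains and reduction functions;
# here a plain dictionary stands in for the pre-computed table.

candidates = ["password", "letmein", "qwerty", "dragon"]
precomputed = {hashlib.sha256(p.encode()).hexdigest(): p for p in candidates}

stolen_hash = hashlib.sha256(b"letmein").hexdigest()    # hash leaked from a breach
print(precomputed.get(stolen_hash, "no match"))          # -> letmein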
Why Incorrect Options are Wrong

B. Spoofing Attack: This is an impersonation attack where an attacker falsifies data to masquerade as another entity; it is not a password cracking technique.

C. Brute Force Attack: This attack method involves systematically trying every possible password combination in real-time, rather than using a pre-computed list of hashes.

D. Dictionary Attack: This attack uses a pre-defined list of words (a dictionary) as password guesses, hashing each one individually, not using a pre-computed hash table.

References

1. Oechslin, P. (2003). Making a Faster Cryptanalytic Time-Memory Trade-Off. In: Boneh, D. (eds) Advances in Cryptology - CRYPTO 2003. Lecture Notes in Computer Science, vol 2729. Springer, Berlin, Heidelberg. Section 3, "Rainbow Tables," describes the method of pre-computing chains of hashes to create the tables for password cracking. (DOI: https://doi.org/10.1007/978-3-540-45146-436)

2. NIST Special Publication 800-63B, Digital Identity Guidelines: Authentication and Lifecycle Management. (June 2017). Section 5.1.1.2, "Memorized Secret Verifiers," states that password salts are used to "mitigate the impact of a password hash disclosure by making it more difficult for an attacker to use pre-computed tables of hashes (e.g., 'rainbow tables') to crack the passwords."

3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Prentice Hall. Chapter 5.3, "Passwords," discusses various password cracking methods, explaining that rainbow tables are precomputed to reverse hashes.

4. MIT OpenCourseWare. (2014). 6.858 Computer Systems Security, Fall 2014. Lecture 10: Web Security. The lecture notes differentiate password cracking techniques, describing pre-computation attacks like rainbow tables as distinct from online guessing, dictionary attacks, and brute-force attacks. (Available at: https://ocw.mit.edu/courses/6-858-computer-systems-security-fall-2014/resources/mit6858f14lec10/)

Question 28

Which of the following is NOT a recommended practice for password protection according to the security awareness training examples?
Options
A: Reusing passwords for multiple systems
B: Using different passwords for different systems
C: Using a password management solution
D: Avoiding the sharing of passwords with co-workers
Show Answer
Correct Answer:
Reusing passwords for multiple systems
Explanation
Security awareness guidance (e.g., NIST SP 800-63B and SP 800-118) states that every account should have a unique secret; reusing the same password on multiple systems greatly enlarges the blast radius of any compromise and is therefore specifically discouraged. Recommended practices instead include using distinct passwords, employing vetted password-management software to generate and store them safely, and never disclosing credentials to others. Hence "Reusing passwords for multiple systems" is the only option that is not a recommended practice.
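As an illustration of what a password manager automates, the Python sketch below generates a distinct random password for each hypothetical system, so the breach of one does not expose the others.

import secrets
import string

# Sketch of what a password manager automates: a distinct random secret
# per system. System names are invented for the example.

alphabet = string.ascii_letters + string.digits + string.punctuation

def new_password(length=20):
    return "".join(secrets.choice(alphabet) for _ in range(length))

vault = {system: new_password() for system in ["email", "vpn", "payroll"]}
for system, pw in vault.items():
    print(system, pw)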
Why Incorrect Options are Wrong

B. Guidance explicitly advises unique passwords per account; this is recommended, not discouraged.

C. NIST and university courses advocate password managers to help users maintain strong, unique passwords.

D. Credential sharing violates least-privilege principles; training materials stress keeping passwords confidential.

References

1. NIST Special Publication 800-63B, "Digital Identity Guidelines: Authentication," §5.1.1.2, pp. 21-22.

2. NIST Special Publication 800-118, "Guide to Enterprise Password Management," §3.3, p. 3-7.

3. Microsoft Security Documentation, "Create and manage strong passwords," ID MS-SEC-PW-2023, para. 2-3.

4. Carnegie Mellon Univ. (SEI) 18-731 Course Notes, "User Authentication," Lecture 4, slides 15-17.

Question 29

To which OSI layer does a MAC address belong?
Options
A: The Session layer
B: The Data Link layer
C: The Application layer
D: The Physical layer
Show Answer
Correct Answer:
The Data Link layer
Explanation
A Media Access Control (MAC) address is defined in IEEE 802 LAN standards as the "Media Access Control (MAC) sublayer address." The MAC sublayer is one of the two sublayers of the OSI Data Link layer (Layer 2). Its responsibilities include physical addressing, frame delimiting, and access control over the shared medium. Because a MAC address is used for node-to-node delivery within the same local network segment and is carried in the Layer 2 frame header, it is unequivocally associated with the Data Link layer rather than the Session, Application, or Physical layers.
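To make the Layer 2 placement concrete, the Python sketch below unpacks the destination and source MAC addresses from the first 14 bytes of an Ethernet frame header; the frame bytes are fabricated for illustration.

import struct

# Sketch: the first 12 bytes of an Ethernet (Layer 2) frame header carry the
# destination and source MAC addresses; the next 2 bytes are the EtherType.
# The frame bytes below are invented for the example.

frame = bytes.fromhex("ffffffffffff" "005056c00001" "0800") + b"...payload..."

dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)

print("destination MAC:", fmt(dst))            # ff:ff:ff:ff:ff:ff (broadcast)
print("source MAC:     ", fmt(src))            # 00:50:56:c0:00:01
print("EtherType:       0x%04x" % ethertype)   # 0x0800 = IPv4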
Why Incorrect Options are Wrong

A. Session layer manages dialog control and synchronization; it does not define hardware addresses.

C. Application layer provides end-user services (HTTP, SMTP, etc.); it has no concept of hardware addressing.

D. Physical layer transmits raw bits and signalling; it contains no addressing fields.

References

1. IEEE Std 802-2022, Clause 3.1, "Media Access Control (MAC) sublayer" (Data Link Layer).

2. Tanenbaum, A. & Wetherall, D., Computer Networks, 5th ed., Pearson, 2011, §2.4.2 "The Medium Access Control Sub-layer" (pp. 145-147).

3. Cisco Networking Academy, CCNA v7: Introduction to Networks, Ch. 4 "Network Access," Table 4-1 "OSI Data Link Layer Functions."

4. Kurose, J. & Ross, K., Computer Networking: A Top-Down Approach, 8th ed., Pearson, 2021, §1.3 "Layers in the Internet," Focus on Link Layer Addressing (p. 84).

Question 30

How does encryption contribute to system hardening?
Options
A: By protecting data at rest and in transit from unauthorized access
B: By managing user permissions and access controls
C: By implementing strong password policies
D: By reducing the attack surface of the system
Show Answer
Correct Answer:
By protecting data at rest and in transit from unauthorized access
Explanation
System hardening is the process of securing a system by reducing its vulnerabilities. Encryption is a fundamental control used in this process to ensure data confidentiality. It works by converting data into a coded format (ciphertext), rendering it unreadable to unauthorized parties. This protection is applied to data when it is stored on media like hard drives (data at rest) and when it is being communicated across a network (data in transit). By protecting the data itself, encryption adds a critical layer of defense, ensuring that even if an attacker bypasses other controls, the information remains secure.
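As a minimal sketch of protecting data at rest (assuming the third-party cryptography package is installed), the example below encrypts a record before storage and decrypts it for authorized use; data in transit is typically protected analogously by TLS rather than custom code.

# Minimal sketch, assuming "pip install cryptography" has been run.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in practice held in a key management system, not beside the data
cipher = Fernet(key)

plaintext = b"customer PII: 4111-1111-1111-1111"
ciphertext = cipher.encrypt(plaintext)     # what is actually written to storage
print(ciphertext)                          # unreadable without the key
print(cipher.decrypt(ciphertext))          # original data for authorized use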
Why Incorrect Options are Wrong

B. By managing user permissions and access controls

Incorrect. This is the function of Identity and Access Management (IAM) and Access Control Lists (ACLs), which define who can access resources, not encryption.

C. By implementing strong password policies

Incorrect. Password policies (e.g., length, complexity) are administrative and technical controls configured in the operating system or applications, not a direct function of encryption.

D. By reducing the attack surface of the system

Incorrect. Reducing the attack surface involves actions like disabling unused services, closing unnecessary ports, and removing unneeded software, not encrypting data.

References

1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-123: Guide to General Server Security. Section 3.3, "Protect Data," states, "Organizations should also consider encrypting sensitive data, both when it is stored on the server (at rest) and when it is transmitted from the server to a client (in transit)."

2. National Institute of Standards and Technology (NIST). (2020). Special Publication 800-53 Revision 5: Security and Privacy Controls for Information Systems and Organizations. Control SC-28, "Protection of Information at Rest," specifies the requirement to protect the confidentiality of information on storage devices, with encryption being a primary implementation method.

3. Saltzer, J. H., & Schroeder, M. D. (1975). The Protection of Information in Computer Systems. Proceedings of the IEEE, 63(9), 1278-1308. This foundational paper discusses defense in depth, where encryption serves as a crucial mechanism for protecting information confidentiality, distinct from access control mechanisms. (DOI: https://doi.org/10.1109/PROC.1975.9939)

Question 31

Which of the following cloud models puts MOST responsibility on the cloud provider?
Options
A: PaaS
B: On-premises
C: SaaS
D: IaaS
Show Answer
Correct Answer:
SaaS
Explanation
In the Software as a Service (SaaS) model, the cloud service provider (CSP) manages the entire technology stack. This includes the physical infrastructure (servers, storage, networking), the platform (operating systems, middleware), and the application software itself. The consumer's responsibility is typically limited to managing their data and user access within the application. This model represents the highest level of abstraction and offloads the maximum operational and management responsibility to the cloud provider when compared to Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).
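As a simplified illustration of the shared responsibility split (exact boundaries vary by provider), the Python snippet below tabulates who manages each layer under IaaS, PaaS, and SaaS.

# Simplified view of the shared responsibility model; boundaries vary by provider.
responsibility = {
    #  layer            IaaS         PaaS         SaaS
    "application":    ("customer",  "customer",  "provider"),
    "runtime":        ("customer",  "provider",  "provider"),
    "os":             ("customer",  "provider",  "provider"),
    "virtualization": ("provider",  "provider",  "provider"),
    "hardware":       ("provider",  "provider",  "provider"),
    "data":           ("customer",  "customer",  "customer"),
}

for layer, (iaas, paas, saas) in responsibility.items():
    print(f"{layer:<15} IaaS={iaas:<9} PaaS={paas:<9} SaaS={saas}")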
Why Incorrect Options are Wrong

A. PaaS: The provider manages the platform and infrastructure, but the customer is still responsible for developing, managing, and securing their own applications and data.

B. On-premises: This is not a cloud model. The organization owns and manages the entire infrastructure and software stack, bearing all responsibility.

D. IaaS: The provider only manages the core physical infrastructure. The customer is responsible for the operating system, middleware, data, and applications.

References

1. Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing (NIST Special Publication 800-145). National Institute of Standards and Technology. In Section 2, the definition of SaaS states, "The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities..." This highlights the extensive responsibility of the provider. https://doi.org/10.6028/NIST.SP.800-145

2. University of California, Berkeley, Information Security Office. (n.d.). Cloud Computing Services and the Shared Responsibility Model. This resource provides a clear diagram and explanation showing that in the SaaS model, the vendor is responsible for the applications, data, runtime, middleware, O/S, virtualization, servers, storage, and networking, leaving the least responsibility for the customer. Retrieved from https://security.berkeley.edu/education-awareness/cloud-computing-services-and-shared-responsibility-model

3. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50-58. Section 2.1, "Classes of Utility Computing," describes SaaS as delivering a complete application, thereby abstracting away the most complexity and management responsibility from the end-user. https://doi.org/10.1145/1721654.1721672

Question 32

What access control problem arises if an audit finds that an IT manager retains access permissions to shared folders from roles he previously held within the company?
Options
A: Privilege creep
B: Unauthorized access
C: Excessive provisioning
D: Account review
Show Answer
Correct Answer:
Privilege creep
Explanation
Privilege creep, also known as access creep or privilege accumulation, is the specific term for the security problem where a user gradually accumulates access permissions beyond what is required for their current job. This typically occurs when an employee changes roles within an organization, and their old permissions are not revoked. The scenario describes this exact situation: an IT manager has retained access from previous roles, leading to an excessive and unnecessary collection of privileges that violates the principle of least privilege. This accumulation creates a significant security risk, as the account has a larger-than-necessary attack surface.
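The Python sketch below illustrates, with invented names, how a periodic access review can flag privilege creep by comparing the permissions a user actually holds against those required by their current role.

# Sketch of an access review that flags privilege creep: permissions granted
# to a user that their current role no longer requires. All names are illustrative.

role_permissions = {
    "it_manager": {"helpdesk_share", "asset_inventory"},
    "developer":  {"source_repo", "build_server"},
}

granted = {"helpdesk_share", "asset_inventory", "source_repo", "hr_share"}
current_role = "it_manager"

creep = granted - role_permissions[current_role]
print("Permissions to revoke:", sorted(creep))   # ['hr_share', 'source_repo']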
Why Incorrect Options are Wrong

B. Unauthorized access: This is a potential result of privilege creep if the manager uses the old permissions, but it is not the name of the underlying access control problem itself.

C. Excessive provisioning: This refers to granting a user more permissions than necessary when an account is initially created or a role is assigned, not the accumulation over time through role changes.

D. Account review: This is a security control or administrative process used to detect problems like privilege creep; it is the solution or countermeasure, not the problem.

References

1. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (NIST Special Publication 800-53, Revision 5). Section on AC-2 Account Management, specifically the discussion for control enhancement AC-2 (3) "Access Reviews," emphasizes the need for periodic reviews to "reduce the risk of unauthorized access that could be caused by users accumulating privileges over time (i.e., privilege creep)."

2. De Clercq, J., & Venter, H. S. (2019). Managing and monitoring access rights: are we on the right track? Journal of Information Security and Applications, 46, 113-125. In Section 2.1, the paper defines the problem: "Privilege creep is a phenomenon where users accumulate privileges over time as their job roles and responsibilities change... This happens when access rights from a previous role are not revoked when a user moves to a new role." (https://doi.org/10.1016/j.jisa.2019.02.006)

3. Massachusetts Institute of Technology (MIT). (2014). 6.858 Computer Systems Security, Fall 2014. MIT OpenCourseWare. Lecture 15: Access Control. The lecture discusses the Principle of Least Privilege, which is the fundamental concept violated by privilege creep. The challenge of dynamically managing permissions as roles change is a core aspect of maintaining this principle in practice.

Question 33

Which aspect ensures that authorized users have timely and reliable access to information and resources?
Options
A: Authentication
B: Confidentiality
C: Integrity
D: Availability
Show Answer
Correct Answer:
Availability
Explanation
Availability is a core principle of information security, often cited as part of the CIA (Confidentiality, Integrity, Availability) triad. It ensures that information systems, resources, and data are operational and accessible to authorized users when they are needed. The question's phrasing, "timely and reliable access to information and resources" for "authorized users," is the standard definition of availability. This principle is upheld through measures like redundancy, fault tolerance, and disaster recovery planning to counter threats such as denial-of-service attacks, hardware failures, and natural disasters.
Why Incorrect Options are Wrong

A. Authentication: This is the process of verifying a user's identity. While necessary for secure access, it does not guarantee the system itself is available.

B. Confidentiality: This principle focuses on preventing the unauthorized disclosure of information, which is contrary to ensuring access for authorized users.

C. Integrity: This ensures that data is accurate and has not been subject to unauthorized modification; it pertains to the trustworthiness of data, not its accessibility.

References

1. National Institute of Standards and Technology (NIST), FIPS Publication 199, "Standards for Security Categorization of Federal Information and Information Systems," February 2004.

Page 2, Section 2.2, "Security Objectives": Defines the three security objectives. For Availability, it states: "The loss of availability is the disruption of access to or use of information or an information system." The objective is explicitly defined as "Ensuring timely and reliable access to and use of information."

2. Purdue University, The Center for Education and Research in Information Assurance and Security (CERIAS), "Introduction to Information Security," Courseware.

In foundational materials on the CIA triad, Availability is defined as "the property of a system or a system resource being accessible and usable upon demand by an authorized entity." This directly aligns with the question.

3. Saltzer, J. H., & Schroeder, M. D. (1975). "The protection of information in computer systems." Proceedings of the IEEE, 63(9), 1278-1308.

Section I.A.3, Page 1279: This foundational academic paper on computer security defines availability as ensuring that "a system's services are available to its authorized users."

Question 34

Which of the following is a key component of the risk assessment process?
Options
A: Focusing solely on risks with minimal impact
B: Ignoring potential threats and vulnerabilities
C: Avoiding the use of risk assessment methodologies or frameworks
D: Identifying and evaluating potential risks based on their likelihood and impact
Show Answer
Correct Answer:
Identifying and evaluating potential risks based on their likelihood and impact
Explanation
The core of any risk assessment process is the systematic identification of potential risks and their subsequent evaluation. This evaluation is fundamentally based on two key dimensions: the likelihood (or probability) of a risk event occurring and the potential impact (or consequence) on the organization if it does. By analyzing these two factors, organizations can quantify or qualify the level of risk, which is essential for prioritizing which risks to address first. This process allows for informed decision-making in the subsequent risk treatment phase.
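As an illustration only, the Python sketch below applies the common qualitative scoring Risk = Likelihood x Impact on a 1-5 scale to an invented risk register and sorts the entries for prioritization.

# Qualitative risk scoring sketch: risk = likelihood x impact on a 1-5 scale.
# The register entries are invented for the example.

register = [
    {"risk": "ransomware on file server", "likelihood": 4, "impact": 5},
    {"risk": "laptop theft",              "likelihood": 3, "impact": 3},
    {"risk": "website defacement",        "likelihood": 2, "impact": 2},
]

for entry in register:
    entry["score"] = entry["likelihood"] * entry["impact"]

# Highest-scoring risks are treated first.
for entry in sorted(register, key=lambda e: e["score"], reverse=True):
    print(f'{entry["score"]:>2}  {entry["risk"]}')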
Why Incorrect Options are Wrong

A. Focusing solely on risks with minimal impact is counterproductive; risk management prioritizes addressing the most significant risks first.

B. Ignoring potential threats and vulnerabilities is the opposite of risk assessment, which is predicated on their identification and analysis.

C. Avoiding methodologies leads to inconsistent, incomplete, and non-repeatable assessments, which undermines the entire risk management process.

References

1. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-30 Revision 1, Guide for Conducting Risk Assessments. Section 2.2, "Risk Assessment Process," states, "The purpose of the risk assessment process is to identify, estimate, and prioritize risks... The risk assessment process consists of the following steps: ... (iii) determine the likelihood of occurrence... (iv) determine the impact of the threat event... and (v) determine risk..." This directly supports that risk is evaluated based on likelihood and impact.

2. ISO/IEC 27005:2022, Information security, cybersecurity and privacy protection โ€” Guidance on managing information security risks. Clause 8.3, "Information security risk analysis," specifies the process involves determining "potential consequences" (impact) and assessing the "likelihood of the occurrence of scenarios." The combination of these elements is used to determine the level of risk.

3. Purdue University, Introduction to Cybersecurity (CYBR 26000) Courseware. In modules covering Risk Management, risk is consistently defined as a function of threats, vulnerabilities, and impacts. The assessment phase is described as the process of identifying these elements and calculating risk, often expressed as Risk = Likelihood ร— Impact.

Question 35

Which type of token-based authentication generates codes at fixed intervals without a server challenge?
Options
A: RFID
B: Asynchronous
C: Smart card
D: Synchronous
Show Answer
Correct Answer:
Synchronous
Explanation
Synchronous token-based authentication generates a one-time password (OTP) based on a factor that is synchronized between the token and the authentication server. The most common implementation is the Time-based One-Time Password (TOTP) algorithm, where the shared factor is the current time. The token's internal clock is synchronized with the server's clock, and a new code is generated at predetermined, fixed intervals (e.g., every 30 or 60 seconds). The user enters the currently displayed code to authenticate, which does not require a challenge from the server.
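The minimal Python sketch below follows the RFC 6238 (TOTP) pattern: token and server share only a secret and synchronized clocks, and the code changes every 30-second interval with no server challenge; the secret value is invented for illustration.

import hashlib, hmac, struct, time

# Minimal TOTP-style sketch (RFC 6238 flavor). The shared secret is provisioned
# once at enrollment; afterwards no challenge is ever sent by the server.
secret = b"shared-secret-provisioned-at-enrollment"

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval                     # time-based moving factor
    msg = struct.pack(">Q", counter)                           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp(secret))   # identical on token and server within the same 30-second window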
Why Incorrect Options are Wrong

A. RFID: Radio-Frequency Identification (RFID) is a proximity-based technology used for identification and tracking, not for generating user-facing codes at fixed intervals for authentication.

B. Asynchronous: Asynchronous tokens operate on a challenge-response basis. The server issues a unique challenge (e.g., a random number), and the token generates a response based on that specific challenge.

C. Smart card: A smart card is a physical device form factor that contains a microprocessor. While it can be used for various authentication methods, it is not itself a method of code generation.

References

1. National Institute of Standards and Technology (NIST). (2017). Special Publication 800-63B: Digital Identity Guidelines, Authentication and Lifecycle Management. Section 5.1.4, "One-Time Passwords (OTPs)". This section describes TOTP as a synchronous OTP where the "moving factor is time," contrasting it with challenge-response systems.

2. M'Raihi, D., Bellare, M., Hoornaert, F., Naccache, D., & Ranen, O. (2011). RFC 6238: TOTP: Time-Based One-Time Password Algorithm. Internet Engineering Task Force (IETF). The abstract and introduction describe TOTP as an algorithm that "computes a one-time password from a shared secret key and the current time," which inherently operates on fixed time-step intervals.

3. Halevi, S., & Krawczyk, H. (2008). Public-Key Cryptography โ€“ PKC 2008. Lecture Notes in Computer Science, vol 4939. Springer, Berlin, Heidelberg. In the chapter "Strengthening Digital Signatures via Randomized Hashing," the principles of time-synchronous authentication are discussed as a method distinct from challenge-response mechanisms. (DOI: https://doi.org/10.1007/978-3-540-78440-127, pp. 455-472).
