Get ready for your CS0-003 exam with our free, accurate, and 2025-updated questions.
Cert Empire is committed to providing the best and latest exam questions for those preparing for the CompTIA CS0-003 exam. To assist students, we’ve made some of our CS0-003 exam prep resources free. You can get plenty of practice with our Free CS0-003 Practice Test.
Question 1
A. This describes application security testing (AST) methodologies like SAST or DAST. While ATT&CK can inform such tests, it is not a testing method itself.
B. This is a better description of an Information Sharing and Analysis Center (ISAC) or a threat intelligence platform (TIP), which focus on the sharing and dissemination of intelligence.
C. This describes a primary use case or outcome of applying the ATT&CK framework, rather than describing the fundamental nature of the framework itself, which is a knowledge base.
E. This accurately describes the Lockheed Martin Cyber Kill Chain®, which models an intrusion as a linear sequence of phases, unlike the ATT&CK matrix, which is non-sequential.
1. The MITRE Corporation. (2023). About ATT&CK. MITRE ATT&CK®. Retrieved from https://attack.mitre.org/resources/getting-started/. In the "What is ATT&CK?" section, it is defined as "a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations." This supports the "tracks and understands threats" aspect. The community-driven and evolving nature is also a central theme.
2. NIST. (2021). Special Publication 800-160, Volume 2, Revision 1: Developing Cyber-Resilient Systems: A Systems Security Engineering Approach. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-160v2r1. In Appendix F, Section F.3, ATT&CK is described as a "curated knowledge base and model for cyber adversary behavior" used to "characterize and describe adversary behaviors." This aligns with the concept of a tool to track and understand threats.
3. Applebaum, A. (2020). A Survey of the MITRE ATT&CK Framework. SANS Institute Reading Room. Retrieved from https://www.sans.org/white-papers/39390/. On page 4, the paper states, "The ATT&CK framework is a knowledge base of adversary behavior and a model for describing the actions an adversary may take... It is a living, community-driven knowledge base that is continuously updated..." This directly supports the description of an evolving, open project for understanding threats.
Question 2
A. Law enforcement: Law enforcement is an external agency to be notified if a crime has occurred, not an internal entity that approves the organization's public communication process.
B. Governance: Governance provides the high-level framework and policies, but the specific, operational task of crafting and approving public statements falls to legal and PR teams.
D. Manager: This option is too vague. The incident manager is a manager who coordinates with other specific functional leads, such as the heads of legal and public relations.
F. Human resources: Human resources primarily handles internal communications and personnel-related matters, not external communications with the general public regarding a security incident.
1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide.
Section 2.4.3, "Relationships with Other Groups," states, "The CSIRT should also have a close relationship with the organization's general counsel and public affairs offices. The general counsel can provide advice on legal issues... Public affairs can handle the media, which is particularly important during a high-profile incident." This directly supports the involvement of Legal (general counsel) and Public Relations (public affairs).
2. University of Washington. (2023). UW-IT Information Security and Privacy: Incident Response Plan.
Section "Incident Response Team," under the subsection for "External Communications," explicitly lists "University Marketing & Communications" (the public relations function) and the "Office of the Attorney General" (the legal function) as the primary entities responsible for coordinating and approving communications with the media and the public.
3. Solove, D. J., & Citron, D. K. (2017). Risk and Anxiety: A Theory of Data-Breach Harms. The George Washington University Law School Public Law and Legal Theory Paper No. 2017-10.
Section IV.B, "The Response to a Data Breach," discusses the institutional response, emphasizing that "companies often hire public relations firms to help them manage the crisis" and that legal counsel is central to navigating the complex web of state and federal notification laws. This academic source underscores the essential roles of both PR and legal teams. (Available via SSRN and university repositories).
Question 3
A. Irregular peer-to-peer communication: The evidence describes data access and log manipulation, not a specific network communication pattern like P2P file sharing.
B. Unauthorized privileges: The account is described as "privileged," meaning it already has high-level access. The issue is the abuse of existing privileges, not the acquisition of new, unauthorized ones.
C. Rogue devices on the network: The activity is tied to a user account, not an unauthorized piece of hardware. There is no information suggesting a new or unknown device is present.
1. National Institute of Standards and Technology (NIST). (2020). NIST Special Publication 800-53 Rev. 5: Security and Privacy Controls for Information Systems and Organizations.
Reference: Appendix F, Security Control Catalog, AU-11 (Audit Record Retention), discusses the importance of protecting audit logs from unauthorized modification. The scenario's "audit logs being modified" is a direct violation of this principle and a key indicator of an attempt to cover tracks, common in insider attacks.
2. Cappelli, D. M., Moore, A. P., & Trzeciak, R. F. (2012). The CERT Guide to Insider Threats: How to Prevent, Detect, and Respond to Information Technology Sabotage (Theft, Fraud). Addison-Wesley Professional.
Reference: Chapter 3, "A Closer Look at the Malicious Insider," details common indicators. It explicitly lists technical indicators such as "Abuse of privileges" and behavioral indicators like "Working odd hours without authorization," which directly correspond to the activities observed in the scenario.
3. Carnegie Mellon University, Software Engineering Institute. (2018). Common Sense Guide to Mitigating Insider Threats, Sixth Edition.
Reference: Page 15, Practice 4: "Monitor and respond to suspicious or disruptive behavior." This guide lists "unusual remote access" and "accessing sensitive information not associated with their job" as key indicators. The modification of logs is described as an attempt to "conceal their actions."
4. Zwicky, E. D., Cooper, S., & Chapman, D. B. (2000). Building Internet Firewalls, 2nd Edition. O'Reilly & Associates. (A foundational text often used in university curricula).
Reference: Chapter 26, "Responding to Security Incidents," describes patterns of intrusion. It notes that attackers, including insiders, often attempt to "cover their tracks" by altering logs and that unusual login times are a primary indicator of a compromised account or malicious insider activity.
Question 4
A. Multifactor authentication on the server OS protects logons to the host, not the web application code path exploited by the form.
B. Hashing passwords at rest limits post-compromise damage but does not stop an attacker from exploiting the form to read data before hashing occurs.
D. Network segmentation limits lateral movement; it does not address the direct flaw inside the application logic that allows credential extraction.
1. NIST Special Publication 800-53 Rev. 5, "System and Information Integrity – SI-10: Input Validation," pp. 413-414.
2. NIST Special Publication 800-115, "Technical Guide to Information Security Testing and Assessment," §4.3.3 (Injection Attacks) – recommends input validation to mitigate.
3. MIT OpenCourseWare, 6.858 "Computer Systems Security," Lecture 13: SQL Injection, slides 20-22 – emphasizes sanitization/validation of user input as the primary fix.
4. Viega, J., & McGraw, G. (2001). "Building Secure Software," Addison-Wesley, Ch. 5, pp. 127-130 – lists input validation as foundational for preventing credential-stealing injections.
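To make the mitigation concrete, here is a minimal sketch of input validation combined with a parameterized query, the pattern SI-10 and the sources above describe. The table, column, and function names are hypothetical, and sqlite3 stands in for whatever database layer the application actually uses:

import hashlib
import hmac
import re
import sqlite3

USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,64}$")  # allow-list validation

def authenticate(conn: sqlite3.Connection, username: str, password: str) -> bool:
    # Reject anything outside the expected character set and length
    # before it ever reaches the database layer.
    if not USERNAME_RE.fullmatch(username):
        return False
    # Parameterized query: user input is bound as data, never concatenated
    # into the SQL text, so it cannot alter the query structure.
    row = conn.execute(
        "SELECT password_sha256 FROM users WHERE username = ?",
        (username,),
    ).fetchone()
    if row is None:
        return False
    # SHA-256 keeps the sketch self-contained; a real system should use
    # a slow KDF such as bcrypt or Argon2.
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(row[0], candidate)

The two controls are complementary: the allow-list check shrinks the attack surface, and the bound parameter guarantees the input is treated only as data.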
Question 5
A. Perform OS hardening.
This is a system-level, not an application-level, mitigation. It strengthens the operating system but does not fix the underlying coding flaw in the application itself.
C. Update third-party dependencies.
This is only effective if the buffer overflow vulnerability exists within a third-party library the application uses, not in the application's own custom code.
D. Configure address space layout randomization.
Address Space Layout Randomization (ASLR) is an OS-level memory-protection feature that makes exploitation more difficult but does not prevent the buffer overflow from happening.
---
1. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (Special Publication 800-53, Revision 5).
Reference: Control SI-10, "Information Input Validation."
Quote/Paraphrase: The documentation for this control explicitly states that input validation is used to protect against many threats, including "buffer overflows." It emphasizes checking input for validity against defined requirements before it is processed by the application.
2. Kaashoek, M. F., & Zeldovich, N. (2014). 6.858 Computer Systems Security, Fall 2014 Lecture Notes. MIT OpenCourseWare.
Reference: Lecture 2: "Control-flow attacks and defenses."
Quote/Paraphrase: The lecture notes discuss defenses against buffer overflows, highlighting the importance of "checking buffer bounds" before writing data. This bounds checking is a core component of input validation and is presented as a direct countermeasure to prevent the overflow from occurring at the source code level.
3. Dowd, M., McDonald, J., & Schuh, J. (2006). The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities. Addison-Wesley Professional.
Reference: Chapter 5, "Memory Corruption."
Quote/Paraphrase: This foundational academic text on software security explains that the fundamental cause of buffer overflows is a lack of input validation and bounds checking. It details how validating the size of incoming data is a primary preventative measure that must be implemented by developers at the application level.
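As a concrete illustration of the bounds checking these sources describe, here is a minimal Python sketch (the field size and function name are hypothetical). In an unmanaged language, the absence of this length check is exactly what lets a write run past the end of the buffer:

import struct

MAX_NAME = 32  # fixed-size field in a binary record

def pack_record(name: bytes) -> bytes:
    # Validate the input length *before* it reaches the fixed-size buffer.
    # In C, skipping this check and copying blindly is the classic
    # buffer overflow; here it would silently truncate instead.
    if len(name) > MAX_NAME:
        raise ValueError(f"name exceeds {MAX_NAME} bytes")
    return struct.pack(f"{MAX_NAME}s", name)  # zero-padded to 32 bytes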
Question 6
A. This is too broad. While creating a playbook is useful, it does not address where the reporting SLAs originate, and it incorrectly bundles containment with the reporting issue.
C. This action only defines which incidents require reporting, but the question's scenario already implies reporting was needed. It fails to address the specific problems of "who" and "when."
D. This addresses the "who" (roles) but completely ignores the "timing requirements," which was an equally critical part of the identified problem. Assigning a role without defining the deadline is an incomplete solution.
1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide. Section 2.3.2, "Incident Response Policies," states that policy should define external reporting requirements to entities like government agencies and regulatory bodies. This necessitates researching those specific requirements to create a compliant policy.
2. ENISA (European Union Agency for Cybersecurity). (2022). Good practice guide on breach reporting. Section 4, "The notification process," details the legal timelines for reporting under regulations like the GDPR (e.g., "without undue delay and, where feasible, not later than 72 hours after having become aware of it"). This shows that reporting SLAs are derived directly from regulatory compliance research.
3. Romanosky, S. (2016). Examining the costs and causes of cyber incidents. Journal of Cybersecurity, 2(2), 121-135. https://doi.org/10.1093/cybsec/tyw001. This academic journal discusses how incident response is heavily influenced by regulatory environments, stating, "state and federal laws require firms to notify individuals and government agencies of a breach," which reinforces the need to research these laws to define response procedures.
Question 7
A. A diagram of all systems and interdependent applications is a technical artifact for recovery but lacks the business-driven prioritization that guides the BCP.
B. A repository for all the software used by the organization is an element of disaster recovery, not the strategic input for creating the BCP.
D. A configuration management database (CMDB) provides technical details but does not define the business criticality or recovery priority of systems.
1. National Institute of Standards and Technology (NIST). (2010). Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Section 2.2, Business Impact Analysis (BIA), p. 11. "The BIA is a key step in the contingency planning process... The BIA helps to identify and prioritize information systems and components critical to supporting the organization's mission/business processes."
2. International Organization for Standardization. (2019). ISO 22301:2019 Security and resilience – Business continuity management systems – Requirements. Clause 8.2.2, "Business impact analysis and risk assessment." The standard mandates that an organization shall "identify the processes that support its products and services and the impact that a disruption can have on them" and "determine the priorities for the resumption of products and services and processes."
3. Carnegie Mellon University, Software Engineering Institute. (2016). CERT Resilience Management Model, Version 1.2 (CMU/SEI-2016-TR-010). Service Continuity (SVC) Process Area, SG 2, "Prepare for Service Continuity," SP 2.1, p. 137. This specific practice involves identifying and prioritizing "essential functions and assets" to ensure their continuity.
Question 8
A. tiki: The finding for TikiWiki is a missing .htaccess file, which is a medium-risk configuration issue but less urgent than a potentially executable vulnerability.
B. phpList: The scan only identifies the presence of the application and its admin directory, which is a low-risk information disclosure finding.
D. sshome: The scan merely reports the installation of Sshome without noting any specific vulnerabilities, making it the lowest priority among the choices.
1. OWASP Web Security Testing Guide (WSTG) v4.2, Section 4.8.3 "Test for CGI Vulnerabilities (OTG-CONFIG-006)": This guide details the security risks associated with CGI. It states, "The cgi-bin directory is a special directory in the root of the web server that is used to house scripts that are to be executed by the web server... Misconfigured or legacy scripts could be abused by an attacker to gain control of the web server." The presence of shtml.exe directly aligns with this high-risk scenario.
2. NIST Special Publication 800-115, "Technical Guide to Information Security Testing and Assessment," Section 5.5.2, "Web Application Scanning": This document outlines the process of security testing, which includes analyzing scanner output. The methodology implicitly requires prioritizing findings based on potential impact. A vulnerability that could lead to code execution (like a flawed CGI executable) would be ranked higher than information disclosure or missing security headers.
3. Carnegie Mellon University, Software Engineering Institute (SEI), "Vulnerability Analysis," Courseware Module: University-level cybersecurity courseware emphasizes the principle of prioritizing vulnerabilities based on exploitability and impact. A server-side executable script in a cgi-bin directory presents a direct vector for server compromise, making it a critical finding that requires immediate attention over less severe configuration issues.
Question 9
B. This suggests human error, but the question asks for the most likely cause of the timestamp discrepancy itself, which implies a technical explanation.
C. A difference between UTC and a local time zone would result in an offset of one or more full hours (or half-hours), not an arbitrary value like 43 minutes.
D. A host being offline would mean it stops sending logs, but it does not explain why the timestamps in the logs it already sent are out of sync.
---
1. National Institute of Standards and Technology (NIST). (2006). Guide to Computer Security Log Management (Special Publication 800-92).
Section 4.3.1, Time Stamps, Page 4-4: "If the clocks on hosts are not synchronized, it is impossible to have a consistent time reference... The Network Time Protocol (NTP) is typically used to perform time synchronization. Without proper time synchronization, it is impossible to determine the order in which events occurred from their log entries." This directly supports that a lack of synchronization (via NTP) causes time reference issues, which is the root of the problem in the scenario.
2. Zeltser, L. (2012). SANS Institute InfoSec Reading Room: Critical Log Review Checklist for Security Incidents.
Section: Time Synchronization, Page 3: "Confirm that all systems involved in the incident had their time synchronized to a common time source. If time was not synchronized, determine the time offset for each system." This highlights time synchronization as a critical first step in incident analysis and triage, reinforcing that its absence is a common and significant problem.
3. CompTIA. (2022). CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives.
Section 2.3, Page 10: This objective requires the candidate to "analyze data as part of security monitoring activities," specifically mentioning "Log - Timestamps." The scenario directly tests the analyst's ability to interpret and troubleshoot issues with log timestamps, a core competency for the exam. The discrepancy points to a failure in the underlying mechanism (NTP) responsible for maintaining accurate timestamps.
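A small sketch of why an arbitrary 43-minute offset breaks analysis, and how an analyst can compensate once the drift is measured (the hostnames and events are hypothetical; the real fix is enabling NTP on the host):

from datetime import datetime, timedelta

drift = timedelta(minutes=43)  # measured offset of the unsynced host

events = [
    ("fw01",  datetime(2025, 3, 1, 10, 0, 12), "allow 10.0.0.5 -> web01:443"),
    ("web01", datetime(2025, 3, 1, 10, 43, 15), "GET /login 200"),  # clock runs fast
]

def normalize(host, ts):
    # Subtract the known drift for the bad host so events sort correctly.
    return ts - drift if host == "web01" else ts

for host, ts, msg in sorted(events, key=lambda e: normalize(e[0], e[1])):
    print(normalize(host, ts).isoformat(), host, msg)

Without the correction, the web server's response appears to occur 43 minutes after the firewall allowed the connection, destroying the event sequence an investigator relies on.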
Question 10
A. Implementing credentialed scanning: While credentialed scanning provides more accurate data for the vulnerability report, it does not solve the underlying problem of different teams having inconsistent views of the asset inventory itself.
B. Changing from a passive to an active scanning approach: This changes the data collection method but does not address the foundational need for an agreed-upon asset inventory, which is the source of the inter-team inconsistencies.
D. Performing agentless scanning: This is a deployment choice for how scans are conducted. It does not inherently solve the problem of inconsistent asset information between different organizational teams.
---
1. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (Special Publication 800-53, Revision 5).
Section: Control CM-8, Information System Component Inventory.
Content: This control mandates the development and maintenance of an inventory of system components. The discussion section states, "The inventory of information system components is essential for many other security controls, such as... flaw remediation (SI-2)... An accurate and up-to-date inventory is a prerequisite for an effective security program." This highlights that a central inventory is foundational for vulnerability management.
2. Fling, R., & Schmidt, D. C. (2009). An Integrated Framework for IT Asset and Security Configuration Management. Proceedings of the 42nd Hawaii International Conference on System Sciences.
Section: 3. An Integrated Framework for IT Asset and Security Configuration Management.
DOI: https://doi.org/10.1109/HICSS.2009.105
Content: The paper argues that effective security management is impossible without accurate asset management. It states, "Without an accurate and up-to-date inventory of IT assets, it is impossible to effectively manage their security configurations... Discrepancies between discovered and recorded information can then be identified and reconciled." This directly supports using a central asset repository to resolve inconsistencies.
3. Kim, D., & Solomon, M. G. (2021). CompTIA CySA+ Cybersecurity Analyst Certification All-in-One Exam Guide, Second Edition (Exam CS0-002). McGraw-Hill. (Note: While a commercial book, its principles are derived from and align with official CompTIA objectives and are widely used in academic settings as courseware. The principle is directly applicable to CS0-003).
Chapter 3: Vulnerability Management.
Content: The text emphasizes that the vulnerability management lifecycle begins with asset inventory. It explains that knowing what assets exist on the network is a prerequisite for scanning them and managing their vulnerabilities effectively. This establishes the central asset inventory as the starting point for reducing discrepancies.
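A minimal sketch of the reconciliation step a centrally managed inventory enables; the hostnames are hypothetical:

# Compare the agreed-upon inventory against what the scanner discovered;
# the two set differences are exactly the discrepancies the teams argue about.
central_inventory = {"web01", "web02", "db01", "app01"}
scan_results      = {"web01", "db01", "app01", "dev99"}  # what the scanner saw

unknown_to_inventory = scan_results - central_inventory   # untracked assets
missed_by_scanner    = central_inventory - scan_results   # unscanned assets

print("Add to inventory or investigate:", sorted(unknown_to_inventory))
print("Verify reachability/credentials:", sorted(missed_by_scanner))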
Question 11
A. Appropriate logging levels affect quantity/quality of data, not mis-aligned timestamps that break correlation.
C. Behavioral correlation settings depend on already-aligned events; wrong settings produce false negatives/positives, not timestamp mismatches.
D. Normalization converts diverse log formats into a common schema; it does not fix clock drift that prevents temporal matching.
1. Splunk Enterprise Admin Manual, "Configure NTP on all forwarders and indexers", v9.1, p. 47 ("Time synchronization is prerequisite for accurate correlation and alerting").
2. IBM QRadar SIEM Architecture and Deployment Guide, 7.4, Ch. 4 "System Time", pp. 93-94 ("Log sources must use synchronized NTP to ensure events can be correlated across systems").
3. RFC 5905: Mills et al., "Network Time Protocol Version 4", §1, p. 3 ("Accurate clock synchronization is essential for distributed monitoring and intrusion detection systems").
4. A. Katt, "Challenges in Event Correlation for Security Monitoring," Computers & Security, 2020, 99:102028, §4.1 (doi:10.1016/j.cose.2020.102028) – discusses time synchronization as the first requirement for SIEM correlation.
Question 12
A. The scanner is running without an agent installed.
The absence of an agent (agentless scanning) is not the direct cause; the intrusive method of the network-based scan is the cause. Agentless scans can be configured to be less intrusive.
C. The scanner is segmented improperly.
Improper network segmentation is an architectural flaw that might allow a scan to reach a critical server, but it does not explain why the scan itself caused the server to crash.
D. The scanner is configured with a scanning window.
A scanning window is a scheduling control used to minimize business impact. It dictates when a scan runs, not how it runs or the technical reason it might cause a system to fail.
1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-115, Technical Guide to Information Security Testing and Assessment.
Section 4.3, "Vulnerability Scanning," discusses the nature of these tools. It implicitly supports the answer by describing how scanners interact with target systems to find flaws. The guide notes that security testing, including scanning, carries a risk of "disruption of the services provided by the system," which directly aligns with an active scan crashing a server.
2. Scarfone, K., & Mell, P. (2008). NIST Special Publication 800-40 Revision 2, Guide to Enterprise Patch Management Technologies.
Section 3.2.1, "Active Scanners," states: "Active scanners can sometimes cause problems on hosts being scanned, such as crashing a host." This directly identifies active scanning as a potential cause for system crashes.
3. Du, W. (2019). Computer & Internet Security: A Hands-on Approach (2nd ed.). Syracuse University.
Chapter 20, "Vulnerability Assessment," describes how active vulnerability scanners work by sending specially crafted packets to probe for weaknesses. The text explains that these probes can sometimes cause the target services or even the entire operating system to crash due to bugs in the network stack or application code. This is a known risk of active scanning.
Question 13
B. Vulnerability score: This is a pre-incident metric that quantifies potential weaknesses; it does not help in prioritizing events that have already occurred during an active incident.
C. Mean time to detect: This is a key performance indicator (KPI) used to measure the overall effectiveness of a security program, not a criterion for prioritizing evidence within a specific investigation.
D. Isolation: This is a containment strategy or response action. It is a step taken after an investigation has provided enough evidence to justify it, not a factor used to prioritize the analysis of events.
1. NIST Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide. Section 3.2.3, "Incident Prioritization," states that prioritization is critical and is generally based on factors like functional impact (e.g., services are down) and informational impact (e.g., data was exfiltrated). This directly supports focusing on impact to guide the investigation.
2. Carnegie Mellon University, Software Engineering Institute, Defining the Process for Handling Computer Security Incidents. In the document's discussion of the Triage phase (CMU/SEI-99-TR-020, Section 3.1), the process involves assessing the incident's priority based on its "technical severity and the business impact," which aligns with focusing on the overall impact to move the investigation forward.
Question 14
A. Block the attacks using firewall rules.
Firewall rules are ineffective against large-scale DDoS attacks, as the source IPs are numerous and often spoofed, and the firewall itself can be overwhelmed.
B. Deploy an IPS in the perimeter network.
An on-premise Intrusion Prevention System (IPS) can be a bottleneck and its own state tables and processing capacity can be exhausted by a volumetric DDoS attack.
D. Implement a load balancer.
A load balancer distributes all incoming traffic, including the malicious DDoS traffic, which would still overwhelm the backend servers it is distributing to.
1. AWS. (2021). AWS Best Practices for DDoS Resiliency. AWS Whitepaper. On page 6, in the section "Reduce the attack surface," it states, "By using Amazon CloudFront (a CDN) and Amazon Route 53, you can leverage the AWS edge network to serve content and resolve DNS queries... This helps to protect your web applications from network and transport layer DDoS attacks."
2. Gkounis, D., & Anagnostopoulos, M. (2022). A Survey on Distributed Denial of Service (DDoS) Attacks and Defense Mechanisms in the Internet of Things (IoT) and Cloud Environment. Journal of Sensor and Actuator Networks, 11(4), 71. In Section 4.2, "Cloud-Based Defense," the paper discusses how cloud providers and CDNs offer DDoS mitigation services that leverage their vast network capacity to absorb and filter attack traffic before it reaches the customer's infrastructure. (https://doi.org/10.3390/jsan11040071)
3. Stallings, W. (2017). Cryptography and Network Security: Principles and Practice (7th ed.). Pearson. In Chapter 20, "Denial-of-Service Attacks," the text describes defenses against flooding attacks, noting that a common commercial solution involves services (like those provided by CDNs) that use a large, distributed network of "attack-mitigation devices" to filter traffic.
Question 15
B. Vulnerability assessment: This is a process for identifying and quantifying vulnerabilities. It provides input for a risk register but is not the tracking and management tool itself.
C. Penetration test: This is a simulated attack to discover and exploit vulnerabilities. Its findings are a source of data for risk management, not the tool for tracking it.
D. Compliance report: This document assesses and reports on adherence to specific regulations or standards, which is a subset of overall risk, not the comprehensive management tool.
1. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Revision 1).
Section 2.2.3, "Vulnerability Identification," and Section 2.2.4, "Threat Identification," describe the inputs. Section 2.3, "Risk Determination," discusses analyzing likelihood and impact. The output of this entire process is documented and tracked in a risk register to inform risk response (Section 2.4).
2. International Organization for Standardization. (2018). ISO/IEC 27005:2018 Information technology โ Security techniques โ Information security risk management.
Clause 8.3, "Risk Treatment," outlines the process of developing and implementing a risk treatment plan. The results of risk assessment and the decisions for treatment are recorded, which is the function of a risk register.
3. Carnegie Mellon University, Software Engineering Institute. (1996). Continuous Risk Management Guidebook (CMU/SEI-96-HB-001).
Chapter 4, "Risk Analysis," describes the process of evaluating risks based on their probability (likelihood) and impact. Chapter 5, "Risk Planning," details how this information is used to create mitigation plans, which are then tracked. This entire lifecycle is managed within a risk database, also known as a risk register.
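To ground the concept, here is a minimal sketch of a risk-register entry with the likelihood-and-impact scoring these sources describe; the fields, scales, and example risks are illustrative:

from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    treatment: str = "TBD"   # mitigate / transfer / avoid / accept

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring used to rank entries.
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "Unpatched internet-facing web server", 4, 5, "mitigate"),
    RiskEntry("R-002", "Single point of failure in backup path", 2, 4),
]
for entry in sorted(register, key=lambda r: r.score, reverse=True):
    print(entry.risk_id, entry.score, entry.treatment)

The register's value is exactly this: findings from assessments, pen tests, and compliance reviews all land in one tracked, prioritized list with an assigned treatment.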
Question 16
A. Service-level agreement: An SLA defines service uptime and performance metrics. While avoiding an interruption helps meet an SLA, the direct inhibitor described by the change freeze is the prevention of the interruption itself.
C. Degrading functionality: This inhibitor applies when the patch or fix is known to cause performance issues or break features. The scenario does not state that the fix would degrade the application's functionality.
D. Proprietary system: This inhibitor occurs when the organization cannot modify the system because it lacks the source code or vendor support. The scenario implies the organization has control but has chosen to wait.
1. CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives. (2022). CompTIA.
Section 2.4: Explain the process of prioritizing vulnerabilities. This section explicitly lists "Inhibitors to remediation," which include "Business process interruption." The scenario provided is a classic example of this principle, where operational stability takes temporary precedence over patching.
2. NIST Special Publication 800-40 Revision 3. (2013). Guide to Enterprise Patch Management Technologies. National Institute of Standards and Technology.
Section 2.3.2, Patch Management Challenges, page 11: The document states, "Another challenge is that patching often causes system and application downtime, which may be unacceptable to the organization." This directly supports the concept that avoiding business interruption is a significant factor (inhibitor) in the remediation process.
3. Souppaya, M., & Scarfone, K. (2013). NIST Special Publication 800-53 Revision 4: Security and Privacy Controls for Federal Information Systems and Organizations. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-53r4
Control SI-2, Flaw Remediation, page F-169: The discussion for this control emphasizes timely remediation but acknowledges operational constraints. Organizations must implement flaw remediation processes that "minimize the adverse impact on the organization's missions/business functions," which aligns with delaying a patch to avoid business process interruption.
Question 17
A. Running regular penetration tests is a detective and assessment process to identify vulnerabilities, not a control that actively compensates for a known, exploitable weakness in real-time.
B. Security awareness training is a preventive control aimed at human factors like social engineering, which does not directly address a technical authentication bypass vulnerability.
D. An Intrusion Detection System (IDS) is a detective control. It generates alerts on suspicious activity but does not prevent the access itself, failing to provide the preventive function of the failed primary control.
1. National Institute of Standards and Technology (NIST) Special Publication 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations. Appendix F, Glossary, page F-5, defines a compensating control as: "A control that is employed by an organization in lieu of a recommended control... and provides a similar level of protection for an information system." This supports option C, as an additional access control layer provides a similar level of protection to the failed primary one.
2. Purdue University, The Center for Education and Research in Information Assurance and Security (CERIAS), Introduction to Information Security - Lecture 5: Security Policies, Standards, and Controls. This courseware distinguishes between control types, explaining that compensating controls are alternatives used when a primary control is not feasible or is ineffective, which aligns with the scenario of a vulnerable primary control needing a backup.
3. CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives, Domain 1.0: Security Operations, Objective 1.4. This objective covers explaining the importance of vulnerability management, which includes implementing controls to mitigate identified vulnerabilities. The selection of an appropriate control type (compensating, in this case) is a key skill tested under this domain.
Question 18
B. Configure logging and monitoring to the SIEM.
This is a detective control that alerts on unauthorized access after it occurs, rather than a preventative measure that stops it from happening.
C. Deploy MFA to cloud storage locations.
MFA strengthens user authentication but is ineffective if the storage is misconfigured for public, unauthenticated access, which bypasses the authentication process entirely.
D. Roll out an IDS.
An Intrusion Detection System (IDS) is a detective control. It monitors and alerts on suspicious activity but does not actively block the traffic or fix the underlying network vulnerability.
1. National Institute of Standards and Technology (NIST) Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations.
Control SC-7, Boundary Protection: This control explicitly discusses the need to "monitor and control communications at the external boundary of the system and at key internal boundaries within the system." The discussion section further clarifies, "This control also addresses the segmentation of systems and system components." This directly supports using segmentation and access controls (like ACLs) to secure internal resources.
2. National Institute of Standards and Technology (NIST) Special Publication 800-125B, Secure Virtual Network Configuration for Virtual Machine (VM) Protection.
Section 4.2, Network Segmentation: "Network segmentation is the separation of a network into smaller, isolated networks... Segmentation can be used to isolate VMs from one another and from other resources, which can help to contain the impact of a security breach." This highlights segmentation as a primary security solution in cloud/virtual environments.
3. Purdue University, "Information Security Policy (VII.B.2)", Network Segmentation and Segregation.
Section 1, Standard: "Network segmentation and segregation will be used to control access to Sensitive and Restricted Data... Access Control Lists (ACLs) or other appropriate controls will be used to enforce the separation of network segments." This university policy document demonstrates the direct link between segmentation and ACLs as the standard solution for protecting sensitive data.
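As a small illustration of the segmentation-plus-ACL approach the references describe, here is a sketch using Python's ipaddress module; the subnets are hypothetical:

import ipaddress

# Only the application tier's subnet may reach the sensitive storage endpoint;
# everything else, including the public internet, is denied by default.
ALLOWED_SUBNETS = [ipaddress.ip_network("10.20.30.0/24")]

def may_access_storage(src_ip: str) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_SUBNETS)

print(may_access_storage("10.20.30.15"))  # True: application tier
print(may_access_storage("203.0.113.9"))  # False: public internet

This is the logic an ACL or security group enforces at the network boundary: a preventative control that stops unauthorized access regardless of whether the storage itself is misconfigured.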
Question 19
B. RFI: Remote File Inclusion can lead to full server compromise, but SQL injection is a more direct and immediate threat specifically to the PII database.
C. XSS: Cross-Site Scripting primarily affects the user's browser (client-side) and is generally less critical than server-side vulnerabilities that can compromise all data at once.
D. Code injection: Similar to RFI, code injection can lead to server compromise, but SQL injection presents the most direct path for an attacker to exfiltrate the PII data.
1. OWASP Foundation. (2021). OWASP Top 10:2021. A03:2021-Injection. This standard lists injection flaws, including SQL injection, as a top security risk. The description notes that injection can result in "data loss or corruption" and that an application is vulnerable when hostile data is used directly in SQL queries, highlighting the direct threat to data.
2. Zeldovich, N. (2014). 6.858 Computer Systems Security, Fall 2014. Lecture 10: Web Security. Massachusetts Institute of Technology: MIT OpenCourseWare. Slide 23, "SQL Injection," explicitly states that the impact of this vulnerability is the ability to "Read/modify any data in database," confirming it as a direct threat to data stores like those containing PII.
3. National Institute of Standards and Technology. (2020). Security and Privacy Controls for Information Systems and Organizations (NIST Special Publication 800-53, Revision 5). Control SI-2, "Flaw Remediation," requires organizations to prioritize the remediation of flaws based on risk. In this scenario, the risk of mass PII exfiltration via a direct database attack (SQLi) represents the highest impact, thus demanding the highest priority. (DOI: https://doi.org/10.6028/NIST.SP.800-53r5)
Question 20
A. Nmap: Nmap is a network scanner used for discovering hosts and services. It is an active tool for probing networks, not for passively analyzing incoming traffic to diagnose an ongoing attack.
C. SIEM: A Security Information and Event Management (SIEM) system aggregates and correlates log data from multiple sources. While it might receive alerts about a DoS attack, it does not directly capture or analyze raw packets on the affected server.
D. EDR: An Endpoint Detection and Response (EDR) tool monitors endpoint activities like processes and file system changes. It is not designed for deep packet inspection of network traffic to diagnose a network-level DoS attack.
1. Paxson, V. (1997). Detecting and analyzing network probes. University of California, Berkeley. In Section 3, "Real-time Intrusion Detection," the use of tools like tcpdump is discussed for monitoring network traffic for suspicious patterns, which is the core activity required to identify a SYN flood. The paper highlights the necessity of packet-level analysis for such tasks.
2. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. Chapter 3, Section 3.7, "TCP Connection Management," describes the three-way handshake. The text explains that a SYN flood attack exploits this process by sending SYN segments but not the final ACK, consuming server resources. Diagnosing this requires observing the state of these handshakes at the packet level, for which a packet sniffer like TCPDump is the standard tool.
3. Carnegie Mellon University, Software Engineering Institute. (2001). UNIX Intrusion Detection Checklist. Version 2.0. Section "Check for signs of a SYN flooding," suggests using tools like netstat to see the half-open connections and packet capture utilities (like tcpdump) to analyze the traffic causing the condition.
4. Scarfone, K., & Mell, P. (2012). Guide to Intrusion Detection and Prevention Systems (IDPS). (NIST Special Publication 800-94). National Institute of Standards and Technology. Section 2.3.2, "Denial of Service (DoS) Attacks," describes SYN floods. The document explains that network-based IDPS sensors operate by analyzing network packets (p. 12), the same principle used by tcpdump for manual analysis.
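As an illustration of the packet-level analysis these sources describe, here is a hedged sketch using scapy (a third-party packet library, assumed installed; capture requires root privileges and libpcap) to count bare SYNs per source, the same signal an analyst would read from raw tcpdump output:

from collections import Counter
from scapy.all import sniff, IP, TCP

# Equivalent tcpdump filter for manual inspection:
#   tcpdump 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'
syn_sources = Counter()

def tally(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(IP):
        flags = pkt[TCP].flags
        if flags & 0x02 and not flags & 0x10:  # SYN set, ACK clear
            syn_sources[pkt[IP].src] += 1

sniff(filter="tcp dst port 443", prn=tally, timeout=30)
# Many sources sending SYNs that are never followed by ACKs is the
# half-open-connection signature of a SYN flood.
print(syn_sources.most_common(10))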
Question 21
A. Static testing: This method, also known as Static Application Security Testing (SAST), requires access to the application's source code for analysis, which is explicitly unavailable in this scenario.
B. Vulnerability testing: While a valid black-box technique, this typically refers to automated scanning for known vulnerabilities. It is less comprehensive than a penetration test, which includes manual exploitation and analysis.
C. Dynamic testing: This is a correct category of testing for this scenario (DAST), but penetration testing is a more specific and comprehensive strategy that utilizes dynamic testing techniques as part of a broader, goal-oriented assessment.
1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-115, Technical Guide to Information Security Testing and Assessment.
Section 2.3, "Security Assessment Methodologies," differentiates between testing types. It describes penetration testing as a process that "mimics the actions of an attacker" to find and exploit vulnerabilities (p. 8).
Section 5.4.1, "Static Code Analysis," explicitly states that this technique involves "analyzing an applicationโs source code" (p. 58), making it unsuitable for this scenario.
Section 5.4.2, "Dynamic Code Analysis," describes testing a running application, which aligns with the scenario. However, penetration testing (Section 5.3) is presented as a more complete assessment engagement (p. 51).
2. OWASP Foundation. (2020). OWASP Web Security Testing Guide (WSTG), v4.2.
Section 4.1, "Introduction and Objectives," defines security testing methodologies. It contrasts automated vulnerability scanning with the depth of a manual penetration test, stating, "A penetration test is a goal-oriented process... It is not the same as a vulnerability assessment." This supports penetration testing as the more thorough evaluation strategy.
3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Pearson Education.
Chapter 7, "Program Security," Section 7.4, "Targeted Evaluation," discusses different program analysis methods. It distinguishes static analysis (requiring code) from dynamic analysis (observing execution). It frames penetration testing as a form of active, dynamic analysis aimed at simulating attacks to find exploitable flaws, which is the most direct way to "evaluate the security" of a running system.
Question 22
A. Turning on systems is unsafe before containment is verified; it could activate the malware, causing data destruction or further spread.
B. This is an eradication step. It is premature to remove software before the incident is fully contained, identified, and analyzed.
C. This jumps to the recovery phase. Reimaging without first attempting to recover the required critical data would fail to meet the scenario's objectives.
D. Logging on to a compromised system, especially with administrative credentials, is extremely risky and could lead to credential theft or malware execution.
1. NIST Special Publication 800-61 Rev. 2, "Computer Security Incident Handling Guide": Section 3.3.2, "Containment," states, "Containment is the first step in the cycle after detection and analysis... Containment strategies can vary based on the type of incident. For example, the strategy for containing a network-based worm is to disconnect the affected hosts from the network." This supports isolating the department's segment as the primary action.
2. Carnegie Mellon University, Software Engineering Institute (SEI), "CSIRT Services": In the document outlining incident handling services, containment is described as a critical early step. It notes, "Containment includes actions to prevent the incident from spreading and causing further damage... This may involve isolating a network segment or disconnecting a system from the network." (CMU/SEI-2017-TR-010, Section 3.2.2).
3. SANS Institute, "The Six Steps of Incident Response": This widely accepted framework, based on the NIST model, places Containment immediately after Identification. The guide emphasizes that before any eradication or recovery, the incident must be contained to limit the damage. Isolating affected systems or network segments is a primary containment technique. (SANS Security Policy Templates, Incident Response, 2021).
Question 23
A. Risk assessment: This is a proactive process performed during the preparation phase to identify and evaluate potential threats and vulnerabilities, not an immediate action after an incident investigation.
C. Incident response plan: This plan is created during the preparation phase. While it is updated based on lessons learned after an incident, the analytical action performed is the root cause analysis that informs those updates.
D. Tabletop exercise: This is a preparatory activity used to train staff and test the incident response plan's effectiveness before a real incident occurs, not a reactive measure taken after one has been investigated.
---
1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2: Computer Security Incident Handling Guide.
Section 3.4, "Post-Incident Activity," states, "Holding a 'lessons learned' meeting with all involved parties after a major incident is a helpful way to improve security measures and the incident handling process itself... The meeting provides a chance to... discuss what was done right, what was done wrong, and how to improve in the future." This process of determining how to improve is fundamentally based on analyzing the root cause of the incident.
2. CompTIA. (2022). CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives.
Domain 4.0, Objective 4.3, "Summarize post-incident activities," lists "Lessons learned report" and "Update incident response plan/playbooks." A root cause analysis is the prerequisite analytical step required to generate a meaningful lessons-learned report and determine what updates are necessary.
3. Carnegie Mellon University, Software Engineering Institute. (2016). Defining the 'Follow-Up' Phase of the Incident Management Process.
The document describes the "Follow-Up" or post-incident phase, stating its purpose is "analyzing the incident and its handling, with a view to improving the organization's incident management capability and preventing the incident from recurring." This directly aligns with the objective of a root cause analysis.
Question 24
An analyst has received an IPS event notification from the SIEM stating an IP address, which is known to be malicious, has attempted to exploit a zero-day vulnerability on several web servers. The exploit contained the following snippet:
/wp-json/trx_addons/V2/get/sc_layout?sc=wp_insert_user&role=administrator
Which of the following controls would work best to mitigate the attack represented by this snippet?
B. Limit layout creation to administrators only.
This is incorrect because the attack's payload is wp_insert_user, not layout creation. The sc_layout component is merely the vulnerable API endpoint used to pass the malicious command.
C. Set the directory trx_addons to read only for all users.
This is ineffective. Setting a directory to read-only prevents modification of the files within it but does not prevent the web server from reading and executing the vulnerable scripts.
D. Set the directory v2 to read only for all users.
This is also ineffective for the same reason as option C. Filesystem read-only permissions do not prevent the execution of existing server-side code that contains the vulnerability.
---
1. OWASP Foundation. (2021). OWASP Top 10:2021 A01:2021 โ Broken Access Control. OWASP. Retrieved from https://owasp.org/Top10/A012021-BrokenAccessControl/.
Reference Details: This vulnerability is a direct example of Broken Access Control. The mitigation guidance states, "Access control is only effective if enforced in trusted server-side code or server-less API, where the attacker cannot modify the access control check or metadata." This aligns with enforcing rules about who can create users at the application level.
2. WordPress.org. (n.d.). Hardening WordPress. WordPress Codex. Retrieved from https://wordpress.org/support/article/hardening-wordpress/#security-through-obscurity.
Reference Details: In the "Roles and Capabilities" section (implicitly covered under the principle of least privilege), WordPress documentation outlines that only users with the create_users capability (by default, only Administrators) should be able to create new users. The exploit bypasses this, and the mitigation is to ensure this control is properly enforced.
3. Zheng, X., & Zhang, Y. (2022). A Comprehensive Survey on WordPress Security. ACM Computing Surveys, 55(8), 1-37. https://doi.org/10.1145/3543823.
Reference Details: Section 3.1, "Privilege Escalation," discusses vulnerabilities where attackers gain higher privileges. It notes, "The most common way is to create a new administrator account... The fundamental solution is to strictly check the userโs privilege before performing any sensitive operations." This academic source confirms that enforcing privilege checks for user creation is the correct mitigation strategy.
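To make the mitigation concrete, here is a minimal, hypothetical virtual-patch style filter in Python that blocks the exploit URL pattern from the snippet above; it is a stopgap while the vulnerable endpoint awaits a proper fix, not a replacement for server-side privilege checks:

from urllib.parse import urlparse, parse_qs

def is_blocked(url: str) -> bool:
    parsed = urlparse(url)
    # Only inspect requests to the vulnerable plugin's REST namespace.
    if "/wp-json/trx_addons/" not in parsed.path:
        return False
    params = parse_qs(parsed.query)
    # The payload rides in the sc= parameter; calling wp_insert_user with
    # an administrator role is the privilege-escalation attempt.
    return params.get("sc") == ["wp_insert_user"]

print(is_blocked(
    "/wp-json/trx_addons/V2/get/sc_layout?sc=wp_insert_user&role=administrator"
))  # True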
Question 25
B. vote.4p – Requires elevated privileges; after admin removal, exploitability drops sharply (PR weight ≤ 0.50).
C. sweet.bike – Attack Vector is Local/Physical; the AV weight is at most 0.55, lowering exploitability.
D. great.skills – Requires user interaction (UI:Required, 0.62) and/or higher privileges, reducing exploitability compared with "nessie.explosion."
1. FIRST. "Common Vulnerability Scoring System v3.1 Specification," Sect. 2.1-2.3, Tables 2 & 6 (weights for AV, AC, PR, UI). https://www.first.org/cvss/v3.1/specification-document (pp. 7-10).
2. Cichonski et al., NIST SP 800-115 Rev. 1 Draft, "Technical Guide to Information Security Testing," §3.3 (privilege removal impact on exploitability).
3. MIT OpenCourseWare, 6.858 โComputer Systems Security,โ Lecture 6 notes, Principle of Least Privilege and its effect on vulnerability severity.
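The weights cited above plug into the CVSS v3.1 exploitability sub-score, Exploitability = 8.22 × AV × AC × PR × UI (FIRST specification, base-metric equations). A worked sketch, using the published metric weights:

# CVSS v3.1 metric weights (scope unchanged for PR).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}

def exploitability(av, ac, pr, ui):
    return 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]

# Network, low complexity, no privileges, no interaction (the most
# exploitable profile) vs. a local, high-privilege, UI-required finding:
print(round(exploitability("N", "L", "N", "N"), 2))  # 3.89
print(round(exploitability("L", "L", "H", "R"), 2))  # 0.58

The roughly sevenfold gap between the two profiles is why the vulnerability with network access, no required privileges, and no user interaction is prioritized first.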
Question 26
B. Create a compensating control item until the system can be fully patched.
This is a temporary risk mitigation strategy, not a remediation plan. It does not address the core requirement of patching the vulnerabilities as stipulated by the SLA.
C. Accept the risk and decommission current assets as end of life.
This is an extreme risk treatment option. Decommissioning a large number of assets is a major business decision and is not a practical or standard response to patching requirements.
D. Request an exception and manually patch each system.
Requesting an exception directly contradicts the goal of adhering to the SLA. While patching is manual, this option lacks a scalable management and tracking mechanism for a large number of findings.
---
1. National Institute of Standards and Technology (NIST) Special Publication 800-40r4 (Draft), Guide to Enterprise Patch Management Planning: Preventive Maintenance for Technology.
Reference: Section 3.3, "Patch Management Process," discusses the remediation phase. It emphasizes the need for a structured, repeatable process for applying and validating patches. The document states, "Organizations should have a documented process for patch installation... This process should include steps for scheduling, installation, verification, and documentation." A ticketing system is a primary tool for implementing and documenting such a process.
2. CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives.
Reference: Domain 2.0, "Vulnerability Management," Objective 2.3, "Explain the process of vulnerability management." This objective covers the lifecycle steps of Discovery, Prioritization, Remediation, and Reporting. A ticketing system is a key operational tool that facilitates the remediation and reporting phases by providing a formal mechanism to track work from initiation to closure, ensuring compliance with policies and SLAs.
3. Carnegie Mellon University, Software Engineering Institute, Vulnerability Management.
Reference: In discussions of operational vulnerability management, the process of remediation is detailed. The workflow often involves creating a "trouble ticket" or work order for each vulnerability that needs to be addressed. This ticket is then used to track all actions taken to remediate the vulnerability, ensuring a complete audit trail and verification of compliance with remediation timelines (SLAs). This is described in various CERT/CC publications and operational guides. For example, the concept is foundational in the "Operationalizing Threat Intelligence" courseware.
Question 27
B. SOAR: SOAR (Security Orchestration, Automation, and Response) platforms primarily focus on automating the response to alerts (often received from a SIEM), not the initial correlation and notification.
C. IPS: An Intrusion Prevention System (IPS) is a network security tool that inspects traffic and blocks threats in real-time; it does not correlate data from diverse, non-network sources.
D. CERT: A CERT (Computer Emergency Response Team) is a group of people responsible for responding to security incidents; it is a human team, not a technology.
1. National Institute of Standards and Technology (NIST). (2006). Special Publication 800-92, Guide to Computer Security Log Management.
Section 2.3, "Log Management Infrastructures," describes the functions of centralized logging systems, which are the foundation of a SIEM. It states, "Centralized log management provides a way to automate the log management process and to more easily correlate events that are recorded in different logs," directly supporting the correlation function mentioned in the question.
2. Tounsi, W., & Rais, H. (2018). Security orchestration, automation and response (SOAR) from a technical perspective. 2018 9th International Conference on Information and Communication Systems (ICICS), 1-6.
Section III.A, "SIEM," defines a SIEM as a system that "collects and analyzes security alerts, logs and other real-time and historical data from security devices, network infrastructure, systems and applications." This paper also clarifies that SOAR platforms are a "downstream security solution that is complementary to SIEM systems," confirming that the described functions precede SOAR's role. (DOI: https://doi.org/10.1109/ACS.2018.8586228)
3. Purdue University. (n.d.). Information Security Policy (VII.B.8).
Section "Procedures," Subsection "Security Information and Event Management (SIEM)," outlines the university's use of SIEM. It states the purpose is to "collect and aggregate log data... for the purposes of analysis and reporting on security-related events," which aligns with the system's functions in the question. This demonstrates the real-world application and definition of SIEM in an institutional policy context.
Question 28
A. Dictionary attack: This is a password-guessing attack. The logs show successful MFA, indicating the password was already compromised, not being actively guessed.
D. Subscriber identity module swapping: While a method to bypass SMS-based MFA, it does not inherently explain the multiple, rapid, geographically dispersed successful logins shown in the logs.
E. Rogue access point: This is a localized network attack and cannot explain simultaneous successful logins from three different continents.
F. Password spray: This attack involves trying one password against many accounts, whereas the logs show multiple successful logins for a single account.
1. Microsoft Corporation. (2023). What is risk? - Microsoft Entra. Microsoft Learn. In the "Risk detections in Microsoft Entra ID Protection" section, "Impossible travel" is defined as a risk detection type that flags sign-ins from geographically distant locations occurring in a time period shorter than the time it would have taken the user to travel between them. This directly corresponds to the log evidence. (Reference: learn.microsoft.com/en-us/entra/id-protection/concept-identity-protection-risks, Section: "Risk detections in Microsoft Entra ID Protection").
2. National Institute of Standards and Technology (NIST). (2017). NIST Special Publication 800-63B: Digital Identity Guidelines, Authentication and Lifecycle Management. In Section 5.1.1, "Authentication Process," the document states that verifiers should analyze the context of authentication requests for risk signals, which includes anomalous locations or velocities. (DOI: https://doi.org/10.6028/NIST.SP.800-63b, Page 17, Section 5.1.1).
3. Carnegie Mellon University. (2022). Duo MFA Push Harassment. Information Security Office. This university documentation describes MFA Push Harassment (also known as MFA Fatigue or Push Phishing) as a technique where attackers "send a flood of push notifications to a user's mobile device, hoping the user will accept a prompt to allow the attacker to gain access." This matches the likely method used to achieve the multiple successful MFA logins. (Reference: cmu.edu/iso/news/2022/duo-mfa-push-harassment.html).
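A minimal sketch of the impossible-travel check these sources describe: compute the great-circle distance between consecutive login locations and flag any implied speed no traveler could achieve. The coordinates, timestamps, and speed threshold are illustrative:

from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth (radius ~6371 km).
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

MAX_KMH = 1000  # roughly airliner speed; anything faster is "impossible"

def impossible_travel(t1, lat1, lon1, t2, lat2, lon2):
    hours = abs((t2 - t1).total_seconds()) / 3600
    return hours == 0 or haversine_km(lat1, lon1, lat2, lon2) / hours > MAX_KMH

# Login in New York, then Tokyo 20 minutes later:
print(impossible_travel(
    datetime(2025, 3, 1, 9, 0), 40.71, -74.01,
    datetime(2025, 3, 1, 9, 20), 35.68, 139.69))  # True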
Question 29
Show Answer
A. Documentation is a continuous activity that records findings as they are discovered; it is not the initial analytical step to find the root cause.
B. Interviews provide valuable context but should be conducted after an initial review of technical evidence (logs) to ask more targeted and informed questions.
D. The scenario states the attacker was "detected and blocked," implying containment actions have already been taken. RCA is a post-containment activity focused on "how" it happened.
1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide.
Section 3.2, "Detection and Analysis," details the process of analyzing data from various sources, with logs being a primary component, to understand an incident. This analysis is the prerequisite for the post-incident activities, including root cause determination.
Section 3.4, "Post-Incident Activity," explains that a key part of this phase is to perform a root cause analysis using the data collected during the investigation to prevent future occurrences.
2. Kent, K., & Souppaya, M. (2006). NIST Special Publication 800-86, Guide to Integrating Forensic Techniques into Incident Response.
Section 3.2, "Collection," emphasizes that the initial step in a forensic investigation (a key part of RCA) is collecting volatile and non-volatile data, which includes system, security, and application logs, to build a timeline of events.
3. Carnegie Mellon University, Software Engineering Institute. (2017). Defining the Required Set of Digital Forensic Capabilities for a Security Operations Center (SOC). (CMU/SEI-2017-TN-009).
Section 3.2, "Analysis," describes the analysis phase where investigators examine collected artifacts, such as log files and disk images, to "determine the root cause of an incident" (p. 10). This confirms that log analysis is a fundamental step in the RCA process.
Question 30
Show Answer
A. Back up the configuration file for all network devices: This is a containment or data preservation step, not the initial documentation of the physical scene. It should be performed after the scene is documented.
B. Record and validate each connection: This is a more intrusive and time-consuming process that could alter the state of the evidence. It should be done after initial, non-intrusive documentation like photography.
C. Create a full diagram of the network infrastructure: A network diagram is typically a pre-existing document. Creating one during an incident is not the primary method for documenting the immediate, physical state of the impacted items.
1. National Institute of Standards and Technology (NIST) Special Publication 800-86, Guide to Integrating Forensic Techniques into Incident Response. Section 3.2, "Collection," emphasizes the need for a standardized process for gathering data, which includes "documenting where, when, and how the evidence was collected." For a physical scene, photography is a primary method for documenting the "where" and "how" before evidence is physically handled.
2. National Institute of Standards and Technology (NIST) Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide. Section 3.3.2, "Evidence Gathering and Handling," discusses the importance of preserving evidence and maintaining a chain of custody. Documenting the original state of the evidence, which includes the physical environment, is a foundational step in this process.
3. Carnegie Mellon University, Software Engineering Institute, Best Practices in Digital Evidence Collection. Document CMU/SEI-2006-TN-029. Section 3.1, "Secure the Scene," states, "The first responder should photograph or videotape the entire scene before touching or moving anything." This highlights photography as a critical first step in documenting a physical scene related to a digital incident.
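Once the scene has been photographed, the integrity of the images themselves should be preserved. The following is a minimal sketch, with hypothetical file and analyst names, of recording each photograph in a chain-of-custody log alongside a SHA-256 digest so the image can later be shown to be unaltered:

    # Minimal sketch of logging scene photographs with integrity hashes.
    import hashlib
    from datetime import datetime, timezone

    def sha256_of(path: str) -> str:
        # Hash the file in chunks so large images do not exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    custody_log = []
    for photo in ["rack_front.jpg", "rack_rear_cabling.jpg"]:  # hypothetical files
        custody_log.append({
            "item": photo,
            "sha256": sha256_of(photo),
            "collected_utc": datetime.now(timezone.utc).isoformat(),
            "collected_by": "analyst_on_call",
        })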
Question 31
While reviewing the web server logs, a security analyst notices the following snippet:
..\../..\../boot.ini
Which of the following is being attempted?
Show Answer
B. Remote file inclusion: This attack involves tricking an application into including a file from an external URL, not navigating the local file system with ../.
C. Cross-site scripting: This is an injection attack that involves inserting malicious scripts (e.g., JavaScript) into web pages, which is not present in the log snippet.
D. Remote code execution: While a successful file access could potentially lead to RCE, the log itself only shows an attempt to read a file, not execute arbitrary commands.
E. Enumeration of /etc/passwd: This is a specific goal of directory traversal on Linux/Unix systems. The log targets boot.ini, which is a Windows-specific file.
---
1. MITRE. (2023). CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal'). Common Weakness Enumeration. Retrieved from https://cwe.mitre.org/data/definitions/22.html. The CWE-22 entry describes this exact weakness, where an attacker uses sequences like ../ to access files outside of the intended directory, providing examples such as ../../../etc/passwd.
2. OWASP Foundation. (n.d.). Path Traversal. OWASP Cheat Sheet Series. Retrieved from https://cheatsheetseries.owasp.org/cheatsheets/PathTraversalCheatSheet.html. This official guide explicitly defines the attack and states, "The ../ characters are a file system directive that means 'go up one directory'," showing how it is used to access restricted files.
3. Vick, P. (2014). Web Application Security. Courseware, CS 461, University of Illinois Urbana-Champaign. Retrieved from https://courses.engr.illinois.edu/cs461/sp2014/lectures/20-webappsecurity.pdf. Slide 22 ("Path Traversal") provides a clear example: GET /getimage?name=../../../../etc/passwd, which uses the same ../ technique shown in the question to access a system file.
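For detection purposes, the traversal sequence in the question's snippet is straightforward to match in web server logs. The sketch below, using illustrative request strings, flags both ../ and ..\ sequences, decoding URL-encoded variants before matching:

    # Minimal sketch of flagging directory traversal attempts in web logs.
    import re
    from urllib.parse import unquote

    TRAVERSAL = re.compile(r"\.\.[\\/]")  # ".." followed by / or \

    requests = [  # illustrative request strings
        "/images/logo.png",
        "/download?file=..\\../..\\../boot.ini",           # mixed form, as in the question
        "/download?file=%2e%2e%2f%2e%2e%2fetc%2fpasswd",   # URL-encoded variant
    ]

    for req in requests:
        if TRAVERSAL.search(unquote(req)):  # decode before matching
            print(f"Possible path traversal: {req}")

Note that robust server-side defenses canonicalize the resolved path and check it against the permitted directory, rather than relying on pattern matching alone.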
Question 32
A manufacturer has hired a third-party consultant to assess the security of an OT network that includes both fragile and legacy equipment. Which of the following must be considered to ensure the consultant does no harm to operations?
Show Answer
A. Employing Nmap Scripting Engine scanning techniques: Nmap is an active scanning tool, and its scripting engine can be particularly aggressive. This is highly likely to cause instability or failure in fragile OT systems.
B. Preserving the state of PLC ladder logic prior to scanning: This enables recovery if something goes wrong, but it is not preventative. The primary goal is to avoid causing harm in the first place, which this option does not address.
D. Running scans during off-peak manufacturing hours: This mitigates the impact of a potential failure but does not prevent the scan from causing the failure. Many OT systems operate 24/7, and even a brief disruption can be catastrophic.
---
1. National Institute of Standards and Technology (NIST). (2015). Guide to Industrial Control Systems (ICS) Security (NIST Special Publication 800-82, Revision 2).
Section 5.3.2.2, Vulnerability Scanning, Page 101: "Active scanning on an operational ICS network is often discouraged because it can send unforeseen traffic to the control devices, which can cause them to fail... Passive vulnerability scanning tools are available that can be used on an operational ICS network without the risk of interfering with the control devices."
2. Cybersecurity and Infrastructure Security Agency (CISA). (2018). Recommended Practice: Improving Industrial Control System Cybersecurity with Defense-in-Depth Strategies.
Section 3.3, Identify and Understand Vulnerabilities, Page 11: The document emphasizes the need for careful planning when performing assessments on live ICS environments, stating, "Passive network monitoring and asset discovery tools can be used to identify vulnerabilities without impacting the operational network."
3. Krotofil, M., & Larsen, J. (2016). Passive approach to assessing security of industrial control systems. In 2016 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP) (pp. 631-635). IEEE.
Abstract & Section II.A: The paper highlights the risks of active scanning in ICS, noting that "even a simple port scan can lead to a denial of service." It advocates for passive analysis as a safe alternative for discovering network topology and identifying potential vulnerabilities without interacting with sensitive devices. DOI: 10.1109/PDP.2016.61
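By way of contrast with active scanning, the following is a minimal sketch of the passive approach these sources recommend, using Scapy to inventory hosts from ARP traffic the devices emit on their own. The interface name is an assumption, capture privileges are required, and no packets are transmitted toward the OT equipment:

    # Minimal sketch of passive asset discovery on an OT segment with Scapy.
    # Listening only: the script never sends a frame to the fragile devices.
    from scapy.all import ARP, sniff

    seen = {}

    def note_host(pkt):
        # Record each new IP/MAC pair observed in ARP traffic.
        if ARP in pkt and pkt[ARP].psrc not in seen:
            seen[pkt[ARP].psrc] = pkt[ARP].hwsrc
            print(f"Observed host {pkt[ARP].psrc} ({pkt[ARP].hwsrc})")

    # store=False avoids buffering packets in memory; iface is an assumption.
    sniff(filter="arp", prn=note_host, store=False, timeout=300, iface="eth0")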
Question 33
A cybersecurity analyst is recording the following details:
* ID
* Name
* Description
* Classification of information
* Responsible party
In which of the following documents is the analyst recording this information?
Show Answer
B. Change control documentation: This tracks the lifecycle of changes to IT systems, focusing on the change itself, its justification, and implementation plan, not on cataloging organizational risks.
C. Incident response playbook: This is a tactical, step-by-step guide for handling a specific type of security incident. It contains procedures, not a log of risks with classifications and owners.
D. Incident response plan: This is a high-level, strategic document outlining the overall framework, roles, and responsibilities for handling incidents, not a detailed log of individual risks.
1. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Revision 1).
Section 3.4, "Risk Response," and Appendix H, "Risk Register Example": These sections describe the process of documenting risks. The example risk register in Appendix H includes fields for Risk ID, Risk Description, Impact, and Risk Owner (Responsible Party), which directly correspond to the information being recorded by the analyst in the question.
2. National Institute of Standards and Technology (NIST). (2018). Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (NIST Special Publication 800-37, Revision 2).
Section 2.6, "MONITOR Step": This section discusses the continuous monitoring of risks. It states, "Risk monitoring is part of the overall organizational risk management process and involves, for example... tracking identified risks... The results of risk monitoring are used to update the risk register." This confirms the role of the risk register in tracking identified risks and their attributes.
3. National Institute of Standards and Technology (NIST). (2021). Developing Cyber-Resilient Systems: A Systems Security Engineering Approach for Security and Privacy (NIST Special Publication 800-160, Volume 2, Revision 1).
Section 3.3.2, "Risk Assessment": This section discusses identifying and documenting risks. It notes that the output of this process is a "list of risks to be entered into the risk register," reinforcing that the described activity is part of creating or maintaining this specific document.
Question 34
Show Answer
A. Acquire a copy of taskhw.exe from the impacted host: This is a forensic analysis step that should occur after initial triage confirms the file is likely malicious, not as the very first action.
B. Scan the enterprise to identify other systems with taskhw.exe present: This scoping action is premature. The analyst should first gather intelligence to confirm the file is malicious before initiating a resource-intensive enterprise-wide scan.
D. Change the account that runs the taskhw.exe scheduled task: This is a containment or remediation action. Taking such steps before confirming the nature of the threat is premature and could have unintended consequences.
1. NIST Special Publication 800-61 Rev. 2, "Computer Security Incident Handling Guide": Section 3.2.2, "Analysis," states that after detecting an indicator, an analyst should investigate to determine its nature. The guide mentions that this process may involve "using search engines to look for information related to the indicators" (p. 22). This supports performing a public search as an initial analysis step.
2. MITRE ATT&CK® Framework: The scenario describes Technique T1053.005, "Scheduled Task/Job: Scheduled Task." The detection guidance for this technique involves identifying tasks with unusual properties, such as executing from uncommon directories. The logical step following detection of such an anomaly is analysis, which begins with gathering intelligence on the observed artifacts.
3. Purdue University, "Incident Response" Courseware (CS49000-IR): Incident response methodologies taught in academic settings emphasize a phased approach. The initial analysis phase, following detection, focuses on validating and triaging the alert. This includes researching indicators of compromise (like file names and paths) using external threat intelligence sources before proceeding to deeper analysis or containment. This aligns with performing a public search first.
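A practical first step in that public search is to pivot on a cryptographic hash rather than the file name alone, since names are trivially reused by unrelated software. A minimal sketch follows; the file location is hypothetical:

    # Minimal sketch of the triage step the answer describes: compute the
    # hash of the suspicious binary so it can be searched against public
    # threat intelligence sources.
    import hashlib

    def file_sha256(path: str) -> str:
        # Hash in chunks to handle large binaries without loading them whole.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = file_sha256(r"C:\Users\Public\taskhw.exe")  # hypothetical location
    print(f"Search this hash on public reputation services: {digest}")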
Question 35
Show Answer
B. Authentication: Authentication verifies a user's identity to a system (e.g., via a password), but it does not inherently provide undeniable, transferable proof of a specific action for a third party.
C. Authorization: Authorization defines the permissions an authenticated user has (e.g., read or write access). It is unrelated to proving the origin of a message.
D. Integrity: Integrity ensures that a message has not been altered. While digital signatures also provide integrity, the primary goal described is proving the sender's identity, not the message's unchanged state.
1. National Institute of Standards and Technology (NIST). (2021). Security and Privacy Controls for Information Systems and Organizations (SP 800-53, Rev. 5). In Appendix F, Security and Privacy Control Baselines, the control family for Identification and Authentication (IA) is distinct from controls that support non-repudiation, which are often implemented cryptographically. The NIST Glossary defines non-repudiation as: "Assurance that the sender of information is provided with proof of delivery and the recipient is provided with proof of the sender's identity, so neither can later deny having processed the information."
Source: NIST Computer Security Resource Center (CSRC) Glossary, entry for "non-repudiation".
2. Shirey, R. (2007). Internet Security Glossary, Version 2 (RFC 4949). The Internet Engineering Task Force (IETF). This document defines non-repudiation as a security service that provides proof of origin or proof of delivery.
Source: Section 2, "Definitions," page 207, states: "non-repudiation service: A security service that provides proof of the origin of data or proof of the delivery of data."
3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Pearson Education. In Chapter 1, "Is There a Security Problem in Computing?", the text distinguishes between fundamental security goals.
Source: Section 1.2, "Basic Components of Security," page 10, explains that non-repudiation is the "inability to deny a deed," which is distinct from authentication (verifying identity) and integrity (ensuring data is unaltered).
4. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson Education. The textbook discusses cryptographic principles for network security.
Source: Chapter 8, "Security in Computer Networks," Section 8.2, "Principles of Cryptography," explains that digital signatures provide non-repudiation because only the holder of the private key could have created the signature, offering proof to a third party.
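Source 4's point is easiest to see in code. The sketch below, assuming the Python "cryptography" package is installed, signs a message with an Ed25519 private key; because only the key holder could have produced the signature, successful verification with the public key gives any third party proof of origin, which is the essence of non-repudiation:

    # Minimal sketch of non-repudiation via a digital signature, using the
    # "cryptography" package (assumed installed: pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()  # known only to the sender
    public_key = private_key.public_key()       # shareable with any verifier

    message = b"Transfer approved by sender"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)  # raises if signature is invalid
        print("Signature valid: the sender cannot plausibly deny signing.")
    except InvalidSignature:
        print("Signature invalid.")

Verification also fails if the message is altered, which is why digital signatures provide integrity as a side effect; but as the answer explanations note, the defining goal of non-repudiation is the transferable proof of who signed.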