
Free CySA+ Practice Test CS0-003 – 2025 Updated

Get ready for your CS0-003 exam with our free, accurate, and 2025-updated questions.

Cert Empire is committed to providing the best and latest exam questions for those preparing for the CompTIA CS0-003 exam. To assist students, we’ve made some of our CS0-003 exam prep resources free. You can get plenty of practice with our Free CS0-003 Practice Test.

Question 1

Which of the following statements best describes the MITRE ATT&CK framework?
Options
A: It provides a comprehensive method to test the security of applications.
B: It provides threat intelligence sharing and development of action and mitigation strategies.
C: It helps identify and stop enemy activity by highlighting the areas where an attacker functions.
D: It tracks and understands threats and is an open-source project that evolves.
E: It breaks down intrusions into a clearly defined sequence of phases.
Correct Answer:
It tracks and understands threats and is an open-source project that evolves.
Explanation
The MITRE ATT&CK® framework is best described as a globally accessible, curated knowledge base of adversary tactics, techniques, and procedures (TTPs) based on real-world observations. Its primary purpose is to serve as a foundation for threat modeling and methodology, enabling organizations to track and better understand adversary behaviors. It is an open-source project that is continuously updated and evolves with contributions from the global cybersecurity community, ensuring it remains relevant against emerging threats. This dynamic nature is a defining characteristic of the framework.
Why Incorrect Options are Wrong

A. This describes application security testing (AST) methodologies like SAST or DAST. While ATT&CK can inform such tests, it is not a testing method itself.

B. This is a better description of an Information Sharing and Analysis Center (ISAC) or a threat intelligence platform (TIP), which focus on the sharing and dissemination of intelligence.

C. This describes a primary use case or outcome of applying the ATT&CK framework, rather than describing the fundamental nature of the framework itself, which is a knowledge base.

E. This accurately describes the Lockheed Martin Cyber Kill Chain®, which models an intrusion as a linear sequence of phases, unlike the ATT&CK matrix, which is non-sequential.

References

1. The MITRE Corporation. (2023). About ATT&CK. MITRE ATT&CK®. Retrieved from https://attack.mitre.org/resources/getting-started/. In the "What is ATT&CK?" section, it is defined as "a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations." This supports the "tracks and understands threats" aspect. The community-driven and evolving nature is also a central theme.

2. NIST. (2021). Special Publication 800-160, Volume 2, Revision 1: Developing Cyber-Resilient Systems: A Systems Security Engineering Approach. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-160v2r1. In Appendix F, Section F.3, ATT&CK is described as a "curated knowledge base and model for cyber adversary behavior" used to "characterize and describe adversary behaviors." This aligns with the concept of a tool to track and understand threats.

3. Applebaum, A. (2020). A Survey of the MITRE ATT&CK Framework. SANS Institute Reading Room. Retrieved from https://www.sans.org/white-papers/39390/. On page 4, the paper states, "The ATT&CK framework is a knowledge base of adversary behavior and a model for describing the actions an adversary may take... It is a living, community-driven knowledge base that is continuously updated..." This directly supports the description of an evolving, open project for understanding threats.

Question 2

Which of the following entities should an incident manager work with to ensure correct processes are adhered to when communicating incident reporting to the general public, as a best practice? (Select two).
Options
A: Law enforcement
B: Governance
C: Legal
D: Manager
E: Public relations
F: Human resources
Correct Answer:
Legal, Public relations
Explanation
When communicating an incident to the general public, an incident manager must collaborate with specialized teams to ensure the message is both legally sound and effectively managed. The Legal department is critical for reviewing all external communications to ensure compliance with data breach notification laws and to mitigate legal liability. The Public Relations department is responsible for crafting the message, managing media inquiries, and preserving the organization's reputation. This dual-pronged approach ensures that public statements are accurate, compliant, and strategically delivered to maintain public trust.
Why Incorrect Options are Wrong

A. Law enforcement: Law enforcement is an external agency to be notified if a crime has occurred, not an internal entity that approves the organization's public communication process.

B. Governance: Governance provides the high-level framework and policies, but the specific, operational task of crafting and approving public statements falls to legal and PR teams.

D. Manager: This option is too vague. The incident manager is a manager who coordinates with other specific functional leads, such as the heads of legal and public relations.

F. Human resources: Human resources primarily handles internal communications and personnel-related matters, not external communications with the general public regarding a security incident.

References

1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide.

Section 2.4.3, "Relationships with Other Groups," states, "The CSIRT should also have a close relationship with the organization's general counsel and public affairs offices. The general counsel can provide advice on legal issues... Public affairs can handle the media, which is particularly important during a high-profile incident." This directly supports the involvement of Legal (general counsel) and Public Relations (public affairs).

2. University of Washington. (2023). UW-IT Information Security and Privacy: Incident Response Plan.

Section "Incident Response Team," under the subsection for "External Communications," explicitly lists "University Marketing & Communications" (the public relations function) and the "Office of the Attorney General" (the legal function) as the primary entities responsible for coordinating and approving communications with the media and the public.

3. Solove, D. J., & Citron, D. K. (2017). Risk and Anxiety: A Theory of Data-Breach Harms. The George Washington University Law School Public Law and Legal Theory Paper No. 2017-10.

Section IV.B, "The Response to a Data Breach," discusses the institutional response, emphasizing that "companies often hire public relations firms to help them manage the crisis" and that legal counsel is central to navigating the complex web of state and federal notification laws. This academic source underscores the essential roles of both PR and legal teams. (Available via SSRN and university repositories).

Question 3

A security analyst observed the following activity from a privileged account:

- Accessing emails and sensitive information
- Audit logs being modified
- Abnormal log-in times

Which of the following best describes the observed activity?
Options
A: Irregular peer-to-peer communication
B: Unauthorized privileges
C: Rogue devices on the network
D: Insider attack
Correct Answer:
Insider attack
Explanation
The observed activities are classic indicators of an insider attack. A privileged account, which has legitimate, high-level access, is being used for malicious purposes. Accessing sensitive information unrelated to job duties, modifying audit logs to conceal actions, and logging in at abnormal times are all hallmark behaviors of an insider threat. This threat could be a malicious employee or an external attacker who has compromised an insider's credentials and is masquerading as them. The core issue is the abuse of authorized, privileged access.
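
Abnormal log-in times, one of the listed indicators, can be surfaced with a simple scheduled check of authentication events. Below is a minimal sketch in Python, assuming events have already been parsed into (user, timestamp) pairs; the watchlist of accounts and the business-hours window are illustrative, not prescriptive:

    from datetime import datetime

    # Illustrative inputs: parsed authentication events as (user, timestamp) pairs.
    events = [
        ("svc_admin", datetime(2025, 3, 4, 2, 17)),   # 02:17 - outside business hours
        ("jdoe",      datetime(2025, 3, 4, 10, 5)),
    ]

    PRIVILEGED = {"svc_admin", "root"}   # assumed watchlist of privileged accounts
    BUSINESS_HOURS = range(8, 18)        # 08:00-17:59 local time

    def off_hours_privileged_logins(events):
        """Flag privileged-account logins that occur outside business hours."""
        return [(user, ts) for user, ts in events
                if user in PRIVILEGED and ts.hour not in BUSINESS_HOURS]

    for user, ts in off_hours_privileged_logins(events):
        print(f"ALERT: off-hours login by privileged account {user} at {ts:%Y-%m-%d %H:%M}")
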
Why Incorrect Options are Wrong

A. Irregular peer-to-peer communication: The evidence describes data access and log manipulation, not a specific network communication pattern like P2P file sharing.

B. Unauthorized privileges: The account is described as "privileged," meaning it already has high-level access. The issue is the abuse of existing privileges, not the acquisition of new, unauthorized ones.

C. Rogue devices on the network: The activity is tied to a user account, not an unauthorized piece of hardware. There is no information suggesting a new or unknown device is present.

References

1. National Institute of Standards and Technology (NIST). (2020). NIST Special Publication 800-53 Rev. 5: Security and Privacy Controls for Information Systems and Organizations.

Reference: Control AU-9, Protection of Audit Information, discusses the importance of protecting audit logs from unauthorized modification and deletion. The scenario's "audit logs being modified" is a direct violation of this principle and a key indicator of an attempt to cover tracks, common in insider attacks.

2. Cappelli, D. M., Moore, A. P., & Trzeciak, R. F. (2012). The CERT Guide to Insider Threats: How to Prevent, Detect, and Respond to Information Technology Sabotage (Theft, Fraud). Addison-Wesley Professional.

Reference: Chapter 3, "A Closer Look at the Malicious Insider," details common indicators. It explicitly lists technical indicators such as "Abuse of privileges" and behavioral indicators like "Working odd hours without authorization," which directly correspond to the activities observed in the scenario.

3. Carnegie Mellon University, Software Engineering Institute. (2018). Common Sense Guide to Mitigating Insider Threats, Sixth Edition.

Reference: Page 15, Practice 4: "Monitor and respond to suspicious or disruptive behavior." This guide lists "unusual remote access" and "accessing sensitive information not associated with their job" as key indicators. The modification of logs is described as an attempt to "conceal their actions."

4. Zwicky, E. D., Cooper, S., & Chapman, D. B. (2000). Building Internet Firewalls, 2nd Edition. O'Reilly & Associates. (A foundational text often used in university curricula).

Reference: Chapter 26, "Responding to Security Incidents," describes patterns of intrusion. It notes that attackers, including insiders, often attempt to "cover their tracks" by altering logs and that unusual login times are a primary indicator of a compromised account or malicious insider activity.

Question 4

A penetration tester submitted data to a form in a web application, which enabled the penetration tester to retrieve user credentials. Which of the following should be recommended for remediation of this application vulnerability?
Options
A: Implementing multifactor authentication on the server OS
B: Hashing user passwords on the web application
C: Performing input validation before allowing submission
D: Segmenting the network between the users and the web server
Correct Answer:
Performing input validation before allowing submission
Explanation
The ability to supply crafted data to a web form and subsequently extract user credentials is characteristic of an injection-class vulnerability (e.g., SQL injection). The primary defense recommended by government and academic security guidance is to enforce rigorous server-side input validation (and associated sanitization/parameterization) before the application processes or stores user-supplied data. Implementing such validation prevents malicious input from being interpreted as executable commands, thereby blocking credential disclosure.
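
The remediation can be illustrated with a short sketch. The following is a minimal example in Python using the standard-library sqlite3 module; the table, column, and regex are hypothetical, chosen only to show allow-list validation combined with a parameterized query:

    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")

    def valid_username(value: str) -> bool:
        # Allow-list validation: reject anything outside the expected format.
        return re.fullmatch(r"[A-Za-z0-9_]{1,32}", value) is not None

    user_input = "alice' OR '1'='1"  # hostile form submission

    if valid_username(user_input):
        # Parameterized query: the ? placeholder binds the value as data, not SQL.
        rows = conn.execute(
            "SELECT password_hash FROM users WHERE name = ?", (user_input,)
        ).fetchall()
    else:
        print("Input rejected before reaching the database")

Validation stops the hostile value at the application boundary, and parameterization ensures that even values which pass validation can never alter the query's structure.
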
Why Incorrect Options are Wrong

A. Multifactor authentication on the server OS protects logons to the host, not the web application code path exploited by the form.

B. Hashing passwords at rest limits post-compromise damage but does not stop an attacker from exploiting the form to read data before hashing occurs.

D. Network segmentation limits lateral movement; it does not address the direct flaw inside the application logic that allows credential extraction.

References

1. NIST Special Publication 800-53 Rev. 5, "System and Information Integrity – SI-10: Input Validation," pp. 413-414.

2. NIST Special Publication 800-115, "Technical Guide to Information Security Testing and Assessment," §4.3.3 (Injection Attacks) – recommends input validation to mitigate.

3. MIT OpenCourseWare, 6.858 "Computer Systems Security," Lecture 13: SQL Injection, slides 20-22 – emphasizes sanitization/validation of user input as the primary fix.

4. Viega, J., & McGraw, G. (2001). "Building Secure Software," Addison-Wesley, Ch. 5, pp. 127-130 – lists input validation as foundational for preventing credential-stealing injections.

Question 5

During a security test, a security analyst found a critical application with a buffer overflow vulnerability. Which of the following would be best to mitigate the vulnerability at the application level?
Options
A: Perform OS hardening.
B: Implement input validation.
C: Update third-party dependencies.
D: Configure address space layout randomization.
Correct Answer:
Implement input validation.
Explanation
A buffer overflow occurs when an application attempts to write more data to a memory buffer than it can hold, overwriting adjacent memory. The most effective mitigation at the application level is to implement robust input validation. This secure coding practice involves checking all data received by the application for proper type, length, and format before it is processed. By ensuring that input does not exceed the buffer's allocated size, input validation directly prevents the overflow condition from occurring, thus addressing the root cause of the vulnerability within the application's code.
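
Buffer overflows originate in code written in memory-unsafe languages, but the bounds-checking principle described here can be sketched in any language. A minimal illustration in Python, assuming the validated value is later handed to a native component with a fixed 64-byte buffer (the size and checks are illustrative):

    MAX_FIELD_BYTES = 64  # assumed capacity of the downstream fixed-size buffer

    def validate_field(value: str) -> bytes:
        """Check type, length, and format before the data is processed further."""
        encoded = value.encode("utf-8")
        if len(encoded) > MAX_FIELD_BYTES:
            raise ValueError("input exceeds buffer capacity")
        if not value.isprintable():
            raise ValueError("input contains non-printable characters")
        return encoded

    validate_field("normal input")      # accepted
    # validate_field("A" * 500)         # raises ValueError instead of overflowing
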
Why Incorrect Options are Wrong

A. Perform OS hardening.

This is a system-level, not an application-level, mitigation. It strengthens the operating system but does not fix the underlying coding flaw in the application itself.

C. Update third-party dependencies.

This is only effective if the buffer overflow vulnerability exists within a third-party library the application uses, not in the application's own custom code.

D. Configure address space layout randomization.

Address Space Layout Randomization (ASLR) is an OS-level memory-protection feature that makes exploitation more difficult but does not prevent the buffer overflow from happening.


References

1. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (Special Publication 800-53, Revision 5).

Reference: Control SI-10, "Information Input Validation."

Quote/Paraphrase: The documentation for this control explicitly states that input validation is used to protect against many threats, including "buffer overflows." It emphasizes checking input for validity against defined requirements before it is processed by the application.

2. Kaashoek, M. F., & Zeldovich, N. (2014). 6.858 Computer Systems Security, Fall 2014 Lecture Notes. MIT OpenCourseWare.

Reference: Lecture 2: "Control-flow attacks and defenses."

Quote/Paraphrase: The lecture notes discuss defenses against buffer overflows, highlighting the importance of "checking buffer bounds" before writing data. This bounds checking is a core component of input validation and is presented as a direct countermeasure to prevent the overflow from occurring at the source code level.

3. Dowd, M., McDonald, J., & Schuh, J. (2006). The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities. Addison-Wesley Professional.

Reference: Chapter 5, "Memory Corruption."

Quote/Paraphrase: This foundational academic text on software security explains that the fundamental cause of buffer overflows is a lack of input validation and bounds checking. It details how validating the size of incoming data is a primary preventative measure that must be implemented by developers at the application level.

Question 6

An organization discovered a data breach that resulted in PII being released to the public. During the lessons learned review, the panel identified discrepancies regarding who was responsible for external reporting, as well as the timing requirements. Which of the following actions would best address the reporting issue?
Options
A: Creating a playbook denoting specific SLAs and containment actions per incident type
B: Researching federal laws, regulatory compliance requirements, and organizational policies to document specific reporting SLAs
C: Defining which security incidents require external notifications and incident reporting in addition to internal stakeholders
D: Designating specific roles and responsibilities within the security team and stakeholders to streamline tasks
Correct Answer:
Researching federal laws, regulatory compliance requirements, and organizational policies to document specific reporting SLAs
Explanation
The core problem identified in the lessons learned review is a lack of clarity on who was responsible for external reporting and the timing requirements for a PII breach. These timing requirements (SLAs) are not arbitrary; they are dictated by legal and regulatory frameworks (e.g., GDPR, CCPA, HIPAA). Therefore, the most fundamental and effective action is to research these external mandates and internal policies. This research provides the authoritative basis for documenting the correct reporting timelines and subsequently assigning clear roles and responsibilities, directly addressing both discrepancies identified.
Why Incorrect Options are Wrong

A. This is too broad. While creating a playbook is useful, it doesn't address the root cause of where the reporting SLAs originate, and it incorrectly bundles containment with the reporting issue.

C. This action only defines which incidents require reporting, but the question's scenario already implies reporting was needed. It fails to address the specific problems of "who" and "when."

D. This addresses the "who" (roles) but completely ignores the "timing requirements," which was an equally critical part of the identified problem. Assigning a role without defining the deadline is an incomplete solution.

References

1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide. Section 2.3.2, "Incident Response Policies," states that policy should define external reporting requirements to entities like government agencies and regulatory bodies. This necessitates researching those specific requirements to create a compliant policy.

2. ENISA (European Union Agency for Cybersecurity). (2022). Good practice guide on breach reporting. Section 4, "The notification process," details the legal timelines for reporting under regulations like the GDPR (e.g., "without undue delay and, where feasible, not later than 72 hours after having become aware of it"). This shows that reporting SLAs are derived directly from regulatory compliance research.

3. Romanosky, S. (2016). Examining the costs and causes of cyber incidents. Journal of Cybersecurity, 2(2), 121–135. https://doi.org/10.1093/cybsec/tyw001. This academic journal discusses how incident response is heavily influenced by regulatory environments, stating, "state and federal laws require firms to notify individuals and government agencies of a breach," which reinforces the need to research these laws to define response procedures.

Question 7

Which of the following would an organization use to develop a business continuity plan?
Options
A: A diagram of all systems and interdependent applications
B: A repository for all the software used by the organization
C: A prioritized list of critical systems defined by executive leadership
D: A configuration management database in print at an off-site location
Correct Answer:
A prioritized list of critical systems defined by executive leadership
Explanation
The foundation of a business continuity plan (BCP) is the Business Impact Analysis (BIA). A BIA's primary output is the identification and prioritization of critical business functions and the information systems that support them. This prioritization, defined and approved by executive leadership, dictates the recovery strategies, recovery time objectives (RTO), and resource allocation detailed in the BCP. Without this prioritized list, an organization cannot effectively plan which operations to restore first to minimize impact during a disruption.
Why Incorrect Options are Wrong

A. A diagram of all systems and interdependent applications is a technical artifact for recovery but lacks the business-driven prioritization that guides the BCP.

B. A repository for all the software used by the organization is an element of disaster recovery, not the strategic input for creating the BCP.

D. A configuration management database (CMDB) provides technical details but does not define the business criticality or recovery priority of systems.

References

1. National Institute of Standards and Technology (NIST). (2010). Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Section 2.2, Business Impact Analysis (BIA), p. 11. "The BIA is a key step in the contingency planning process... The BIA helps to identify and prioritize information systems and components critical to supporting the organizationโ€™s mission/business processes."

2. International Organization for Standardization. (2019). ISO 22301:2019 Security and resilience โ€” Business continuity management systems โ€” Requirements. Clause 8.2.2, "Business impact analysis and risk assessment." The standard mandates that an organization shall "identify the processes that support its products and services and the impact that a disruption can have on them" and "determine the priorities for the resumption of products and services and processes."

3. Carnegie Mellon University, Software Engineering Institute. (2016). CERT Resilience Management Model, Version 1.2 (CMU/SEI-2016-TR-010). Service Continuity (SVC) Process Area, SG 2, "Prepare for Service Continuity," SP 2.1, p. 137. This specific practice involves identifying and prioritizing "essential functions and assets" to ensure their continuity.

Question 8

A security analyst reviews the following results of a Nikto scan: [Nikto scan output image] Which of the following should the security administrator investigate next?
Options
A: tiki
B: phpList
C: shtml.exe
D: sshome
Correct Answer:
shtml.exe
Explanation
The Nikto scan output flags /cgi-bin/shtml.exe as a script potentially vulnerable to cross-site scripting (XSS). The Common Gateway Interface (CGI) directory is designed to execute scripts and programs on the server. The presence of an executable file (.exe), especially one flagged with a potential vulnerability, represents a high-priority threat. Such a vulnerability could be leveraged for remote code execution (RCE), allowing an attacker to run arbitrary commands on the server. This risk is significantly more severe than the information disclosure or configuration weaknesses identified for the other options, making it the most critical item for immediate investigation.
Why Incorrect Options are Wrong

A. tiki: The finding for TikiWiki is a missing .htaccess file, which is a medium-risk configuration issue but less urgent than a potentially executable vulnerability.

B. phpList: The scan only identifies the presence of the application and its admin directory, which is a low-risk information disclosure finding.

D. sshome: The scan merely reports the installation of Sshome without noting any specific vulnerabilities, making it the lowest priority among the choices.

References

1. OWASP Web Security Testing Guide (WSTG) v4.2, Section 4.8.3 "Test for CGI Vulnerabilities (OTG-CONFIG-006)": This guide details the security risks associated with CGI. It states, "The cgi-bin directory is a special directory in the root of the web server that is used to house scripts that are to be executed by the web server... Misconfigured or legacy scripts could be abused by an attacker to gain control of the web server." The presence of shtml.exe directly aligns with this high-risk scenario.

2. NIST Special Publication 800-115, "Technical Guide to Information Security Testing and Assessment," Section 5.5.2, "Web Application Scanning": This document outlines the process of security testing, which includes analyzing scanner output. The methodology implicitly requires prioritizing findings based on potential impact. A vulnerability that could lead to code execution (like a flawed CGI executable) would be ranked higher than information disclosure or missing security headers.

3. Carnegie Mellon University, Software Engineering Institute (SEI), "Vulnerability Analysis," Courseware Module: University-level cybersecurity courseware emphasizes the principle of prioritizing vulnerabilities based on exploitability and impact. A server-side executable script in a cgi-bin directory presents a direct vector for server compromise, making it a critical finding that requires immediate attention over less severe configuration issues.

Question 9

A cybersecurity analyst is doing triage in a SIEM and notices that the time stamps between the firewall and the host under investigation are off by 43 minutes. Which of the following is the most likely scenario occurring with the time stamps?
Options
A: The NTP server is not configured on the host.
B: The cybersecurity analyst is looking at the wrong information.
C: The firewall is using UTC time.
D: The host with the logs is offline.
Correct Answer:
The NTP server is not configured on the host.
Explanation
A time discrepancy of 43 minutes is arbitrary and does not correspond to a standard time zone offset, which would typically be in full or half-hour increments. This strongly suggests that one of the devices is experiencing clock drift due to a lack of time synchronization. The Network Time Protocol (NTP) is the standard used to synchronize clocks across a network. In a typical corporate environment, critical infrastructure like a firewall is properly configured with NTP. Therefore, the most probable cause is that the host's clock is not synchronized with an NTP server, causing it to drift over time and creating a time gap when its logs are correlated with other sources in the SIEM.
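
The reasoning is easy to verify with timestamp arithmetic. A minimal sketch with illustrative timestamps: compute the offset between the firewall's and the host's records of the same event, then test whether it falls on a time-zone boundary:

    from datetime import datetime

    firewall_ts = datetime(2025, 3, 4, 14, 0)   # illustrative firewall log time
    host_ts     = datetime(2025, 3, 4, 14, 43)  # same event as logged by the host

    offset_min = abs((host_ts - firewall_ts).total_seconds()) / 60
    print(f"offset: {offset_min:.0f} minutes")

    # Time-zone offsets fall on 30-minute boundaries (a few zones use 45 minutes);
    # an arbitrary remainder points to unsynchronized clock drift instead.
    if offset_min % 30:
        print("Not a time-zone offset - suspect missing NTP sync on the host")
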
Why Incorrect Options are Wrong

B. This suggests human error, but the question asks for the most likely scenario occurring with the time stamps, implying a technical cause for the discrepancy itself.

C. A difference between UTC and a local time zone would result in an offset of one or more full hours (or half-hours), not an arbitrary value like 43 minutes.

D. A host being offline would mean it stops sending logs, but it does not explain why the timestamps in the logs it already sent are out of sync.


References

1. National Institute of Standards and Technology (NIST). (2006). Guide to Computer Security Log Management (Special Publication 800-92).

Section 4.3.1, Time Stamps, Page 4-4: "If the clocks on hosts are not synchronized, it is impossible to have a consistent time reference... The Network Time Protocol (NTP) is typically used to perform time synchronization. Without proper time synchronization, it is impossible to determine the order in which events occurred from their log entries." This directly supports that a lack of synchronization (via NTP) causes time reference issues, which is the root of the problem in the scenario.

2. Zeltser, L. (2012). SANS Institute InfoSec Reading Room: Critical Log Review Checklist for Security Incidents.

Section: Time Synchronization, Page 3: "Confirm that all systems involved in the incident had their time synchronized to a common time source. If time was not synchronized, determine the time offset for each system." This highlights time synchronization as a critical first step in incident analysis and triage, reinforcing that its absence is a common and significant problem.

3. CompTIA. (2022). CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives.

Section 2.3, Page 10: This objective requires the candidate to "analyze data as part of security monitoring activities," specifically mentioning "Log - Timestamps." The scenario directly tests the analyst's ability to interpret and troubleshoot issues with log timestamps, a core competency for the exam. The discrepancy points to a failure in the underlying mechanism (NTP) responsible for maintaining accurate timestamps.

Question 10

Each time a vulnerability assessment team shares the regular report with other teams, inconsistencies regarding versions and patches in the existing infrastructure are discovered. Which of the following is the best solution to decrease the inconsistencies?
Options
A: Implementing credentialed scanning
B: Changing from a passive to an active scanning approach
C: Implementing a central place to manage IT assets
D: Performing agentless scanning
Correct Answer:
Implementing a central place to manage IT assets
Explanation
The core issue described is a lack of a consistent, shared understanding of the IT infrastructure across different teams, leading to disagreements when vulnerability reports are reviewed. Implementing a central place to manage IT assets, such as a Configuration Management Database (CMDB) or an asset inventory system, establishes a single, authoritative source of truth. This ensures that the vulnerability assessment team, system administrators, and other stakeholders are all working from the same baseline data regarding hardware, software versions, and patch status. This foundational step directly resolves the root cause of the inconsistencies between teams.
Why Incorrect Options are Wrong

A. Implementing credentialed scanning: While credentialed scanning provides more accurate data for the vulnerability report, it does not solve the underlying problem of different teams having inconsistent views of the asset inventory itself.

B. Changing from a passive to an active scanning approach: This changes the data collection method but does not address the foundational need for an agreed-upon asset inventory, which is the source of the inter-team inconsistencies.

D. Performing agentless scanning: This is a deployment choice for how scans are conducted. It does not inherently solve the problem of inconsistent asset information between different organizational teams.


References

1. National Institute of Standards and Technology (NIST). (2020). Security and Privacy Controls for Information Systems and Organizations (Special Publication 800-53, Revision 5).

Section: Control CM-8, Information System Component Inventory.

Content: This control mandates the development and maintenance of an inventory of system components. The discussion section states, "The inventory of information system components is essential for many other security controls, such as... flaw remediation (SI-2)... An accurate and up-to-date inventory is a prerequisite for an effective security program." This highlights that a central inventory is foundational for vulnerability management.

2. Fling, R., & Schmidt, D. C. (2009). An Integrated Framework for IT Asset and Security Configuration Management. Proceedings of the 42nd Hawaii International Conference on System Sciences.

Section: 3. An Integrated Framework for IT Asset and Security Configuration Management.

DOI: https://doi.org/10.1109/HICSS.2009.105

Content: The paper argues that effective security management is impossible without accurate asset management. It states, "Without an accurate and up-to-date inventory of IT assets, it is impossible to effectively manage their security configurations... Discrepancies between discovered and recorded information can then be identified and reconciled." This directly supports using a central asset repository to resolve inconsistencies.

3. Kim, D., & Solomon, M. G. (2021). CompTIA CySA+ Cybersecurity Analyst Certification All-in-One Exam Guide, Second Edition (Exam CS0-002). McGraw-Hill. (Note: While a commercial book, its principles are derived from and align with official CompTIA objectives and are widely used in academic settings as courseware. The principle is directly applicable to CS0-003).

Chapter 3: Vulnerability Management.

Content: The text emphasizes that the vulnerability management lifecycle begins with asset inventory. It explains that knowing what assets exist on the network is a prerequisite for scanning them and managing their vulnerabilities effectively. This establishes the central asset inventory as the starting point for reducing discrepancies.

Question 11

While configuring a SIEM for an organization, a security analyst is having difficulty correlating incidents across different systems. Which of the following should be checked first?
Options
A: If appropriate logging levels are set
B: NTP configuration on each system
C: Behavioral correlation settings
D: Data normalization rules
Correct Answer:
NTP configuration on each system
Explanation
Event correlation engines in a SIEM rely on identical or near-identical timestamps to align log records from heterogeneous hosts. If clocks drift, the same security event appears to occur at different times on different systems, preventing rule logic from matching the records into a single incident. Therefore, the very first item to verify is that every log-producing device is synchronized to a common, trusted time source through NTP (or another time-sync mechanism). Once time is consistent, logging levels, normalization, and behavioral rules can be evaluated reliably.
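
To see why unsynchronized clocks defeat correlation, consider a simplified rule that joins events from two sources only when their timestamps fall within a short window. A minimal sketch; the 60-second window and event times are illustrative:

    from datetime import datetime, timedelta

    WINDOW = timedelta(seconds=60)  # illustrative correlation window

    def correlate(a, b):
        """Treat two events as the same incident if their timestamps nearly match."""
        return abs(a - b) <= WINDOW

    fw_event   = datetime(2025, 3, 4, 14, 0, 0)
    host_event = datetime(2025, 3, 4, 14, 43, 0)  # same activity, drifted host clock

    print(correlate(fw_event, host_event))  # False - drift breaks the match
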
Why Incorrect Options are Wrong

A. Appropriate logging levels affect quantity/quality of data, not mis-aligned timestamps that break correlation.

C. Behavioral correlation settings depend on already-aligned events; wrong settings produce false negatives/positives, not timestamp mismatches.

D. Normalization converts diverse log formats into a common schema; it does not fix clock drift that prevents temporal matching.

References

1. Splunk Enterprise Admin Manual, "Configure NTP on all forwarders and indexers," v9.1, p. 47 ("Time synchronization is prerequisite for accurate correlation and alerting").

2. IBM QRadar SIEM Architecture and Deployment Guide, 7.4, Ch. 4 "System Time," pp. 93-94 ("Log sources must use synchronized NTP to ensure events can be correlated across systems").

3. RFC 5905: Mills et al., "Network Time Protocol Version 4," §1, p. 3 ("Accurate clock synchronization is essential for distributed monitoring and intrusion detection systems").

4. A. Katt, "Challenges in Event Correlation for Security Monitoring," Computers & Security, 2020, 99:102028, §4.1 (doi:10.1016/j.cose.2020.102028) – discusses time synchronization as the first requirement for SIEM correlation.

Question 12

An analyst is conducting routine vulnerability assessments on the company infrastructure. When performing these scans, a business-critical server crashes, and the cause is traced back to the vulnerability scanner. Which of the following is the cause of this issue?
Options
A: The scanner is running without an agent installed.
B: The scanner is running in active mode.
C: The scanner is segmented improperly.
D: The scanner is configured with a scanning window.
Correct Answer:
The scanner is running in active mode.
Explanation
An active vulnerability scan directly engages with a target system by sending probes, crafted packets, and various queries to identify vulnerabilities. This process is intrusive and can interact with services and the operating system in unexpected ways. For older, unstable, or business-critical systems with sensitive services, these probes can trigger latent bugs, cause memory leaks, or overwhelm resources, leading to a system crash. The crash is a direct consequence of the scanner's aggressive, interactive testing methodology inherent in active mode.
Why Incorrect Options are Wrong

A. The scanner is running without an agent installed.

The absence of an agent (agentless scanning) is not the direct cause; the intrusive method of the network-based scan is the cause. Agentless scans can be configured to be less intrusive.

C. The scanner is segmented improperly.

Improper network segmentation is an architectural flaw that might allow a scan to reach a critical server, but it does not explain why the scan itself caused the server to crash.

D. The scanner is configured with a scanning window.

A scanning window is a scheduling control used to minimize business impact. It dictates when a scan runs, not how it runs or the technical reason it might cause a system to fail.

References

1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-115, Technical Guide to Information Security Testing and Assessment.

Section 4.3, "Vulnerability Scanning," discusses the nature of these tools. It implicitly supports the answer by describing how scanners interact with target systems to find flaws. The guide notes that security testing, including scanning, carries a risk of "disruption of the services provided by the system," which directly aligns with an active scan crashing a server.

2. Souppaya, M., & Scarfone, K. (2013). NIST Special Publication 800-40 Revision 3, Guide to Enterprise Patch Management Technologies.

Section 3.2.1, "Active Scanners," states: "Active scanners can sometimes cause problems on hosts being scanned, such as crashing a host." This directly identifies active scanning as a potential cause for system crashes.

3. Du, W. (2019). Computer & Internet Security: A Hands-on Approach (2nd ed.). Syracuse University.

Chapter 20, "Vulnerability Assessment," describes how active vulnerability scanners work by sending specially crafted packets to probe for weaknesses. The text explains that these probes can sometimes cause the target services or even the entire operating system to crash due to bugs in the network stack or application code. This is a known risk of active scanning.

Question 13

An analyst is becoming overwhelmed with the number of events that need to be investigated for a timeline. Which of the following should the analyst focus on in order to move the incident forward?
Options
A: Impact
B: Vulnerability score
C: Mean time to detect
D: Isolation
Correct Answer:
Impact
Explanation
During an incident investigation, an analyst is often faced with a massive volume of event data. To effectively manage this and "move the incident forward," the analyst must prioritize. Focusing on the impact of the events is the most critical prioritization factor. Impact assessment helps determine the severity of the incident, the scope of the compromise, and the potential damage to the organization. By prioritizing events that indicate a higher impact (e.g., data exfiltration, privilege escalation on a critical server), the analyst can focus on the most significant threats first, leading to a more efficient and effective response.
Why Incorrect Options are Wrong

B. Vulnerability score: This is a pre-incident metric that quantifies potential weaknesses; it does not help in prioritizing events that have already occurred during an active incident.

C. Mean time to detect: This is a key performance indicator (KPI) used to measure the overall effectiveness of a security program, not a criterion for prioritizing evidence within a specific investigation.

D. Isolation: This is a containment strategy or response action. It is a step taken after an investigation has provided enough evidence to justify it, not a factor used to prioritize the analysis of events.

References

1. NIST Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide. Section 3.2.3, "Incident Prioritization," states that prioritization is critical and is generally based on factors like functional impact (e.g., services are down) and informational impact (e.g., data was exfiltrated). This directly supports focusing on impact to guide the investigation.

2. Carnegie Mellon University, Software Engineering Institute, Defining the Process for Handling Computer Security Incidents. In the document's discussion of the Triage phase (CMU/SEI-99-TR-020, Section 3.1), the process involves assessing the incident's priority based on its "technical severity and the business impact," which aligns with focusing on the overall impact to move the investigation forward.

Question 14

A security team is concerned about recent Layer 4 DDoS attacks against the company website. Which of the following controls would best mitigate the attacks?
Options
A: Block the attacks using firewall rules.
B: Deploy an IPS in the perimeter network.
C: Roll out a CDN.
D: Implement a load balancer.
Correct Answer:
Roll out a CDN.
Explanation
A Content Delivery Network (CDN) is the most effective control for mitigating Layer 4 (Transport Layer) DDoS attacks. A CDN consists of a globally distributed network of proxy servers that can absorb and filter massive volumes of malicious traffic at the network edge, far from the origin server. This distributed architecture is specifically designed to handle the high-bandwidth, volumetric nature of attacks like SYN or UDP floods by dispersing the traffic load and scrubbing it before it can impact the availability of the company's website.
Why Incorrect Options are Wrong

A. Block the attacks using firewall rules.

Firewall rules are ineffective against large-scale DDoS attacks, as the source IPs are numerous and often spoofed, and the firewall itself can be overwhelmed.

B. Deploy an IPS in the perimeter network.

An on-premise Intrusion Prevention System (IPS) can be a bottleneck and its own state tables and processing capacity can be exhausted by a volumetric DDoS attack.

D. Implement a load balancer.

A load balancer distributes all incoming traffic, including the malicious DDoS traffic, which would still overwhelm the backend servers it is distributing to.

References

1. AWS. (2021). AWS Best Practices for DDoS Resiliency. AWS Whitepaper. On page 6, in the section "Reduce the attack surface," it states, "By using Amazon CloudFront (a CDN) and Amazon Route 53, you can leverage the AWS edge network to serve content and resolve DNS queries... This helps to protect your web applications from network and transport layer DDoS attacks."

2. Gkounis, D., & Anagnostopoulos, M. (2022). A Survey on Distributed Denial of Service (DDoS) Attacks and Defense Mechanisms in the Internet of Things (IoT) and Cloud Environment. Journal of Sensor and Actuator Networks, 11(4), 71. In Section 4.2, "Cloud-Based Defense," the paper discusses how cloud providers and CDNs offer DDoS mitigation services that leverage their vast network capacity to absorb and filter attack traffic before it reaches the customer's infrastructure. (https://doi.org/10.3390/jsan11040071)

3. Stallings, W. (2017). Cryptography and Network Security: Principles and Practice (7th ed.). Pearson. In Chapter 20, "Denial-of-Service Attacks," the text describes defenses against flooding attacks, noting that a common commercial solution involves services (like those provided by CDNs) that use a large, distributed network of "attack-mitigation devices" to filter traffic.

Question 15

Which of the following is a useful tool for mapping, tracking, and mitigating identified threats and vulnerabilities with the likelihood and impact of occurrence?
Options
A: Risk register
B: Vulnerability assessment
C: Penetration test
D: Compliance report
Correct Answer:
Risk register
Explanation
A risk register is a foundational tool in risk management used to log and monitor identified risks. It serves as a central repository for mapping threats and vulnerabilities to potential impacts and the likelihood of their occurrence. This allows an organization to prioritize risks, assign ownership, and track the status of mitigation efforts over time. The register is a dynamic document that provides a comprehensive view of the organization's risk landscape, making it the correct tool for the described purpose.
Why Incorrect Options are Wrong

B. Vulnerability assessment: This is a process for identifying and quantifying vulnerabilities. It provides input for a risk register but is not the tracking and management tool itself.

C. Penetration test: This is a simulated attack to discover and exploit vulnerabilities. Its findings are a source of data for risk management, not the tool for tracking it.

D. Compliance report: This document assesses and reports on adherence to specific regulations or standards, which is a subset of overall risk, not the comprehensive management tool.

References

1. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Revision 1).

Section 2.2.3, "Vulnerability Identification," and Section 2.2.4, "Threat Identification," describe the inputs. Section 2.3, "Risk Determination," discusses analyzing likelihood and impact. The output of this entire process is documented and tracked in a risk register to inform risk response (Section 2.4).

2. International Organization for Standardization. (2018). ISO/IEC 27005:2018 Information technology โ€” Security techniques โ€” Information security risk management.

Clause 8.3, "Risk Treatment," outlines the process of developing and implementing a risk treatment plan. The results of risk assessment and the decisions for treatment are recorded, which is the function of a risk register.

3. Carnegie Mellon University, Software Engineering Institute. (1996). Continuous Risk Management Guidebook (CMU/SEI-96-HB-001).

Chapter 4, "Risk Analysis," describes the process of evaluating risks based on their probability (likelihood) and impact. Chapter 5, "Risk Planning," details how this information is used to create mitigation plans, which are then tracked. This entire lifecycle is managed within a risk database, also known as a risk register.

Question 16

A security analyst has found a moderate-risk item in an organization's point-of-sale application. The organization is currently in a change freeze window and has decided that the risk is not high enough to correct at this time. Which of the following inhibitors to remediation does this scenario illustrate?
Options
A: Service-level agreement
B: Business process interruption
C: Degrading functionality
D: Proprietary system
Correct Answer:
Business process interruption
Explanation
The scenario describes a situation where a vulnerability fix is postponed due to a "change freeze window." A change freeze is a period during which changes to systems are restricted to ensure stability during critical operational periods (e.g., holiday season for a retailer). The primary purpose of a change freeze for a point-of-sale (POS) system is to prevent any potential disruption to the sales process. Therefore, the decision to delay the patch is a direct result of prioritizing the continuity of business operations over immediate remediation, which is an example of business process interruption acting as an inhibitor.
Why Incorrect Options are Wrong

A. Service-level agreement: An SLA defines service uptime and performance metrics. While avoiding an interruption helps meet an SLA, the direct inhibitor described by the change freeze is the prevention of the interruption itself.

C. Degrading functionality: This inhibitor applies when the patch or fix is known to cause performance issues or break features. The scenario does not state that the fix would degrade the application's functionality.

D. Proprietary system: This inhibitor occurs when the organization cannot modify the system because it lacks the source code or vendor support. The scenario implies the organization has control but has chosen to wait.

References

1. CompTIA CySA+ (CS0-003) Exam Objectives. (2022). CompTIA.

Section 2.4: Explain the process of prioritizing vulnerabilities. This section explicitly lists "Inhibitors to remediation," which include "Business process interruption." The scenario provided is a classic example of this principle, where operational stability takes temporary precedence over patching.

2. NIST Special Publication 800-40 Revision 3. (2013). Guide to Enterprise Patch Management Technologies. National Institute of Standards and Technology.

Section 2.3.2, Patch Management Challenges, page 11: The document states, "Another challenge is that patching often causes system and application downtime, which may be unacceptable to the organization." This directly supports the concept that avoiding business interruption is a significant factor (inhibitor) in the remediation process.

3. Joint Task Force Transformation Initiative. (2013). NIST Special Publication 800-53 Revision 4: Security and Privacy Controls for Federal Information Systems and Organizations. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-53r4

Control SI-2, Flaw Remediation, page F-169: The discussion for this control emphasizes timely remediation but acknowledges operational constraints. Organizations must implement flaw remediation processes that "minimize the adverse impact on the organization's missions/business functions," which aligns with delaying a patch to avoid business process interruption.

Question 17

A company has a primary control in place to restrict access to a sensitive database. However, the company discovered an authentication vulnerability that could bypass this control. Which of the following is the best compensating control?
Options
A: Running regular penetration tests to identify and address new vulnerabilities
B: Conducting regular security awareness training of employees to prevent social engineering attacks
C: Deploying an additional layer of access controls to verify authorized individuals
D: Implementing intrusion detection software to alert security teams of unauthorized access attempts
Correct Answer:
Deploying an additional layer of access controls to verify authorized individuals
Explanation
A compensating control is a security measure put in place to mitigate the risk associated with a weakness in another control. In this scenario, the primary authentication control has a known vulnerability that allows it to be bypassed. The best compensating control is one that provides a similar level of protection. Deploying an additional, different layer of access control (such as multi-factor authentication or a secondary authorization check) directly compensates for the failure of the primary control by re-establishing a mechanism to verify the identity and authorization of individuals attempting to access the sensitive database.
Why Incorrect Options are Wrong

A. Running regular penetration tests is a detective and assessment process to identify vulnerabilities, not a control that actively compensates for a known, exploitable weakness in real-time.

B. Security awareness training is a preventive control aimed at human factors like social engineering, which does not directly address a technical authentication bypass vulnerability.

D. An Intrusion Detection System (IDS) is a detective control. It generates alerts on suspicious activity but does not prevent the access itself, failing to provide the preventive function of the failed primary control.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations. Appendix F, Glossary, page F-5, defines a compensating control as: "A control that is employed by an organization in lieu of a recommended control... and provides a similar level of protection for an information system." This supports option C, as an additional access control layer provides a similar level of protection to the failed primary one.

2. Purdue University, The Center for Education and Research in Information Assurance and Security (CERIAS), Introduction to Information Security - Lecture 5: Security Policies, Standards, and Controls. This courseware distinguishes between control types, explaining that compensating controls are alternatives used when a primary control is not feasible or is ineffective, which aligns with the scenario of a vulnerable primary control needing a backup.

3. CompTIA CySA+ (CS0-003) Exam Objectives, Domain 1.0: Security Operations, Objective 1.4. This objective covers explaining the importance of vulnerability management, which includes implementing controls to mitigate identified vulnerabilities. The selection of an appropriate control type (compensating, in this case) is a key skill tested under this domain.

Question 18

A company is concerned with finding sensitive file storage locations that are open to the public. The current internal cloud network is flat. Which of the following is the best solution to secure the network?
Options
A: Implement segmentation with ACLs.
B: Configure logging and monitoring to the SIEM.
C: Deploy MFA to cloud storage locations.
D: Roll out an IDS.
Correct Answer:
Implement segmentation with ACLs.
Explanation
The core issue is a "flat" network architecture, which lacks internal controls to isolate resources. This design allows for broad, uncontrolled access, making it easy for sensitive storage to be inadvertently exposed. Implementing network segmentation divides the flat network into smaller, isolated zones (e.g., VLANs or subnets). Access Control Lists (ACLs) are then applied to the boundaries of these segments to enforce granular traffic rules. This approach directly remedies the architectural flaw by creating a defensible space around the sensitive file storage, allowing the company to explicitly deny public access while permitting legitimate internal traffic.
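
As a hedged sketch of the principle, using Linux iptables syntax as a stand-in for a cloud provider's network ACLs (the subnets and port are hypothetical), the boundary of the storage segment would permit only the application subnet and drop everything else:

    # Allow the application subnet to reach the storage segment on the service port,
    # then drop all other traffic into that segment (illustrative addresses).
    iptables -A FORWARD -s 10.0.20.0/24 -d 10.0.10.0/24 -p tcp --dport 443 -j ACCEPT
    iptables -A FORWARD -d 10.0.10.0/24 -j DROP

The deny-by-default rule at the segment boundary is what prevents inadvertent public exposure of the storage locations.
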
Why Incorrect Options are Wrong

B. Configure logging and monitoring to the SIEM.

This is a detective control that alerts on unauthorized access after it occurs, rather than a preventative measure that stops it from happening.

C. Deploy MFA to cloud storage locations.

MFA strengthens user authentication but is ineffective if the storage is misconfigured for public, unauthenticated access, which bypasses the authentication process entirely.

D. Roll out an IDS.

An Intrusion Detection System (IDS) is a detective control. It monitors and alerts on suspicious activity but does not actively block the traffic or fix the underlying network vulnerability.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations.

Control SC-7, Boundary Protection: This control explicitly discusses the need to "monitor and control communications at the external boundary of the system and at key internal boundaries within the system." The discussion section further clarifies, "This control also addresses the segmentation of systems and system components." This directly supports using segmentation and access controls (like ACLs) to secure internal resources.

2. National Institute of Standards and Technology (NIST) Special Publication 800-125B, Secure Virtual Network Configuration for Virtual Machine (VM) Protection.

Section 4.2, Network Segmentation: "Network segmentation is the separation of a network into smaller, isolated networks... Segmentation can be used to isolate VMs from one another and from other resources, which can help to contain the impact of a security breach." This highlights segmentation as a primary security solution in cloud/virtual environments.

3. Purdue University, "Information Security Policy (VII.B.2)", Network Segmentation and Segregation.

Section 1, Standard: "Network segmentation and segregation will be used to control access to Sensitive and Restricted Data... Access Control Lists (ACLs) or other appropriate controls will be used to enforce the separation of network segments." This university policy document demonstrates the direct link between segmentation and ACLs as the standard solution for protecting sensitive data.

Question 19

A security analyst reviews the following Arachni scan results for a web application that stores PII data: [Arachni scan results image] Which of the following should be remediated first?
Options
A: SQL injection
B: RFI
C: XSS
D: Code injection
Show Answer
Correct Answer:
SQL injection
Explanation
When prioritizing vulnerabilities, an analyst must consider the context and potential impact. The web application stores Personally Identifiable Information (PII), making data protection the primary concern. All listed vulnerabilities are high severity, but SQL injection (SQLi) presents the most direct and immediate threat to the database containing the PII. A successful SQLi attack allows an attacker to directly read, modify, or exfiltrate the entire dataset. While Remote File Inclusion (RFI) and Code Injection are also critical vulnerabilities that can lead to server compromise, SQL injection is the most specialized and direct attack vector for stealing the data itself, making it the highest priority for remediation in this scenario.
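As a minimal sketch of why SQL injection so directly exposes stored data, consider the following Python example using an in-memory SQLite table (all names and values are invented):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: attacker-controlled input is concatenated into the query
rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)  # returns every row, i.e., mass PII exposure

# Remediated: a parameterized query treats the input as data, not SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # empty list; the payload no longer alters the query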
Why Incorrect Options are Wrong

B. RFI: Remote File Inclusion can lead to full server compromise, but SQL injection is a more direct and immediate threat specifically to the PII database.

C. XSS: Cross-Site Scripting primarily affects the user's browser (client-side) and is generally less critical than server-side vulnerabilities that can compromise all data at once.

D. Code injection: Similar to RFI, code injection can lead to server compromise, but SQL injection presents the most direct path for an attacker to exfiltrate the PII data.

References

1. OWASP Foundation. (2021). OWASP Top 10:2021. A03:2021-Injection. This standard lists injection flaws, including SQL injection, as a top security risk. The description notes that injection can result in "data loss or corruption" and that an application is vulnerable when hostile data is used directly in SQL queries, highlighting the direct threat to data.

2. Zeldovich, N. (2014). 6.858 Computer Systems Security, Fall 2014. Lecture 10: Web Security. Massachusetts Institute of Technology: MIT OpenCourseWare. Slide 23, "SQL Injection," explicitly states that the impact of this vulnerability is the ability to "Read/modify any data in database," confirming it as a direct threat to data stores like those containing PII.

3. National Institute of Standards and Technology. (2020). Security and Privacy Controls for Information Systems and Organizations (NIST Special Publication 800-53, Revision 5). Control SI-2, "Flaw Remediation," requires organizations to prioritize the remediation of flaws based on risk. In this scenario, the risk of mass PII exfiltration via a direct database attack (SQLi) represents the highest impact, thus demanding the highest priority. (DOI: https://doi.org/10.6028/NIST.SP.800-53r5)

Question 20

A systems administrator receives reports of an internet-accessible Linux server that is running very sluggishly. The administrator examines the server, sees a high amount of memory utilization, and suspects a DoS attack related to half-open TCP sessions consuming memory. Which of the following tools would best help to prove whether this server was experiencing this behavior?
Options
A: Nmap
B: TCPDump
C: SIEM
D: EDR
Show Answer
Correct Answer:
TCPDump
Explanation
The administrator suspects a Denial-of-Service (DoS) attack using half-open TCP sessions, commonly known as a SYN flood. To prove this, the administrator must analyze the network traffic at the packet level to observe a high volume of TCP SYN packets without the corresponding ACK packets that would complete the three-way handshake. TCPDump is a command-line packet analyzer that captures and displays network traffic in real-time. It allows the administrator to filter for and inspect TCP flags (e.g., SYN, ACK), directly verifying the presence of a SYN flood and confirming the hypothesis.
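To confirm the hypothesis, the administrator could capture only packets that have the SYN flag set without the ACK flag, which isolates connection attempts that never complete the handshake (the interface name eth0 is an assumption):

tcpdump -nn -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'

A sustained flood of matching packets, combined with a high count of half-open connections (for example, netstat -ant | grep -c SYN_RECV on Linux), strongly supports the SYN flood hypothesis.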
Why Incorrect Options are Wrong

A. Nmap: Nmap is a network scanner used for discovering hosts and services. It is an active tool for probing networks, not for passively analyzing incoming traffic to diagnose an ongoing attack.

C. SIEM: A Security Information and Event Management (SIEM) system aggregates and correlates log data from multiple sources. While it might receive alerts about a DoS attack, it does not directly capture or analyze raw packets on the affected server.

D. EDR: An Endpoint Detection and Response (EDR) tool monitors endpoint activities like processes and file system changes. It is not designed for deep packet inspection of network traffic to diagnose a network-level DoS attack.

References

1. Paxson, V. (1997). Detecting and analyzing network probes. University of California, Berkeley. In Section 3, "Real-time Intrusion Detection," the use of tools like tcpdump is discussed for monitoring network traffic for suspicious patterns, which is the core activity required to identify a SYN flood. The paper highlights the necessity of packet-level analysis for such tasks.

2. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. Chapter 3, Section 3.7, "TCP Connection Management," describes the three-way handshake. The text explains that a SYN flood attack exploits this process by sending SYN segments but not the final ACK, consuming server resources. Diagnosing this requires observing the state of these handshakes at the packet level, for which a packet sniffer like TCPDump is the standard tool.

3. Carnegie Mellon University, Software Engineering Institute. (2001). UNIX Intrusion Detection Checklist. Version 2.0. Section "Check for signs of a SYN flooding," suggests using tools like netstat to see the half-open connections and packet capture utilities (like tcpdump) to analyze the traffic causing the condition.

4. Scarfone, K., & Mell, P. (2012). Guide to Intrusion Detection and Prevention Systems (IDPS). (NIST Special Publication 800-94). National Institute of Standards and Technology. Section 2.3.2, "Denial of Service (DoS) Attacks," describes SYN floods. The document explains that network-based IDPS sensors operate by analyzing network packets (p. 12), the same principle used by tcpdump for manual analysis.

Question 21

An organization is conducting a pilot deployment of an e-commerce application. The application's source code is not available. Which of the following strategies should an analyst recommend to evaluate the security of the software?
Options
A: Static testing
B: Vulnerability testing
C: Dynamic testing
D: Penetration testing
Show Answer
Correct Answer:
Penetration testing
Explanation
The question specifies that the application's source code is unavailable, which necessitates a "black-box" testing approach. Penetration testing is a comprehensive security evaluation conducted on a running application from an external perspective, simulating a real-world attack without access to the source code. For a high-risk e-commerce application, a penetration test provides the most thorough assessment by not only using automated dynamic testing tools but also employing manual techniques to discover business logic flaws, complex vulnerabilities, and their potential impact. This holistic approach is the most suitable strategy to fully evaluate the software's security posture before a full deployment.
Why Incorrect Options are Wrong

A. Static testing: This method, also known as Static Application Security Testing (SAST), requires access to the application's source code for analysis, which is explicitly unavailable in this scenario.

B. Vulnerability testing: While a valid black-box technique, this typically refers to automated scanning for known vulnerabilities. It is less comprehensive than a penetration test, which includes manual exploitation and analysis.

C. Dynamic testing: This is a correct category of testing for this scenario (DAST), but penetration testing is a more specific and comprehensive strategy that utilizes dynamic testing techniques as part of a broader, goal-oriented assessment.

References

1. National Institute of Standards and Technology (NIST). (2008). Special Publication 800-115, Technical Guide to Information Security Testing and Assessment.

Section 2.3, "Security Assessment Methodologies," differentiates between testing types. It describes penetration testing as a process that "mimics the actions of an attacker" to find and exploit vulnerabilities (p. 8).

Section 5.4.1, "Static Code Analysis," explicitly states that this technique involves "analyzing an application's source code" (p. 58), making it unsuitable for this scenario.

Section 5.4.2, "Dynamic Code Analysis," describes testing a running application, which aligns with the scenario. However, penetration testing (Section 5.3) is presented as a more complete assessment engagement (p. 51).

2. OWASP Foundation. (2020). OWASP Web Security Testing Guide (WSTG), v4.2.

Section 4.1, "Introduction and Objectives," defines security testing methodologies. It contrasts automated vulnerability scanning with the depth of a manual penetration test, stating, "A penetration test is a goal-oriented process... It is not the same as a vulnerability assessment." This supports penetration testing as the more thorough evaluation strategy.

3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Pearson Education.

Chapter 7, "Program Security," Section 7.4, "Targeted Evaluation," discusses different program analysis methods. It distinguishes static analysis (requiring code) from dynamic analysis (observing execution). It frames penetration testing as a form of active, dynamic analysis aimed at simulating attacks to find exploitable flaws, which is the most direct way to "evaluate the security" of a running system.

Question 22

Two employees in the finance department installed a freeware application that contained embedded malware. The network is robustly segmented based on areas of responsibility. These computers had critical sensitive information stored locally that needs to be recovered. The department manager advised all department employees to turn off their computers until the security team could be contacted about the issue. Which of the following is the first step the incident response staff members should take when they arrive?
Options
A: Turn on all systems, scan for infection, and back up data to a USB storage device.
B: Identify and remove the software installed on the impacted systems in the department.
C: Explain that malware cannot truly be removed and then reimage the devices.
D: Log on to the impacted systems with an administrator account that has privileges to perform backups.
E: Segment the entire department from the network and review each computer offline.
Show Answer
Correct Answer:
Segment the entire department from the network and review each computer offline.
Explanation
The first priority in an active malware incident is containment. Although the network is already segmented by department, the malware could still propagate within the finance segment. Therefore, the first logical step for the incident response team is to ensure complete network isolation for the affected department. This prevents any potential spread to other corporate assets. Reviewing each computer while it is offline is the safest method to begin the identification and analysis phase. This approach allows for forensic imaging and data recovery without the risk of activating the malware's network-based triggers or allowing it to communicate with a command-and-control server.
Why Incorrect Options are Wrong

A. Turning on systems is unsafe before containment is verified; it could activate the malware, causing data destruction or further spread.

B. This is an eradication step. It is premature to remove software before the incident is fully contained, identified, and analyzed.

C. This jumps to the recovery phase. Reimaging without first attempting to recover the required critical data would fail to meet the scenario's objectives.

D. Logging on to a compromised system, especially with administrative credentials, is extremely risky and could lead to credential theft or malware execution.

References

1. NIST Special Publication 800-61 Rev. 2, "Computer Security Incident Handling Guide": Section 3.3.2, "Containment," states, "Containment is the first step in the cycle after detection and analysis... Containment strategies can vary based on the type of incident. For example, the strategy for containing a network-based worm is to disconnect the affected hosts from the network." This supports isolating the department's segment as the primary action.

2. Carnegie Mellon University, Software Engineering Institute (SEI), "CSIRT Services": In the document outlining incident handling services, containment is described as a critical early step. It notes, "Containment includes actions to prevent the incident from spreading and causing further damage... This may involve isolating a network segment or disconnecting a system from the network." (CMU/SEI-2017-TR-010, Section 3.2.2).

3. SANS Institute, "The Six Steps of Incident Response": This widely accepted framework, based on the NIST model, places Containment immediately after Identification. The guide emphasizes that before any eradication or recovery, the incident must be contained to limit the damage. Isolating affected systems or network segments is a primary containment technique. (SANS Security Policy Templates, Incident Response, 2021).

Question 23

Which of the following actions would an analyst most likely perform after an incident has been investigated?
Options
A: Risk assessment
B: Root cause analysis
C: Incident response plan
D: Tabletop exercise
Show Answer
Correct Answer:
Root cause analysis
Explanation
After an incident has been investigated and resolved, the primary goal is to prevent its recurrence. This is achieved through the post-incident activity phase, a critical component of which is the root cause analysis (RCA). The RCA process delves deeper than the initial investigation (which focuses on what happened) to determine the fundamental reason why the incident occurred. The findings from the RCA are then used to create a lessons-learned report, update security controls, and revise the incident response plan. This action directly follows the investigation and recovery to improve the organization's security posture.
Why Incorrect Options are Wrong

A. Risk assessment: This is a proactive process performed during the preparation phase to identify and evaluate potential threats and vulnerabilities, not an immediate action after an incident investigation.

C. Incident response plan: This plan is created during the preparation phase. While it is updated based on lessons learned after an incident, the analytical action performed is the root cause analysis that informs those updates.

D. Tabletop exercise: This is a preparatory activity used to train staff and test the incident response plan's effectiveness before a real incident occurs, not a reactive measure taken after one has been investigated.

---

References

1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2: Computer Security Incident Handling Guide.

Section 3.4, "Post-Incident Activity," states, "Holding a 'lessons learned' meeting with all involved parties after a major incident is a helpful way to improve security measures and the incident handling process itself... The meeting provides a chance to... discuss what was done right, what was done wrong, and how to improve in the future." This process of determining how to improve is fundamentally based on analyzing the root cause of the incident.

2. CompTIA. (2022). CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives.

Domain 4.0, Objective 4.3, "Summarize post-incident activities," lists "Lessons learned report" and "Update incident response plan/playbooks." A root cause analysis is the prerequisite analytical step required to generate a meaningful lessons-learned report and determine what updates are necessary.

3. Carnegie Mellon University, Software Engineering Institute. (2016). Defining the 'Follow-Up' Phase of the Incident Management Process.

The document describes the "Follow-Up" or post-incident phase, stating its purpose is "analyzing the incident and its handling, with a view to improving the organization's incident management capability and preventing the incident from recurring." This directly aligns with the objective of a root cause analysis.

Question 24

An analyst has received an IPS event notification from the SIEM stating an IP address, which is known to be malicious, has attempted to exploit a zero-day vulnerability on several web servers. The exploit contained the following snippet:

/wp-json/trx_addons/V2/get/sc_layout?sc=wp_insert_user&role=administrator

Which of the following controls would work best to mitigate the attack represented by this snippet?

Options
A: Limit user creation to administrators only.
B: Limit layout creation to administrators only.
C: Set the directory trx_addons to read only for all users.
D: Set the directory v2 to read only for all users.
Show Answer
Correct Answer:
Limit user creation to administrators only.
Explanation
The provided exploit snippet, sc=wp_insert_user&role=administrator, indicates an attempt to leverage a vulnerability to execute the WordPress function wp_insert_user. The goal is to create a new user and assign them the highly privileged 'administrator' role. This is a classic example of a Broken Access Control vulnerability. The most direct and fundamental mitigating control is to enforce the security principle that only authenticated and authorized administrators can create new users, especially other administrators. This control directly opposes the malicious objective of the exploit.
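The control can be expressed generically; the following Python sketch (the User class and capability name are hypothetical, not WordPress code) shows the server-side authorization check that must run before any sensitive operation:

class User:
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)

def create_user(acting_user, new_role):
    # Enforce the privilege check server-side, before the sensitive operation
    if "create_users" not in acting_user.capabilities:
        raise PermissionError("only administrators may create users")
    # ... proceed with user creation (omitted)

unauthenticated = User(capabilities=[])
create_user(unauthenticated, "administrator")  # raises PermissionError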
Why Incorrect Options are Wrong

B. Limit layout creation to administrators only.

This is incorrect because the attack's payload is wp_insert_user, not layout creation. The sc_layout component is merely the vulnerable API endpoint used to pass the malicious command.

C. Set the directory trx_addons to read only for all users.

This is ineffective. Setting a directory to read-only prevents modification of the files within it but does not prevent the web server from reading and executing the vulnerable scripts.

D. Set the directory v2 to read only for all users.

This is also ineffective for the same reason as option C. Filesystem read-only permissions do not prevent the execution of existing server-side code that contains the vulnerability.

---

References

1. OWASP Foundation. (2021). OWASP Top 10:2021 A01:2021 – Broken Access Control. OWASP. Retrieved from https://owasp.org/Top10/A01_2021-Broken_Access_Control/.

Reference Details: This vulnerability is a direct example of Broken Access Control. The mitigation guidance states, "Access control is only effective if enforced in trusted server-side code or server-less API, where the attacker cannot modify the access control check or metadata." This aligns with enforcing rules about who can create users at the application level.

2. WordPress.org. (n.d.). Hardening WordPress. WordPress Codex. Retrieved from https://wordpress.org/support/article/hardening-wordpress/#security-through-obscurity.

Reference Details: In the "Roles and Capabilities" section (implicitly covered under the principle of least privilege), WordPress documentation outlines that only users with the create_users capability (by default, only Administrators) should be able to create new users. The exploit bypasses this, and the mitigation is to ensure this control is properly enforced.

3. Zheng, X., & Zhang, Y. (2022). A Comprehensive Survey on WordPress Security. ACM Computing Surveys, 55(8), 1-37. https://doi.org/10.1145/3543823.

Reference Details: Section 3.1, "Privilege Escalation," discusses vulnerabilities where attackers gain higher privileges. It notes, "The most common way is to create a new administrator account... The fundamental solution is to strictly check the user's privilege before performing any sensitive operations." This academic source confirms that enforcing privilege checks for user creation is the correct mitigation strategy.

Question 25

A company recently removed administrator rights from all of its end user workstations. An analyst uses CVSSv3.1 exploitability metrics to prioritize the vulnerabilities for the workstations and produces the following information (results not shown). Which of the following vulnerabilities should be prioritized for remediation?
Options
A: nessie.explosion
B: vote.4p
C: sweet.bike
D: great.skills
Show Answer
Correct Answer:
nessie.explosion
Explanation
CVSS v3.1 exploitability is calculated as Exploitability = 8.22 × AV × AC × PR × UI. The highest-weight combination is AV:Network (0.85), AC:Low (0.77), PR:None (0.85), UI:None (0.85). "nessie.explosion" is the only listed vulnerability that already meets all of those values, so its exploitability sub-score (≈3.9) is the maximum possible and greater than the others. Because administrator rights were stripped from workstations, any vulnerability that still needs elevated privileges (PR:High/Low) is now harder to exploit, further increasing the relative priority of "nessie.explosion," which needs no privileges at all. Therefore, it should be remediated first.
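The sub-scores can be reproduced directly from the specification's weights; a short Python sketch (weights from the CVSS v3.1 specification, unchanged scope):

# CVSS v3.1 exploitability weights
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}  # values for unchanged scope
UI = {"N": 0.85, "R": 0.62}

def exploitability(av, ac, pr, ui):
    return 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]

print(round(exploitability("N", "L", "N", "N"), 2))  # 3.89 -- the maximum possible
print(round(exploitability("N", "L", "H", "N"), 2))  # 1.23 -- high PR cuts it sharply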
Why Incorrect Options are Wrong

B. vote.4p – Requires elevated privileges; after admin removal, exploitability drops sharply (PR:High weight is at most 0.50, and only 0.27 for unchanged scope).

C. sweet.bike – Attack Vector is Local/Physical; AV weight is at most 0.55, lowering exploitability.

D. great.skills – Requires user interaction (UI:Required, 0.62) and/or higher privileges, reducing exploitability compared with "nessie.explosion."

References

1. FIRST. "Common Vulnerability Scoring System v3.1 Specification," Sect. 2.1–2.3, Tables 2 & 6 (weights for AV, AC, PR, UI). https://www.first.org/cvss/v3.1/specification-document (pp. 7-10).

2. Scarfone et al., NIST SP 800-115, "Technical Guide to Information Security Testing and Assessment," §3.3 (effect of privilege removal on exploitability).

3. MIT OpenCourseWare, 6.858 "Computer Systems Security," Lecture 6 notes, Principle of Least Privilege and its effect on vulnerability severity.

Question 26

A recent vulnerability scan resulted in an abnormally large number of critical and high findings that require patching. The SLA requires that the findings be remediated within a specific amount of time. Which of the following is the best approach to ensure all vulnerabilities are patched in accordance with the SLA?
Options
A: Integrate an IT service delivery ticketing system to track remediation and closure.
B: Create a compensating control item until the system can be fully patched.
C: Accept the risk and decommission current assets as end of life.
D: Request an exception and manually patch each system.
Show Answer
Correct Answer:
Integrate an IT service delivery ticketing system to track remediation and closure.
Explanation
The scenario describes a need to manage a large volume of remediation tasks against a strict Service Level Agreement (SLA). The most effective and scalable approach is to use a structured system for tracking these tasks. An IT service delivery ticketing system provides the necessary framework to create, assign, prioritize, track, and report on each vulnerability. This ensures that all required actions are documented, accountability is established, and progress toward meeting the SLA can be monitored and verified. This systematic process is crucial for managing a high volume of findings efficiently and preventing items from being overlooked.
Why Incorrect Options are Wrong

B. Create a compensating control item until the system can be fully patched.

This is a temporary risk mitigation strategy, not a remediation plan. It does not address the core requirement of patching the vulnerabilities as stipulated by the SLA.

C. Accept the risk and decommission current assets as end of life.

This is an extreme risk treatment option. Decommissioning a large number of assets is a major business decision and is not a practical or standard response to patching requirements.

D. Request an exception and manually patch each system.

Requesting an exception directly contradicts the goal of adhering to the SLA. While patching is manual, this option lacks a scalable management and tracking mechanism for a large number of findings.

---

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-40r4 (Draft), Guide to Enterprise Patch Management Planning: Preventive Maintenance for Technology.

Reference: Section 3.3, "Patch Management Process," discusses the remediation phase. It emphasizes the need for a structured, repeatable process for applying and validating patches. The document states, "Organizations should have a documented process for patch installation... This process should include steps for scheduling, installation, verification, and documentation." A ticketing system is a primary tool for implementing and documenting such a process.

2. CompTIA. (2022). CompTIA Cybersecurity Analyst (CySA+) CS0-003 Exam Objectives.

Reference: Domain 2.0, "Vulnerability Management," Objective 2.3, "Explain the process of vulnerability management." This objective covers the lifecycle steps of Discovery, Prioritization, Remediation, and Reporting. A ticketing system is a key operational tool that facilitates the remediation and reporting phases by providing a formal mechanism to track work from initiation to closure, ensuring compliance with policies and SLAs.

3. Carnegie Mellon University, Software Engineering Institute, Vulnerability Management.

Reference: In discussions of operational vulnerability management, the process of remediation is detailed. The workflow often involves creating a "trouble ticket" or work order for each vulnerability that needs to be addressed. This ticket is then used to track all actions taken to remediate the vulnerability, ensuring a complete audit trail and verification of compliance with remediation timelines (SLAs). This is described in various CERT/CC publications and operational guides. For example, the concept is foundational in the "Operationalizing Threat Intelligence" courseware.

Question 27

A team of analysts is developing a new internal system that correlates information from a variety of sources, analyzes that information, and then triggers notifications according to company policy. Which of the following technologies was deployed?
Options
A: SIEM
B: SOAR
C: IPS
D: CERT
Show Answer
Correct Answer:
SIEM
Explanation
The system described performs the three core functions of a Security Information and Event Management (SIEM) platform. A SIEM is specifically designed to (1) aggregate and correlate data from a wide variety of sources (e.g., logs, network devices, servers), (2) analyze this correlated information against predefined rules to identify potential security incidents, and (3) trigger notifications or alerts based on the findings. The scenario perfectly aligns with the definition of a SIEM's role in a security operations environment, which is to centralize visibility, detect threats, and provide alerts for further investigation.
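A toy Python sketch of the correlate-analyze-notify pipeline a SIEM implements (the event format and threshold are invented for illustration):

from collections import Counter

events = [
    {"source": "vpn", "user": "jdoe", "action": "login_failed"},
    {"source": "workstation", "user": "jdoe", "action": "login_failed"},
    {"source": "mail", "user": "jdoe", "action": "login_failed"},
]

# Correlate: group failures per user across different log sources
failures = Counter(e["user"] for e in events if e["action"] == "login_failed")

# Analyze against a policy threshold, then trigger a notification
THRESHOLD = 3
for user, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins for {user} across multiple sources")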
Why Incorrect Options are Wrong

B. SOAR: SOAR (Security Orchestration, Automation, and Response) platforms primarily focus on automating the response to alerts (often received from a SIEM), not the initial correlation and notification.

C. IPS: An Intrusion Prevention System (IPS) is a network security tool that inspects traffic and blocks threats in real-time; it does not correlate data from diverse, non-network sources.

D. CERT: A CERT (Computer Emergency Response Team) is a group of people responsible for responding to security incidents; it is a human team, not a technology.

References

1. National Institute of Standards and Technology (NIST). (2006). Special Publication 800-92, Guide to Computer Security Log Management.

Section 2.3, "Log Management Infrastructures," describes the functions of centralized logging systems, which are the foundation of a SIEM. It states, "Centralized log management provides a way to automate the log management process and to more easily correlate events that are recorded in different logs," directly supporting the correlation function mentioned in the question.

2. Tounsi, W., & Rais, H. (2018). Security orchestration, automation and response (SOAR) from a technical perspective. 2018 9th International Conference on Information and Communication Systems (ICICS), 1-6.

Section III.A, "SIEM," defines a SIEM as a system that "collects and analyzes security alerts, logs and other real-time and historical data from security devices, network infrastructure, systems and applications." This paper also clarifies that SOAR platforms are a "downstream security solution that is complementary to SIEM systems," confirming that the described functions precede SOAR's role. (DOI: https://doi.org/10.1109/ACS.2018.8586228)

3. Purdue University. (n.d.). Information Security Policy (VII.B.8).

Section "Procedures," Subsection "Security Information and Event Management (SIEM)," outlines the university's use of SIEM. It states the purpose is to "collect and aggregate log data... for the purposes of analysis and reporting on security-related events," which aligns with the system's functions in the question. This demonstrates the real-world application and definition of SIEM in an institutional policy context.

Question 28

A security analyst received an alert regarding multiple successful MFA logins for a particular user. When reviewing the authentication logs, the analyst sees the following (log entries not shown). Which of the following are most likely occurring, based on the MFA logs? (Select two.)
Options
A: Dictionary attack
B: Push phishing
C: Impossible geo-velocity
D: Subscriber identity module swapping
E: Rogue access point
F: Password spray
Show Answer
Correct Answer:
Push phishing, impossible geo-velocity
Explanation
The provided logs indicate two primary security events. First, the successful logins from New York, London, and Tokyo for the same user (j.doe) within a 15-second window represent a physical impossibility. This anomaly is known as impossible geo-velocity or impossible travel, which is a strong indicator of a compromised account being accessed from multiple locations by an attacker. Second, the multiple successful Multi-Factor Authentication (MFA) events suggest the attacker, having already obtained the user's password, is repeatedly sending MFA requests to the user's legitimate device. This tactic, known as push phishing (or MFA fatigue), aims to annoy or trick the user into approving the authentication request, thereby granting the attacker access.
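The impossibility is easy to quantify with a great-circle distance calculation; a Python sketch (city coordinates are approximate):

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on the Earth's surface
    r = 6371  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Successful logins from New York and London 15 seconds apart
distance = haversine_km(40.71, -74.01, 51.51, -0.13)
speed_kmh = distance / (15 / 3600)
print(f"{distance:.0f} km in 15 s implies {speed_kmh:,.0f} km/h")  # physically impossible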
Why Incorrect Options are Wrong

A. Dictionary attack: This is a password-guessing attack. The logs show successful MFA, indicating the password was already compromised, not being actively guessed.

D. Subscriber identity module swapping: While a method to bypass SMS-based MFA, it does not inherently explain the multiple, rapid, geographically dispersed successful logins shown in the logs.

E. Rogue access point: This is a localized network attack and cannot explain simultaneous successful logins from three different continents.

F. Password spray: This attack involves trying one password against many accounts, whereas the logs show multiple successful logins for a single account.

References

1. Microsoft Corporation. (2023). What is risk? - Microsoft Entra. Microsoft Learn. In the "Risk detections in Microsoft Entra ID Protection" section, "Impossible travel" is defined as a risk detection type that flags sign-ins from geographically distant locations occurring in a time period shorter than the time it would have taken the user to travel between them. This directly corresponds to the log evidence. (Reference: learn.microsoft.com/en-us/entra/id-protection/concept-identity-protection-risks, Section: "Risk detections in Microsoft Entra ID Protection").

2. National Institute of Standards and Technology (NIST). (2017). NIST Special Publication 800-63B: Digital Identity Guidelines, Authentication and Lifecycle Management. In Section 5.1.1, "Authentication Process," the document states that verifiers should analyze the context of authentication requests for risk signals, which includes anomalous locations or velocities. (DOI: https://doi.org/10.6028/NIST.SP.800-63b, Page 17, Section 5.1.1).

3. Carnegie Mellon University. (2022). Duo MFA Push Harassment. Information Security Office. This university documentation describes MFA Push Harassment (also known as MFA Fatigue or Push Phishing) as a technique where attackers "send a flood of push notifications to a user's mobile device, hoping the user will accept a prompt to allow the attacker to gain access." This matches the likely method used to achieve the multiple successful MFA logins. (Reference: cmu.edu/iso/news/2022/duo-mfa-push-harassment.html).

Question 29

An attacker recently gained unauthorized access to a financial institution's database, which contains confidential information. The attacker exfiltrated a large amount of data before being detected and blocked. A security analyst needs to complete a root cause analysis to determine how the attacker was able to gain access. Which of the following should the analyst perform first?
Options
A: Document the incident and any findings related to the attack for future reference.
B: Interview employees responsible for managing the affected systems.
C: Review the log files that record all events related to client applications and user access.
D: Identify the immediate actions that need to be taken to contain the incident and minimize damage.
Show Answer
Correct Answer:
Review the log files that record all events related to client applications and user access.
Explanation
The first step in a technical root cause analysis (RCA) is to reconstruct the sequence of events based on factual evidence. Log files, which record events such as user access, application activity, and system changes, provide the primary source of objective data for this purpose. By reviewing these logs, an analyst can establish a timeline, identify the initial point of compromise, and trace the attacker's actions through the system. This foundational analysis of empirical data is essential before interviewing personnel or documenting conclusions, and it occurs after initial containment has been achieved.
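A minimal Python sketch of that first analytical step, ordering raw entries into a timeline for the affected account (the log format, hostname, and address are invented):

from datetime import datetime

log_lines = [
    "2024-05-01T03:14:02Z db01 app: bulk SELECT on customers by svc_report",
    "2024-05-01T03:12:44Z db01 auth: login user=svc_report src=203.0.113.50",
]

def timestamp(line):
    return datetime.strptime(line.split()[0], "%Y-%m-%dT%H:%M:%SZ")

# Sort events chronologically to reconstruct the sequence of attacker actions
for line in sorted(log_lines, key=timestamp):
    print(line)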
Why Incorrect Options are Wrong

A. Documentation is a continuous activity that records findings as they are discovered; it is not the initial analytical step to find the root cause.

B. Interviews provide valuable context but should be conducted after an initial review of technical evidence (logs) to ask more targeted and informed questions.

D. The scenario states the attacker was "detected and blocked," implying containment actions have already been taken. RCA is a post-containment activity focused on "how" it happened.

References

1. National Institute of Standards and Technology (NIST). (2012). Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide.

Section 3.2, "Detection and Analysis," details the process of analyzing data from various sources, with logs being a primary component, to understand an incident. This analysis is the prerequisite for the post-incident activities, including root cause determination.

Section 3.4, "Post-Incident Activity," explains that a key part of this phase is to perform a root cause analysis using the data collected during the investigation to prevent future occurrences.

2. Kent, K., & Souppaya, M. (2006). NIST Special Publication 800-86, Guide to Integrating Forensic Techniques into Incident Response.

Section 3.2, "Collection," emphasizes that the initial step in a forensic investigation (a key part of RCA) is collecting volatile and non-volatile data, which includes system, security, and application logs, to build a timeline of events.

3. Carnegie Mellon University, Software Engineering Institute. (2017). Defining the Required Set of Digital Forensic Capabilities for a Security Operations Center (SOC). (CMU/SEI-2017-TN-009).

Section 3.2, "Analysis," describes the analysis phase where investigators examine collected artifacts, such as log files and disk images, to "determine the root cause of an incident" (p. 10). This confirms that log analysis is a fundamental step in the RCA process.

Question 30

A security analyst is responding to an incident that involves a malicious attack on a network data closet. Which of the following best explains how an analyst should properly document the incident?
Options
A: Back up the configuration file for all network devices
B: Record and validate each connection
C: Create a full diagram of the network infrastructure
D: Take photos of the impacted items
Show Answer
Correct Answer:
Take photos of the impacted items
Explanation
When responding to an incident involving a physical location, such as a data closet, the first priority in documentation is to preserve the state of the scene exactly as it was found. Taking photographs of the impacted items is a standard forensic procedure that creates an accurate, time-stamped visual record before any evidence is handled, moved, or altered. This non-intrusive method captures the physical connections, device statuses, and any signs of tampering, which is critical for subsequent investigation and analysis. This action establishes a baseline of the scene's condition at the time of discovery.
Why Incorrect Options are Wrong

A. Back up the configuration file for all network devices: This is a containment or data preservation step, not the initial documentation of the physical scene. It should be performed after the scene is documented.

B. Record and validate each connection: This is a more intrusive and time-consuming process that could alter the state of the evidence. It should be done after initial, non-intrusive documentation like photography.

C. Create a full diagram of the network infrastructure: A network diagram is typically a pre-existing document. Creating one during an incident is not the primary method for documenting the immediate, physical state of the impacted items.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-86, Guide to Integrating Forensic Techniques into Incident Response. Section 3.2, "Collection," emphasizes the need for a standardized process for gathering data, which includes "documenting where, when, and how the evidence was collected." For a physical scene, photography is a primary method for documenting the "where" and "how" before evidence is physically handled.

2. National Institute of Standards and Technology (NIST) Special Publication 800-61 Rev. 2, Computer Security Incident Handling Guide. Section 3.3.2, "Evidence Gathering and Handling," discusses the importance of preserving evidence and maintaining a chain of custody. Documenting the original state of the evidence, which includes the physical environment, is a foundational step in this process.

3. Carnegie Mellon University, Software Engineering Institute, Best Practices in Digital Evidence Collection. Document CMU/SEI-2006-TN-029. Section 3.1, "Secure the Scene," states, "The first responder should photograph or videotape the entire scene before touching or moving anything." This highlights photography as a critical first step in documenting a physical scene related to a digital incident.

Question 31

While reviewing the web server logs, a security analyst notices the following snippet:

..\../..\../boot.ini

Which of the following is being attempted?

Options
A: Directory traversal
B: Remote file inclusion
C: Cross-site scripting
D: Remote code execution
E: Enumeration of /etc/passwd
Show Answer
Correct Answer:
Directory traversal
Explanation
The log snippet ..\../..\../boot.ini is a classic indicator of a directory traversal (or path traversal) attack. The ../ sequence is a command used in file systems to move up to the parent directory. By chaining these sequences, an attacker attempts to break out of the web application's root directory and access sensitive system files located elsewhere on the server. In this case, the target is boot.ini, a system configuration file from older Windows operating systems, which could reveal information about the server's setup.
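A common server-side defense is to resolve the requested path and verify that it stays inside the web root; a Python sketch (the web-root path is hypothetical):

import os

BASE_DIR = "/var/www/html"  # hypothetical web root

def safe_path(user_supplied):
    # Resolve ../ sequences, then confirm the result stays inside the web root
    resolved = os.path.realpath(os.path.join(BASE_DIR, user_supplied))
    if not resolved.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal attempt blocked")
    return resolved

print(safe_path("images/logo.png"))  # allowed
try:
    safe_path("../../../../etc/passwd")  # the Linux analogue of the boot.ini attempt
except ValueError as err:
    print(err)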
Why Incorrect Options are Wrong

B. Remote file inclusion: This attack involves tricking an application into including a file from an external URL, not navigating the local file system with ../.

C. Cross-site scripting: This is an injection attack that involves inserting malicious scripts (e.g., JavaScript) into web pages, which is not present in the log snippet.

D. Remote code execution: While a successful file access could potentially lead to RCE, the log itself only shows an attempt to read a file, not execute arbitrary commands.

E. Enumeration of /etc/passwd: This is a specific goal of directory traversal on Linux/Unix systems. The log targets boot.ini, which is a Windows-specific file.

---

References

1. MITRE. (2023). CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal'). Common Weakness Enumeration. Retrieved from https://cwe.mitre.org/data/definitions/22.html. The CWE-22 entry describes this exact weakness, where an attacker uses sequences like ../ to access files outside of the intended directory, providing examples such as ../../../etc/passwd.

2. OWASP Foundation. (n.d.). Path Traversal. OWASP Cheat Sheet Series. Retrieved from https://cheatsheetseries.owasp.org/cheatsheets/PathTraversalCheatSheet.html. This official guide explicitly defines the attack and states, "The ../ characters are a file system directive that means 'go up one directory'," showing how it is used to access restricted files.

3. Vick, P. (2014). Web Application Security. Courseware, CS 461, University of Illinois Urbana-Champaign. Retrieved from https://courses.engr.illinois.edu/cs461/sp2014/lectures/20-webappsecurity.pdf. Slide 22 ("Path Traversal") provides a clear example: GET /getimage?name=../../../../etc/passwd, which uses the same ../ technique shown in the question to access a system file.

Question 32

A manufacturer has hired a third-party consultant to assess the security of an OT network that includes both fragile and legacy equipment. Which of the following must be considered to ensure the consultant does no harm to operations?

Options
A: Employing Nmap Scripting Engine scanning techniques
B: Preserving the state of PLC ladder logic prior to scanning
C: Using passive instead of active vulnerability scans
D: Running scans during off-peak manufacturing hours
Show Answer
Correct Answer:
Using passive instead of active vulnerability scans
Explanation
Operational Technology (OT) networks, particularly those with fragile and legacy equipment, are highly susceptible to disruption from non-standard network traffic. Active vulnerability scans send probes and packets that these systems are not designed to handle, which can cause them to crash, enter a fault state, or behave unpredictably, leading to operational failure. Passive scanning is a non-intrusive method that monitors existing network traffic (e.g., via a SPAN port) to identify assets, protocols, and vulnerabilities without sending any traffic to the devices. This approach is the standard for initial assessments in sensitive OT environments to ensure the "do no harm" principle is upheld.
Why Incorrect Options are Wrong

A. Employing Nmap Scripting Engine scanning techniques: Nmap is an active scanning tool, and its scripting engine can be particularly aggressive. This is highly likely to cause instability or failure in fragile OT systems.

B. Preserving the state of PLC ladder logic prior to scanning: This is a recovery action, not a preventative one. The primary goal is to avoid causing harm in the first place, which this option does not address.

D. Running scans during off-peak manufacturing hours: This mitigates the impact of a potential failure but does not prevent the scan from causing the failure. Many OT systems operate 24/7, and even a brief disruption can be catastrophic.

---

References

1. National Institute of Standards and Technology (NIST). (2015). Guide to Industrial Control Systems (ICS) Security (NIST Special Publication 800-82, Revision 2).

Section 5.3.2.2, Vulnerability Scanning, Page 101: "Active scanning on an operational ICS network is often discouraged because it can send unforeseen traffic to the control devices, which can cause them to fail... Passive vulnerability scanning tools are available that can be used on an operational ICS network without the risk of interfering with the control devices."

2. Cybersecurity and Infrastructure Security Agency (CISA). (2018). Recommended Practice: Improving Industrial Control System Cybersecurity with Defense-in-Depth Strategies.

Section 3.3, Identify and Understand Vulnerabilities, Page 11: The document emphasizes the need for careful planning when performing assessments on live ICS environments, stating, "Passive network monitoring and asset discovery tools can be used to identify vulnerabilities without impacting the operational network."

3. Krotofil, M., & Larsen, J. (2016). Passive approach to assessing security of industrial control systems. In 2016 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP) (pp. 631-635). IEEE.

Abstract & Section II.A: The paper highlights the risks of active scanning in ICS, noting that "even a simple port scan can lead to a denial of service." It advocates for passive analysis as a safe alternative for discovering network topology and identifying potential vulnerabilities without interacting with sensitive devices. DOI: 10.1109/PDP.2016.61

Question 33

A cybersecurity analyst is recording the following details: ID, name, description, classification of information, and responsible party. In which of the following documents is the analyst recording this information?

Options
A: Risk register
B: Change control documentation
C: Incident response playbook
D: Incident response plan
Show Answer
Correct Answer:
Risk register
Explanation
The analyst is populating a risk register. A risk register is a document used in risk management to track identified risks. It typically includes a unique identifier (ID), a name and description of the risk, an assessment of the risk's impact (which is directly related to the classification of the information or asset affected), and the assignment of a responsible party or risk owner who is accountable for monitoring and treating the risk. The combination of these specific fields is characteristic of a risk register, which serves as a central repository for risk-related information.
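An illustrative register entry using those exact fields (all values are invented):

ID: R-017
Name: Public exposure of customer PII store
Description: Misconfigured cloud storage location is reachable without authentication
Classification of information: Confidential
Responsible party: Cloud infrastructure manager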
Why Incorrect Options are Wrong

B. Change control documentation: This tracks the lifecycle of changes to IT systems, focusing on the change itself, its justification, and implementation plan, not on cataloging organizational risks.

C. Incident response playbook: This is a tactical, step-by-step guide for handling a specific type of security incident. It contains procedures, not a log of risks with classifications and owners.

D. Incident response plan: This is a high-level, strategic document outlining the overall framework, roles, and responsibilities for handling incidents, not a detailed log of individual risks.

References

1. National Institute of Standards and Technology (NIST). (2012). Guide for Conducting Risk Assessments (NIST Special Publication 800-30, Revision 1).

Section 3.4, "Risk Response," and Appendix H, "Risk Register Example": These sections describe the process of documenting risks. The example risk register in Appendix H includes fields for Risk ID, Risk Description, Impact, and Risk Owner (Responsible Party), which directly correspond to the information being recorded by the analyst in the question.

2. National Institute of Standards and Technology (NIST). (2018). Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (NIST Special Publication 800-37, Revision 2).

Section 2.6, "MONITOR Step": This section discusses the continuous monitoring of risks. It states, "Risk monitoring is part of the overall organizational risk management process and involves, for example... tracking identified risks... The results of risk monitoring are used to update the risk register." This confirms the role of the risk register in tracking identified risks and their attributes.

3. The MITRE Corporation. (2021). Cyber Resiliency Engineering Framework (NIST Special Publication 800-160, Volume 2, Revision 1).

Section 3.3.2, "Risk Assessment": This section discusses identifying and documenting risks. It notes that the output of this process is a "list of risks to be entered into the risk register," reinforcing that the described activity is part of creating or maintaining this specific document.

Question 34

A threat hunter seeks to identify new persistence mechanisms installed in an organization's environment. In collecting scheduled tasks from all enterprise workstations, the following host details are aggregated (host details not shown). Which of the following actions should the hunter perform first based on the details above?
Options
A: Acquire a copy of taskhw.exe from the impacted host
B: Scan the enterprise to identify other systems with taskhw.exe present
C: Perform a public search for malware reports on taskhw.exe.
D: Change the account that runs the taskhw.exe scheduled task
Show Answer
Correct Answer:
Perform a public search for malware reports on taskhw.exe.
Explanation
The scheduled task exhibits multiple indicators of compromise: execution from a non-standard, user-writable directory (C:\Users\Public), a generic filename (taskhw.exe) masquerading as a legitimate task, and execution with the highest privileges (SYSTEM). The most logical and efficient first step for a threat hunter is to perform open-source intelligence (OSINT) gathering. A public search for the filename and associated indicators can rapidly determine if this is a known malware family or tool. This initial triage provides immediate context, helping to confirm maliciousness and guiding all subsequent investigative and response actions, such as scoping the incident or performing deeper forensic analysis.
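On an individual Windows host, the same scheduled-task details can be pulled for triage with the built-in schtasks utility, for example:

schtasks /Query /FO LIST /V

Filtering the verbose output for user-writable locations (for example, piping it through findstr /I Public) quickly surfaces tasks, like this one, that execute from C:\Users\Public.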
Why Incorrect Options are Wrong

A. Acquire a copy of taskhw.exe from the impacted host: This is a forensic analysis step that should occur after initial triage confirms the file is likely malicious, not as the very first action.

B. Scan the enterprise to identify other systems with taskhw.exe present: This scoping action is premature. The analyst should first gather intelligence to confirm the file is malicious before initiating a resource-intensive enterprise-wide scan.

D. Change the account that runs the taskhw.exe scheduled task: This is a containment or remediation action. Taking such steps without first confirming the nature of the threat is improper procedure and could have unintended consequences.

References

1. NIST Special Publication 800-61 Rev. 2, "Computer Security Incident Handling Guide": Section 3.2.2, "Analysis," states that after detecting an indicator, an analyst should investigate to determine its nature. The guide mentions that this process may involve "using search engines to look for information related to the indicators" (p. 22). This supports performing a public search as an initial analysis step.

2. MITRE ATT&CKยฎ Framework: The scenario describes Technique T1053.005, "Scheduled Task/Job: Scheduled Task." The detection guidance for this technique involves identifying tasks with unusual properties, such as executing from uncommon directories. The logical step following detection of such an anomaly is analysis, which begins with gathering intelligence on the observed artifacts.

3. Purdue University, "Incident Response" Courseware (CS49000-IR): Incident response methodologies taught in academic settings emphasize a phased approach. The initial analysis phase, following detection, focuses on validating and triaging the alert. This includes researching indicators of compromise (like file names and paths) using external threat intelligence sources before proceeding to deeper analysis or containment. This aligns with performing a public search first.

Question 35

An analyst is designing a message system for a bank. The analyst wants to include a feature that allows the recipient of a message to prove to a third party that the message came from the sender. Which of the following information security goals is the analyst most likely trying to achieve?
Options
A: Non-repudiation
B: Authentication
C: Authorization
D: Integrity
Show Answer
Correct Answer:
Non-repudiation
Explanation
The core requirement is for a recipient to be able to prove to a third party that a message originated from a specific sender. This is the definition of non-repudiation. It provides assurance that the sender cannot later deny having sent the message. This is typically achieved using digital signatures, where the sender's private key is used to sign the message, creating a unique, verifiable proof of origin that can be presented to external entities like auditors or courts. The other concepts, while related to security, do not address this specific requirement of third-party proof.
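Digital signatures are the standard mechanism for achieving this; a Python sketch using the third-party cryptography package (Ed25519 is chosen here only for brevity):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held only by the sender
message = b"Transfer $100 to account 12345"
signature = private_key.sign(message)

# Any third party holding the sender's public key can verify the signature,
# so the sender cannot plausibly deny having produced the message.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message)
    print("valid: message provably originated with the private key holder")
except InvalidSignature:
    print("invalid: message or signature was altered")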
Why Incorrect Options are Wrong

B. Authentication: Authentication verifies a user's identity to a system (e.g., via a password), but it does not inherently provide undeniable, transferable proof of a specific action for a third party.

C. Authorization: Authorization defines the permissions an authenticated user has (e.g., read or write access). It is unrelated to proving the origin of a message.

D. Integrity: Integrity ensures that a message has not been altered. While digital signatures also provide integrity, the primary goal described is proving the sender's identity, not the message's unchanged state.

References

1. National Institute of Standards and Technology (NIST). (2021). Security and Privacy Controls for Information Systems and Organizations (SP 800-53, Rev. 5). In Appendix F, Security and Privacy Control Baselines, the control family for Identification and Authentication (IA) is distinct from controls that support non-repudiation, which are often implemented cryptographically. The NIST Glossary defines non-repudiation as: "Assurance that the sender of information is provided with proof of delivery and the recipient is provided with proof of the sender's identity, so neither can later deny having processed the information."

Source: NIST Computer Security Resource Center (CSRC) Glossary, entry for "non-repudiation".

2. Shirey, R. (2007). Internet Security Glossary, Version 2 (RFC 4949). The Internet Engineering Task Force (IETF). This document defines non-repudiation as a security service that provides proof of origin or proof of delivery.

Source: Section 2, "Definitions," page 207, states: "non-repudiation service: A security service that provides proof of the origin of data or proof of the delivery of data."

3. Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing (5th ed.). Pearson Education. In Chapter 1, "Is There a Security Problem in Computing?", the text distinguishes between fundamental security goals.

Source: Section 1.2, "Basic Components of Security," page 10, explains that non-repudiation is the "inability to deny a deed," which is distinct from authentication (verifying identity) and integrity (ensuring data is unaltered).

4. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson Education. The textbook discusses cryptographic principles for network security.

Source: Chapter 8, "Security in Computer Networks," Section 8.2, "Principles of Cryptography," explains that digital signatures provide non-repudiation because only the holder of the private key could have created the signature, offering proof to a third party.
