CompTIA Cloud+ CV0-004 Exam Questions 2025

Get ready for the CompTIA Cloud+ (CV0-004) certification with our expertly crafted practice questions. Each question is aligned with the latest Cloud+ exam objectives and reviewed by cloud professionals to ensure accuracy and relevance. You'll receive dependable answers, detailed explanations that cover both correct and incorrect options, and access to our realistic online exam simulator. Try free sample questions today and see why IT professionals trust Cert Empire to achieve their Cloud+ certification success.

Exam Questions

Question 1

A cloud engineer is in charge of deploying a platform in an IaaS public cloud. The application tracks state using session cookies, and there are no affinity restrictions. Which of the following will help the engineer reduce monthly expenses and allow the application to provide the service?
Options
A: Resource metering
B: Reserved resources
C: Dedicated host
D: Pay-as-you-go model
Correct Answer:
Pay-as-you-go model
Explanation
The application's architecture, which uses session cookies for state and has no affinity restrictions, makes it stateless from the server's perspective. This design is ideal for horizontal scaling and elasticity, allowing infrastructure to be scaled up or down based on real-time demand. The pay-as-you-go model directly aligns with this capability by charging only for the resources consumed. This prevents over-provisioning for peak capacity and ensures that the organization does not pay for idle resources during periods of low traffic, thereby minimizing monthly expenses while maintaining service availability.
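
To make the cost reasoning concrete, here is a minimal arithmetic sketch comparing a month of pay-as-you-go billing against reserving enough capacity for peak load. The hourly rates and the demand profile are illustrative assumptions, not any provider's actual pricing.

```python
# Illustrative cost comparison for a bursty, stateless workload.
# Rates and usage numbers are made-up assumptions for demonstration only.

HOURLY_RATE_ON_DEMAND = 0.10   # assumed $/instance-hour, pay-as-you-go
HOURLY_RATE_RESERVED = 0.06    # assumed discounted $/instance-hour, reserved
HOURS_PER_MONTH = 730

# Assumed demand profile: 2 instances off-peak, 10 instances for 4 peak hours/day.
peak_hours = 4 * 30
off_peak_hours = HOURS_PER_MONTH - peak_hours

# Pay-as-you-go: scale in and out with demand, pay only for hours actually run.
pay_as_you_go = (10 * peak_hours + 2 * off_peak_hours) * HOURLY_RATE_ON_DEMAND

# Reserved: must commit to peak capacity (10 instances) for the whole month.
reserved_for_peak = 10 * HOURS_PER_MONTH * HOURLY_RATE_RESERVED

print(f"Pay-as-you-go (elastic): ${pay_as_you_go:,.2f}")
print(f"Reserved at peak size:   ${reserved_for_peak:,.2f}")
```

Because the stateless application can scale in to two instances for most of the day, paying only for consumed hours undercuts even a discounted reservation sized for peak demand.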
Why Incorrect Options are Wrong

A. Resource metering: This is the process of measuring resource consumption. While it enables the pay-as-you-go model, it is not the cost-saving strategy itself.

B. Reserved resources: This model offers discounts for a long-term commitment (e.g., 1-3 years) and is best suited for stable, predictable workloads, not necessarily for leveraging elasticity to reduce costs.

C. Dedicated host: This provides a physical server for a single tenant. It is the most expensive option and is typically used for compliance or software licensing, directly contradicting the goal of reducing expenses.

---

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-145, "The NIST Definition of Cloud Computing":

Section 2, Page 2: Defines "On-demand self-service" and "Measured service" as essential characteristics of cloud computing. The pay-as-you-go model is a direct implementation of these principles, allowing consumers to provision resources as needed and pay only for what they use. The stateless application in the scenario is perfectly suited to leverage this on-demand nature for cost efficiency.

2. Amazon Web Services (AWS) Documentation, "Amazon EC2 Pricing":

On-Demand Pricing Section: "With On-Demand instances, you pay for compute capacity by the hour or the second with no long-term commitments... This frees you from the costs and complexities of planning, purchasing, and maintaining hardware... [It is recommended for] applications with short-term, spiky, or unpredictable workloads that cannot be interrupted." This directly supports the use of a pay-as-you-go model for an application designed for elasticity to reduce costs.

Reserved Instances & Dedicated Hosts Sections: The documentation contrasts this with Reserved Instances, which are for "applications with steady state or predictable usage," and Dedicated Hosts, which are physical servers that "can help you reduce costs by allowing you to use your existing server-bound software licenses." These use cases do not align with the scenario's primary goal of cost reduction through elasticity.

3. Microsoft Azure Documentation, "Virtual Machines pricing":

Pay as you go Section: Describes this model as ideal for "running applications with short-term or unpredictable workloads where there is no long-term commitment." This aligns with the scenario where an engineer wants to leverage the cloud's elasticity to match cost to actual usage, thus reducing waste.

Reserved Virtual Machine Instances Section: Explains that reservations are for workloads with "predictable, consistent traffic" and require a "one-year or three-year term," which is less flexible than pay-as-you-go.

4. Armbrust, M., et al. (2009). "Above the Clouds: A Berkeley View of Cloud Computing." University of California, Berkeley, Technical Report No. UCB/EECS-2009-28.

Section 3.1, Economic Advantages: The paper states, "Cloud Computing enables a pay-as-you-go model, where you pay only for what you use... An attraction of Cloud Computing is that computing resources can be rapidly provisioned and de-provisioned on a fine-grained basis... allowing clouds to offer an 'infinite' pool of resources in a pay-as-you-go manner." This academic source establishes the fundamental economic benefit of the pay-as-you-go model in leveraging elasticity, which is the core of the question.

Question 2

A systems administrator is provisioning VMs according to the following requirements:
- A VM instance needs to be present in at least two data centers.
- During replication, the application hosted on the VM tolerates a maximum latency of one second.
- When a VM is unavailable, failover must be immediate.
Which of the following replication methods will best meet these requirements?
Options
A: Snapshot
B: Transactional
C: Live
D: Point-in-time
Correct Answer:
Live
Explanation
The requirements for immediate failover and a maximum replication latency of one second necessitate a continuous, near-real-time data protection strategy. Live replication, often implemented as synchronous or near-synchronous replication, continuously transmits data changes from the primary VM to a replica in a secondary data center as they occur. This method ensures the replica is always in a consistent and up-to-date state, enabling an immediate and automated failover with a Recovery Point Objective (RPO) of near-zero. This directly meets the stringent availability and low-latency demands described in the scenario for mission-critical applications.
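
The key difference between the options is when the replica is brought up to date. The following is a minimal conceptual sketch in Python, not tied to any vendor's replication product:

```python
class Replica:
    """Toy stand-in for a VM replica in a second data center."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value


def live_write(primary, replica, key, value):
    # Live (synchronous) replication: the change is applied to the replica
    # before the write is considered committed, so the replica is always
    # current and failover can be immediate (RPO of near zero).
    primary[key] = value
    replica.apply(key, value)


def snapshot_copy(primary, replica):
    # Snapshot / point-in-time replication: the replica is refreshed only at
    # intervals; anything written after the last copy is lost on failover.
    replica.data = dict(primary)


primary = {}
replica = Replica()
live_write(primary, replica, "order-1001", "paid")
assert replica.data == primary  # replica is immediately consistent
```

With the snapshot approach, any writes made after the last copy are missing from the replica, which is why periodic methods cannot meet the immediate-failover requirement.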
Why Incorrect Options are Wrong

A. Snapshot: Snapshot replication is periodic, creating copies at discrete intervals. This method cannot meet the immediate failover or sub-second latency requirements due to inherent data loss (RPO) between snapshots.

B. Transactional: Transactional replication is a database-specific technology that replicates database transactions. It does not apply to the entire virtual machine state, including the operating system and application files.

D. Point-in-time: This is a general term for creating a copy of data as it existed at a specific moment, which includes snapshots. It is not a continuous process and cannot support immediate failover.

References

1. VMware, Inc. (2023). vSphere Storage Documentation, Administering vSphere Virtual Machine Storage, Chapter 8: Virtual Machine Storage Policies. VMware. In the section "Site disaster tolerance," the documentation explains that synchronous replication provides the highest level of availability with a Recovery Point Objective (RPO) of zero, which is essential for immediate failover scenarios. This aligns with the concept of "live" replication.

2. Kyriazis, D., et al. (2013). Disaster Recovery for Infrastructure-as-a-Service Cloud Systems: A Survey. ACM Computing Surveys, 46(1), Article 10. In Section 3.2, "Replication Techniques," the paper contrasts synchronous and asynchronous replication. It states, "Synchronous replication... offers a zero RPO... suitable for mission-critical applications with low tolerance for data loss." This supports the choice of a live/synchronous method for immediate failover. https://doi.org/10.1145/2522968.2522978

3. Microsoft Corporation. (2023). Azure Site Recovery documentation, About Site Recovery. Microsoft Docs. The documentation describes "continuous replication" for disaster recovery of VMs, which provides minimal RPOs. While specific RPO values vary, the principle of continuous or "live" data transfer is fundamental to achieving the low latency and immediate failover required.

Question 3

A company's content management system (CMS) service runs on an IaaS cluster on a public cloud. The CMS service is frequently targeted by a malicious threat actor using DDoS. Which of the following should a cloud engineer monitor to identify attacks?
Options
A: Network flow logs
B: Endpoint detection and response logs
C: Cloud provider event logs
D: Instance syslog
Correct Answer:
Network flow logs
Explanation
A Distributed Denial of Service (DDoS) attack is fundamentally a network-based attack designed to overwhelm a target with a massive volume of traffic from multiple sources. Network flow logs capture metadata about all IP traffic traversing a network interface, including source/destination IP addresses, ports, protocols, and the volume of packets/bytes. By monitoring and analyzing these logs, a cloud engineer can identify the characteristic signatures of a DDoS attack, such as an abnormally high volume of traffic from a large number of disparate IP addresses targeting the CMS service. This provides the necessary network-level visibility to detect the attack in its early stages.
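
As a rough illustration of what monitoring flow logs can look like, the sketch below aggregates traffic per source address from simplified flow-log-style records and flags a volumetric anomaly. The record layout and thresholds are assumptions; real flow logs (such as VPC Flow Logs) carry more fields.

```python
from collections import Counter

# Simplified flow-log records: (source IP, destination port, bytes transferred).
# The format is an assumption for illustration; real flow logs include more fields.
records = [
    ("203.0.113.10", 443, 1_200),
    ("198.51.100.7", 443, 900),
    ("203.0.113.10", 443, 1_100),
] + [(f"192.0.2.{i}", 443, 50_000) for i in range(1, 200)]  # burst from many sources

bytes_by_source = Counter()
for src, port, size in records:
    bytes_by_source[src] += size

total = sum(bytes_by_source.values())
unique_sources = len(bytes_by_source)

# Crude heuristics: a sudden surge in total volume combined with many distinct
# sources hitting the same service is a classic DDoS signature.
if total > 1_000_000 and unique_sources > 100:
    print(f"Possible DDoS: {unique_sources} sources sent {total:,} bytes")
```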
Why Incorrect Options are Wrong

B. Endpoint detection and response logs: EDR focuses on malicious activity on an endpoint (e.g., malware, unauthorized processes), not on analyzing incoming network traffic volume from distributed sources.

C. Cloud provider event logs: These logs (e.g., AWS CloudTrail, Azure Activity Log) track management plane API calls and user activity, not the data plane network traffic that constitutes a DDoS attack.

D. Instance syslog: This log contains operating system and application-level events from a single instance. It lacks the network-wide perspective needed to identify a distributed attack pattern.

References

1. Amazon Web Services (AWS) Documentation. VPC Flow Logs. Amazon states that a key use case for VPC Flow Logs is "Monitoring the traffic that is reaching your instance... For example, you can use flow logs to help you diagnose overly restrictive security group rules." This same data is used to identify anomalous traffic patterns indicative of a DDoS attack. Retrieved from: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html (See section: "Flow log basics").

2. Microsoft Azure Documentation. Azure DDoS Protection overview. Microsoft explains that its protection service works by "monitoring actual traffic utilization and constantly comparing it against the thresholds... When the traffic threshold is exceeded, DDoS mitigation is initiated automatically." This monitoring is based on network flow telemetry, the same data captured in flow logs. Retrieved from: https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-protection-overview (See section: "How DDoS Protection works").

3. Google Cloud Documentation. VPC Flow Logs overview. Google lists "Network monitoring" and "Network forensics" as primary use cases. For forensics, it states, "If an incident occurs, VPC Flow Logs can be used to determine... the traffic flow." This is essential for analyzing a DDoS incident. Retrieved from: https://cloud.google.com/vpc/docs/flow-logs (See section: "Use cases").

4. Carnegie Mellon University, Software Engineering Institute. Situational Awareness for Network Monitoring. In CERT/CC's guide to network monitoring, it emphasizes the importance of flow data (like NetFlow, the precursor to cloud flow logs) for "detecting and analyzing security events, such as denial-of-service (DoS) attacks." Retrieved from: https://resources.sei.cmu.edu/assetfiles/technicalnote/200400400114111.pdf (See Page 11, Section 3.2.2).

Question 4

A cloud engineer needs to integrate a new payment processor with an existing e-commerce website. Which of the following technologies is the best fit for this integration?
Options
A: RPC over SSL
B: Transactional SQL
C: REST API over HTTPS
D: Secure web socket
Correct Answer:
REST API over HTTPS
Explanation
A REST (Representational State Transfer) API (Application Programming Interface) is the industry-standard architectural style for integrating web services. For an e-commerce site to communicate with a payment processor, it needs a secure, scalable, and stateless method. REST APIs use standard HTTP methods (like POST for submitting payment data) and are designed for this type of client-server interaction. Encapsulating the communication within HTTPS (HTTP Secure) ensures that sensitive payment information is encrypted in transit, which is a critical security requirement for handling financial data. This combination provides a robust, secure, and widely supported solution for this integration task.
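
A minimal sketch of such an integration using Python's requests library is shown below; the endpoint URL, API key, and payload fields are hypothetical placeholders, not any real payment processor's API.

```python
import requests

# Hypothetical payment-processor endpoint and API key (placeholders only).
PAYMENT_API = "https://api.example-payments.com/v1/charges"
API_KEY = "sk_test_placeholder"

def charge_card(amount_cents: int, currency: str, token: str) -> dict:
    """Submit a charge over HTTPS using a stateless REST call (POST)."""
    response = requests.post(
        PAYMENT_API,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"amount": amount_cents, "currency": currency, "source": token},
        timeout=10,
    )
    response.raise_for_status()  # surface 4xx/5xx errors from the processor
    return response.json()

# Example usage (would run against the hypothetical endpoint above):
# result = charge_card(1999, "usd", "tok_visa_placeholder")
```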
Why Incorrect Options are Wrong

A. RPC over SSL: Remote Procedure Call (RPC) is an older paradigm that is often more tightly coupled and less flexible than REST for web-based integrations. While secure over SSL, it's not the modern standard.

B. Transactional SQL: SQL is a language for querying and managing databases, not an integration protocol. Directly exposing the database to an external payment processor would also be a major security vulnerability.

D. Secure web socket: Web sockets provide persistent, bidirectional communication channels, ideal for real-time applications like chat or live data feeds. This is unnecessary for a standard payment transaction, which is a simple request-response event.

References

1. Fielding, R. T. (2000). Architectural Styles and the Design of Network-based Software Architectures. Doctoral dissertation, University of California, Irvine. In Chapter 5, "Representational State Transfer (REST)," Fielding defines the principles of REST, highlighting its advantages for hypermedia systems like the World Wide Web, including scalability, simplicity, and portability, which are essential for e-commerce integrations. (Available at: https://www.ics.uci.edu/~fielding/pubs/dissertation/restarchstyle.htm)

2. Amazon Web Services (AWS) Documentation. "What is a RESTful API?". AWS, a major cloud provider, defines RESTful APIs as the standard for web-based communication. The documentation states, "REST determines how the API looks like. It stands for 'Representational State Transfer'. It is a set of rules that developers follow when they create their API... Most applications on the internet use REST APIs to communicate." This confirms its status as the best fit for web service integration. (Reference: aws.amazon.com/what-is/restful-api/)

3. Microsoft Azure Documentation. "What are APIs?". The official documentation describes how APIs enable communication between applications, with REST being the predominant architectural style for web APIs. It emphasizes the use of HTTP/HTTPS protocols for these interactions, aligning perfectly with the scenario. (Reference: azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-are-apis)

4. Google Cloud Documentation. "API design guide". Google's guide for building APIs for its cloud platform is based on REST principles. It details the use of standard HTTP methods and resource-oriented design, which is the foundation for modern integrations like payment processors. (Reference: cloud.google.com/apis/design)

Question 5

A company that has several branches worldwide needs to facilitate full access to a specific cloud resource to a branch in Spain. Other branches will have only read access. Which of the following is the best way to grant access to the branch in Spain?
Options
A: Set up MFA for the users working at the branch.
B: Create a network security group with required permissions for users in Spain.
C: Apply a rule on the WAF to allow only users in Spain access to the resource.
D: Implement an IPS/IDS to detect unauthorized users.
Correct Answer:
Create a network security group with required permissions for users in Spain.
Explanation
A network security group (NSG) or an equivalent cloud construct (e.g., AWS Security Group, GCP Firewall Rule) is the most appropriate tool for this scenario. NSGs act as a stateful virtual firewall at the network layer, controlling inbound and outbound traffic to resources. By creating a specific rule, an administrator can allow traffic from the known IP address range of the Spanish branch on the ports required for "full access." Concurrently, another rule with a lower priority can be set for all other source IPs, permitting access only on ports associated with "read-only" functions. This directly implements location-based access control as required.
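
To illustrate the rule logic an NSG enforces, the sketch below uses Python's standard ipaddress module to grant full access only to a source range representing the Spanish branch. The CIDR ranges and port sets are made-up examples.

```python
import ipaddress

# Assumed address ranges; in a real NSG these would be rule source prefixes.
SPAIN_BRANCH = ipaddress.ip_network("203.0.113.0/24")   # hypothetical branch egress range
FULL_ACCESS_PORTS = {22, 443, 3306}                     # example "full access" ports
READ_ONLY_PORTS = {443}                                 # example read-only port

def is_allowed(source_ip: str, dest_port: int) -> bool:
    """Mimic NSG evaluation: broader rights for the Spain branch, read-only otherwise."""
    addr = ipaddress.ip_address(source_ip)
    if addr in SPAIN_BRANCH:
        return dest_port in FULL_ACCESS_PORTS
    return dest_port in READ_ONLY_PORTS

print(is_allowed("203.0.113.25", 22))    # True  -> Spain branch, full access
print(is_allowed("198.51.100.14", 22))   # False -> other branch, write path blocked
print(is_allowed("198.51.100.14", 443))  # True  -> other branch, read path allowed
```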
Why Incorrect Options are Wrong

A. Set up MFA for the users working at the branch.

MFA is an authentication control that verifies a user's identity. It does not define or enforce permissions (authorization) like full versus read-only access.

C. Apply a rule on the WAF to allow only users in Spain access to the resource.

A Web Application Firewall (WAF) primarily protects against application-layer attacks (e.g., SQL injection). While it can use IP-based rules, an NSG is the more fundamental and appropriate tool for network-level access control.

D. Implement an IPS/IDS to detect unauthorized users.

Intrusion Detection/Prevention Systems (IDS/IPS) are threat detection and mitigation tools. They monitor for malicious activity, not for defining and enforcing standard access control policies.

References

1. Microsoft Azure Documentation. (2023). Network security groups. Microsoft Learn. In the "Security rules" section, it states, "A network security group contains security rules that allow or deny inbound network traffic... For each rule, you can specify source and destination, port, and protocol." This confirms the capability to create IP-based rules for specific access. Retrieved from Microsoft's official documentation.

2. Amazon Web Services (AWS) Documentation. (2023). Control traffic to resources using security groups. AWS Documentation. The documentation specifies, "A security group acts as a virtual firewall for your instance to control inbound and outbound traffic... you add rules to each security group that allow traffic to or from its associated instances." This supports using security groups for IP-based traffic control. Retrieved from AWS's official documentation.

3. National Institute of Standards and Technology (NIST). (June 2017). NIST Special Publication 800-63B: Digital Identity Guidelines, Authentication and Lifecycle Management. Section 5.1.1, "Memorized Secrets," and subsequent sections on authenticators describe MFA as a mechanism to "authenticate the subscriber to the CSP," confirming its role in identity verification, not authorization. (DOI: https://doi.org/10.6028/NIST.SP.800-63b)

4. Chandrasekaran, K. (2015). Essentials of Cloud Computing. CRC Press, Taylor & Francis Group. Chapter 10, "Cloud Security," distinguishes between network-level firewalls (like NSGs) for controlling access based on network parameters and application-level firewalls (WAFs) for inspecting application data. This academic source clarifies the distinct roles of these technologies.

Question 6

Which of the following network types allows the addition of new features through the use of network function virtualization?
Options
A: Local area network
B: Wide area network
C: Storage area network
D: Software-defined network
Correct Answer:
Software-defined network
Explanation
A Software-Defined Network (SDN) is an architecture that decouples the network control plane from the data forwarding plane, enabling the network to be programmatically controlled. This programmability is the key mechanism that allows for the dynamic addition of new features. Network Function Virtualization (NFV) complements SDN by virtualizing network functions (e.g., firewalls, routers, load balancers) so they can run as software on standard servers. An SDN architecture provides the ideal framework to manage, orchestrate, and chain these virtualized network functions, allowing new features to be deployed rapidly through software rather than by installing new physical hardware.
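
Conceptually, adding a feature through NFV means inserting another software function into the traffic path under programmatic control. The following toy sketch abstracts that idea; it is not any real controller's API.

```python
from typing import Callable, Dict, List

Packet = Dict[str, object]
VNF = Callable[[Packet], Packet]   # a virtualized network function is just software

def firewall(pkt: Packet) -> Packet:
    pkt["allowed"] = pkt.get("port") in {80, 443}
    return pkt

def load_balancer(pkt: Packet) -> Packet:
    pkt["backend"] = f"web-{hash(pkt.get('src', '')) % 3}"
    return pkt

class SdnController:
    """Toy control plane: the data path is reprogrammed by editing a list."""
    def __init__(self):
        self.service_chain: List[VNF] = []

    def add_function(self, vnf: VNF) -> None:
        # New network features are added in software, no new hardware required.
        self.service_chain.append(vnf)

    def forward(self, pkt: Packet) -> Packet:
        for vnf in self.service_chain:
            pkt = vnf(pkt)
        return pkt

controller = SdnController()
controller.add_function(firewall)
controller.add_function(load_balancer)   # a "new feature" deployed at runtime
print(controller.forward({"src": "10.0.0.5", "port": 443}))
```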
Why Incorrect Options are Wrong

A. Local area network: This term defines a network by its limited geographical scope (e.g., an office), not by an architecture that inherently supports adding virtualized functions.

B. Wide area network: This term defines a network by its broad geographical scope (e.g., across cities), not by its design for programmatic control and feature addition.

C. Storage area network: This is a specialized network dedicated to providing block-level access to storage devices; it is not designed for general-purpose network services virtualized via NFV.

References

1. European Telecommunications Standards Institute (ETSI). (2014). Network Functions Virtualisation (NFV); Architectural Framework (ETSI GS NFV 002 V1.2.1). Section 4.2, "Relationship between NFV and Software-Defined Networking (SDN)," explains that SDN and NFV are complementary, with SDN being a potential technology to control and route traffic between virtualized network functions.

2. Nunes, B. A. A., Mendonca, M., Nguyen, X. N., Obraczka, K., & Turletti, T. (2014). A survey of software-defined networking: Past, present, and future of programmable networks. IEEE Communications Surveys & Tutorials, 16(3), 1617-1634. Section IV.A, "Network Virtualization," discusses how SDN's abstraction enables the creation of virtual networks and the deployment of network functions. https://doi.org/10.1109/SURV.2014.012214.00001

3. Kreutz, D., Ramos, F. M., Veríssimo, P. E., Rothenberg, C. E., Azodolmolky, S., & Uhlig, S. (2015). Software-defined networking: A comprehensive survey. Proceedings of the IEEE, 103(1), 14-76. Section V, "Use Cases and Opportunities," details how the SDN architecture facilitates the deployment of middleboxes and other network functions as software services. https://doi.org/10.1109/JPROC.2014.2371999

Question 7

Which of the following migration types is best to use when migrating a highly available application, which is normally hosted on a local VM cluster, for usage with an external user population?
Options
A: Cloud to on-premises
B: Cloud to cloud
C: On-premises to cloud
D: On-premises to on-premises
Correct Answer:
On-premises to cloud
Explanation
The scenario describes an application currently hosted on a "local VM cluster," which is an on-premises environment. The goal is to migrate it to better serve an "external user population." Migrating from an on-premises data center to a public or hybrid cloud environment is the standard approach to achieve greater scalability, high availability, and global accessibility for external users. This process is defined as an on-premises-to-cloud migration, often referred to as Physical-to-Cloud (P2C) or Virtual-to-Cloud (V2C). The cloud's inherent internet-facing infrastructure and distributed nature make it the ideal target for this requirement.
Why Incorrect Options are Wrong

A. Cloud to on-premises: This describes repatriation, moving an application from a cloud provider back to a local data center, which is the opposite of the described scenario.

B. Cloud to cloud: This involves migrating an application between two different cloud environments. The application in the question originates from an on-premises location, not a cloud.

D. On-premises to on-premises: This describes moving an application between two local data centers. This migration type does not inherently provide the global reach and scalability needed for external users.

References

1. National Institute of Standards and Technology (NIST). (2011). NIST Cloud Computing Reference Architecture (NIST Special Publication 500-292).

Section 5.2, Cloud Migration, Page 23: The document defines cloud migration as "the process of moving an organization's data and applications from the organization's existing on-premise data center to the cloud infrastructure." This directly aligns with the scenario of moving from a local cluster to a platform suitable for external users.

2. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2009). Above the Clouds: A Berkeley View of Cloud Computing (Technical Report No. UCB/EECS-2009-28). University of California, Berkeley.

Section 3, Classes of Utility Computing: The paper discusses the economic and technical advantages of moving applications to the cloud, particularly for services that need to scale to serve a large, variable user base, which is characteristic of an "external user population." This supports the rationale for an on-premises-to-cloud migration.

3. Microsoft Azure Documentation. (2023). What is the Cloud Adoption Framework?

"Define strategy" and "Plan" sections: The framework outlines the motivations for moving to the cloud, including "reaching new customers" and "expanding to new geographies." It explicitly details the process of migrating workloads from on-premises environments to the Azure cloud to achieve these goals. This vendor documentation validates the on-premises-to-cloud path for serving external populations.

Question 8

A company's engineering department is conducting a month-long test on the scalability of in-house-developed software that requires a cluster of 100 or more servers. Which of the following models is the best to use?
Options
A: PaaS
B: SaaS
C: DBaaS
D: IaaS
Correct Answer:
IaaS
Explanation
Infrastructure as a Service (IaaS) is the most appropriate model as it provides fundamental computing resources, including virtual servers, networking, and storage. This gives the engineering department the maximum level of control needed to provision a large cluster of servers (100+), install custom operating systems and dependencies, and deploy their in-house software for a comprehensive scalability test. The on-demand, pay-as-you-go nature of IaaS is ideal for a temporary, month-long project, allowing the company to access massive computing power without the capital expense of purchasing physical hardware.
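
As a sketch of how such a temporary cluster could be provisioned programmatically on an IaaS platform, assuming the AWS SDK for Python (boto3) and placeholder values for the image, region, and instance type:

```python
import boto3

# Placeholder values; a real run needs a valid AMI ID, region, and permissions.
AMI_ID = "ami-0123456789abcdef0"
INSTANCE_TYPE = "c5.large"
CLUSTER_SIZE = 100

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch the test cluster on demand for the month-long scalability test.
response = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType=INSTANCE_TYPE,
    MinCount=CLUSTER_SIZE,
    MaxCount=CLUSTER_SIZE,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "project", "Value": "scalability-test"}],
    }],
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]

# When the test ends, tear everything down so billing stops:
# ec2.terminate_instances(InstanceIds=instance_ids)
```

Terminating the instances at the end of the month stops the charges, which is the cost behavior the scenario depends on.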
Why Incorrect Options are Wrong

A. PaaS abstracts the underlying server infrastructure, which would limit the team's ability to control the environment and install the specific software stack required for their test.

B. SaaS provides ready-to-use software applications, not the underlying infrastructure needed to test a company's own custom-developed software.

C. DBaaS is a specialized service for managing databases. It does not provide the general-purpose server cluster needed to run the application itself.

References

1. Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing (NIST Special Publication 800-145). National Institute of Standards and Technology.

Page 2, Section "Infrastructure as a Service (IaaS)": "The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications." This directly supports the need to deploy in-house software on a large number of servers.

2. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., & Zaharia, M. (2009). Above the Clouds: A Berkeley View of Cloud Computing (Technical Report No. UCB/EECS-2009-28). University of California, Berkeley.

Page 5, Section 3.1: Discusses how IaaS enables "pay-as-you-go" access to infrastructure, which is ideal for short-term, large-scale needs like the month-long test described, a use case often termed "batch processing" or "elastic computing."

3. Microsoft Azure Documentation. (n.d.). What is Infrastructure as a Service (IaaS)?

Section "Common IaaS business scenarios": "Test and development. Teams can quickly set up and dismantle test and development environments, bringing new applications to market faster. IaaS makes it quick and economical to scale up dev-test environments up and down." This explicitly validates using IaaS for temporary, large-scale testing.

Question 9

An organization wants to ensure its data is protected in the event of a natural disaster. To support this effort, the company has rented a colocation space in another part of the country. Which of the following disaster recovery practices can be used to best protect the data?
Options
A: On-site
B: Replication
C: Retention
D: Off-site
Correct Answer:
Off-site
Explanation
The core of the question is protecting data from a natural disaster by using a geographically separate facility. This practice is known as off-site disaster recovery. By renting a colocation space in another part of the country, the organization establishes a secondary location that is unlikely to be affected by the same disaster that impacts the primary site. This geographic separation is the fundamental principle of an off-site strategy, ensuring business continuity and data availability in the event of a regional catastrophe.
Why Incorrect Options are Wrong

A. On-site: This practice involves keeping data backups or redundant systems at the same physical location as the primary data, offering no protection against a site-wide disaster like a fire or flood.

B. Replication: This is the process of copying data. While replication is a mechanism used to send data to an off-site location, "off-site" is the specific disaster recovery practice described in the scenario.

C. Retention: This refers to policies that dictate how long data is stored. Data retention is unrelated to the physical location of data for disaster recovery purposes.

References

1. National Institute of Standards and Technology (NIST) Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Section 4.3.2, "Alternate Storage Site," states: "An alternate storage site is used for storage of backup media... The site should be geographically separated from the primary site so as not to be susceptible to the same hazards." This directly supports the concept of using a geographically distant location (off-site) for disaster protection.

2. Amazon Web Services (AWS), Disaster Recovery of Workloads on AWS: Recovery in the Cloud (July 2021). Page 6, in the section "Backup and Restore," discusses storing backups in a separate AWS Region. It states, "By replicating your data to another Region, you can protect your data in the unlikely event of a regional disruption." This exemplifies the off-site practice in a cloud context.

3. Microsoft Azure Documentation, Disaster recovery and high availability for Azure applications. In the section "Azure services that provide disaster recovery," it describes Azure Site Recovery, which "replicates workloads to a secondary location." The use of a secondary, geographically distinct location is the definition of an off-site strategy.

Question 10

Which of the following do developers use to keep track of changes made during software development projects?
Options
A: Code drifting
B: Code control
C: Code testing
D: Code versioning
Correct Answer:
Code versioning
Explanation
Code versioning, also known as version control or source control, is the standard practice and system used by developers to manage and track changes to source code and other project files over time. It creates a historical record of all modifications, enabling developers to revert to previous states, compare changes, and collaborate on a shared codebase without overwriting each other's work. Tools like Git, Subversion (SVN), and Mercurial are common implementations of code versioning.
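
A minimal sketch of that workflow, driving the git command line from Python (it assumes git is installed; the file names and commit messages are arbitrary examples):

```python
import subprocess
from pathlib import Path

repo = Path("demo-project")
repo.mkdir(exist_ok=True)

def git(*args: str) -> None:
    """Run a git command inside the demo repository."""
    subprocess.run(["git", *args], cwd=repo, check=True)

git("init")                                     # start tracking changes
git("config", "user.email", "dev@example.com")  # local identity for the demo commits
git("config", "user.name", "Demo Dev")

(repo / "app.py").write_text("print('v1')\n")
git("add", "app.py")
git("commit", "-m", "Initial version")          # first recorded change

(repo / "app.py").write_text("print('v2')\n")
git("add", "app.py")
git("commit", "-m", "Update greeting")          # every change is tracked and reversible

git("log", "--oneline")                         # show the recorded history
```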
Why Incorrect Options are Wrong

A. Code drifting: This term, more commonly known as configuration drift, describes the phenomenon where infrastructure configurations diverge from their intended baseline, not the tracking of software code changes.

B. Code control: This is a generic and non-standard term. While versioning is a form of "controlling" code, "code versioning" is the precise, industry-accepted terminology for the practice in question.

C. Code testing: This is the process of evaluating software functionality to identify defects. It is a distinct phase in the development lifecycle and does not involve tracking historical changes to the code.

References

1. CompTIA Cloud+ Certification Exam Objectives (CV0-004). (2023). CompTIA. Section 2.4, "Given a scenario, use appropriate tools to deploy cloud services," explicitly lists "Version control" as a key tool for deployment and automation.

2. Parr, T. (2012). The Definitive ANTLR 4 Reference. The Pragmatic Bookshelf. In the context of software development best practices, the text discusses the necessity of source control systems: "You should also be using a source code control system such as Perforce, Subversion, or Git to manage your project files." (Chapter 1, Section: Building ANTLR, p. 10). This highlights versioning as the method for managing project files.

3. MIT OpenCourseWare. (2016). 6.005 Software Construction, Spring 2016. Massachusetts Institute of Technology. In "Reading 1: Static Checking," the course material introduces version control as a fundamental tool for managing software projects: "Version control is a system that keeps records of your changes."

4. AWS Documentation. (n.d.). What is Version Control? Amazon Web Services. Retrieved from https://aws.amazon.com/devops/version-control/. The official documentation defines the practice: "Version control, also known as source control, is the practice of tracking and managing changes to software code."
