Study Smarter for the CV0-004 Exam with Our Free and Reliable CV0-004 Exam Questions – Updated for 2025.
At Cert Empire, we are focused on delivering the most accurate and up-to-date exam questions for students preparing for the CompTIA CV0-004 Exam. To make preparation easier, we've made parts of our CV0-004 exam resources free for everyone. You can practice as much as you want with our free CV0-004 practice test.
Question 1
A. Resource metering: This is the process of measuring resource consumption. While it enables the pay-as-you-go model, it is not the cost-saving strategy itself.
B. Reserved resources: This model offers discounts for a long-term commitment (e.g., 1-3 years) and is best suited for stable, predictable workloads, not necessarily for leveraging elasticity to reduce costs.
C. Dedicated host: This provides a physical server for a single tenant. It is the most expensive option and is typically used for compliance or software licensing, directly contradicting the goal of reducing expenses.
---
1. National Institute of Standards and Technology (NIST) Special Publication 800-145, "The NIST Definition of Cloud Computing":
Section 2, Page 2: Defines "On-demand self-service" and "Measured service" as essential characteristics of cloud computing. The pay-as-you-go model is a direct implementation of these principles, allowing consumers to provision resources as needed and pay only for what they use. The stateless application in the scenario is perfectly suited to leverage this on-demand nature for cost efficiency.
2. Amazon Web Services (AWS) Documentation, "Amazon EC2 Pricing":
On-Demand Pricing Section: "With On-Demand instances, you pay for compute capacity by the hour or the second with no long-term commitments... This frees you from the costs and complexities of planning, purchasing, and maintaining hardware... [It is recommended for] applications with short-term, spiky, or unpredictable workloads that cannot be interrupted." This directly supports the use of a pay-as-you-go model for an application designed for elasticity to reduce costs.
Reserved Instances & Dedicated Hosts Sections: The documentation contrasts this with Reserved Instances, which are for "applications with steady state or predictable usage," and Dedicated Hosts, which are physical servers that "can help you reduce costs by allowing you to use your existing server-bound software licenses." These use cases do not align with the scenario's primary goal of cost reduction through elasticity.
3. Microsoft Azure Documentation, "Virtual Machines pricing":
Pay as you go Section: Describes this model as ideal for "running applications with short-term or unpredictable workloads where there is no long-term commitment." This aligns with the scenario where an engineer wants to leverage the cloud's elasticity to match cost to actual usage, thus reducing waste.
Reserved Virtual Machine Instances Section: Explains that reservations are for workloads with "predictable, consistent traffic" and require a "one-year or three-year term," which is less flexible than pay-as-you-go.
4. Armbrust, M., et al. (2009). "Above the Clouds: A Berkeley View of Cloud Computing." University of California, Berkeley, Technical Report No. UCB/EECS-2009-28.
Section 3.1, Economic Advantages: The paper states, "Cloud Computing enables a pay-as-you-go model, where you pay only for what you use... An attraction of Cloud Computing is that computing resources can be rapidly provisioned and de-provisioned on a fine-grained basis... allowing clouds to offer an 'infinite' pool of resources in a pay-as-you-go manner." This academic source establishes the fundamental economic benefit of the pay-as-you-go model in leveraging elasticity, which is the core of the question.
Question 2
A. Snapshot: Snapshot replication is periodic, creating copies at discrete intervals. This method cannot meet the immediate failover or sub-second latency requirements due to inherent data loss (RPO) between snapshots.
B. Transactional: Transactional replication is a database-specific technology that replicates database transactions. It does not apply to the entire virtual machine state, including the operating system and application files.
D. Point-in-time: This is a general term for creating a copy of data as it existed at a specific moment, which includes snapshots. It is not a continuous process and cannot support immediate failover.
1. VMware, Inc. (2023). vSphere Storage Documentation, Administering vSphere Virtual Machine Storage, Chapter 8: Virtual Machine Storage Policies. VMware. In the section "Site disaster tolerance," the documentation explains that synchronous replication provides the highest level of availability with a Recovery Point Objective (RPO) of zero, which is essential for immediate failover scenarios. This aligns with the concept of "live" replication.
2. Kyriazis, D., et al. (2013). Disaster Recovery for Infrastructure-as-a-Service Cloud Systems: A Survey. ACM Computing Surveys, 46(1), Article 10. In Section 3.2, "Replication Techniques," the paper contrasts synchronous and asynchronous replication. It states, "Synchronous replication... offers a zero RPO... suitable for mission-critical applications with low tolerance for data loss." This supports the choice of a live/synchronous method for immediate failover. https://doi.org/10.1145/2522968.2522978
3. Microsoft Corporation. (2023). Azure Site Recovery documentation, About Site Recovery. Microsoft Docs. The documentation describes "continuous replication" for disaster recovery of VMs, which provides minimal RPOs. While specific RPO values vary, the principle of continuous or "live" data transfer is fundamental to achieving the low latency and immediate failover required.
Question 3
B. Endpoint detection and response logs: EDR focuses on malicious activity on an endpoint (e.g., malware, unauthorized processes), not on analyzing incoming network traffic volume from distributed sources.
C. Cloud provider event logs: These logs (e.g., AWS CloudTrail, Azure Activity Log) track management plane API calls and user activity, not the data plane network traffic that constitutes a DDoS attack.
D. Instance syslog: This log contains operating system and application-level events from a single instance. It lacks the network-wide perspective needed to identify a distributed attack pattern.
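To illustrate the kind of analysis flow logs enable, the sketch below (Python) aggregates simplified flow-log-style records by destination and counts distinct source IPs; the field layout, addresses, and thresholds are made-up examples rather than any provider's actual log format.

from collections import Counter

# Simplified flow-log-style records: (source_ip, destination_ip, bytes).
records = [
    ("198.51.100.7", "10.0.1.20", 1200),
    ("203.0.113.14", "10.0.1.20", 900),
    ("192.0.2.33", "10.0.1.20", 1500),
    ("198.51.100.8", "10.0.1.20", 1100),
]

requests_per_target = Counter(dst for _, dst, _ in records)
distinct_sources = {src for src, dst, _ in records if dst == "10.0.1.20"}

# Many distinct sources plus a sharp rise in request volume toward one target
# is the pattern a DDoS investigation looks for in flow logs.
print(requests_per_target["10.0.1.20"], "requests from", len(distinct_sources), "sources")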
1. Amazon Web Services (AWS) Documentation. VPC Flow Logs. Amazon states that a key use case for VPC Flow Logs is "Monitoring the traffic that is reaching your instance... For example, you can use flow logs to help you diagnose overly restrictive security group rules." This same data is used to identify anomalous traffic patterns indicative of a DDoS attack. Retrieved from: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html (See section: "Flow log basics").
2. Microsoft Azure Documentation. Azure DDoS Protection overview. Microsoft explains that its protection service works by "monitoring actual traffic utilization and constantly comparing it against the thresholds... When the traffic threshold is exceeded, DDoS mitigation is initiated automatically." This monitoring is based on network flow telemetry, the same data captured in flow logs. Retrieved from: https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-protection-overview (See section: "How DDoS Protection works").
3. Google Cloud Documentation. VPC Flow Logs overview. Google lists "Network monitoring" and "Network forensics" as primary use cases. For forensics, it states, "If an incident occurs, VPC Flow Logs can be used to determine... the traffic flow." This is essential for analyzing a DDoS incident. Retrieved from: https://cloud.google.com/vpc/docs/flow-logs (See section: "Use cases").
4. Carnegie Mellon University, Software Engineering Institute. Situational Awareness for Network Monitoring. In CERT/CC's guide to network monitoring, it emphasizes the importance of flow data (like NetFlow, the precursor to cloud flow logs) for "detecting and analyzing security events, such as denial-of-service (DoS) attacks." Retrieved from: https://resources.sei.cmu.edu/assetfiles/technicalnote/200400400114111.pdf (See Page 11, Section 3.2.2).
Question 4
A. RPC over SSL: Remote Procedure Call (RPC) is an older paradigm that is often more tightly coupled and less flexible than REST for web-based integrations. While secure over SSL, it's not the modern standard.
B. Transactional SQL: This is incorrect. SQL is a language for querying databases. Directly exposing a database to an external payment processor via SQL would be a major security vulnerability and is not an integration protocol.
D. Secure web socket: Web sockets provide persistent, bidirectional communication channels, ideal for real-time applications like chat or live data feeds. This is unnecessary for a standard payment transaction, which is a simple request-response event.
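As a rough illustration of the correct approach, the Python sketch below sends a RESTful request over HTTPS; the endpoint URL, payload fields, and token are placeholders, not a real payment processor's API.

# Minimal sketch of a RESTful integration over HTTPS. The endpoint URL,
# fields, and API key are placeholders, not any real provider's API.
# Requires the third-party requests package.
import requests

response = requests.post(
    "https://payments.example.com/v1/charges",   # hypothetical endpoint
    json={"amount": 1999, "currency": "USD", "source": "tok_test"},
    headers={"Authorization": "Bearer <api-key>"},
    timeout=10,
)
response.raise_for_status()
print(response.json())   # simple request/response pattern typical of payment calls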
1. Fielding, R. T. (2000). Architectural Styles and the Design of Network-based Software Architectures. Doctoral dissertation, University of California, Irvine. In Chapter 5, "Representational State Transfer (REST)," Fielding defines the principles of REST, highlighting its advantages for hypermedia systems like the World Wide Web, including scalability, simplicity, and portability, which are essential for e-commerce integrations. (Available at: https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm)
2. Amazon Web Services (AWS) Documentation. "What is a RESTful API?". AWS, a major cloud provider, defines RESTful APIs as the standard for web-based communication. The documentation states, "REST determines how the API looks like. It stands for 'Representational State Transfer'. It is a set of rules that developers follow when they create their API... Most applications on the internet use REST APIs to communicate." This confirms its status as the best fit for web service integration. (Reference: aws.amazon.com/what-is/restful-api/)
3. Microsoft Azure Documentation. "What are APIs?". The official documentation describes how APIs enable communication between applications, with REST being the predominant architectural style for web APIs. It emphasizes the use of HTTP/HTTPS protocols for these interactions, aligning perfectly with the scenario. (Reference: azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-are-apis)
4. Google Cloud Documentation. "API design guide". Google's guide for building APIs for its cloud platform is based on REST principles. It details the use of standard HTTP methods and resource-oriented design, which is the foundation for modern integrations like payment processors. (Reference: cloud.google.com/apis/design)
Question 5
A. Set up MFA for the users working at the branch.
MFA is an authentication control that verifies a user's identity. It does not define or enforce permissions (authorization) like full versus read-only access.
C. Apply a rule on the WAF to allow only users in Spain access to the resource.
A Web Application Firewall (WAF) primarily protects against application-layer attacks (e.g., SQL injection). While it can use IP-based rules, an NSG is the more fundamental and appropriate tool for network-level access control.
D. Implement an IPS/IDS to detect unauthorized users.
Intrusion Detection/Prevention Systems (IDS/IPS) are threat detection and mitigation tools. They monitor for malicious activity, not for defining and enforcing standard access control policies.
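The snippet below is a minimal Python illustration of the source-IP logic an NSG rule expresses declaratively (full access for the branch range, read-only otherwise); the CIDR block is a made-up example.

# Sketch of the source-IP check an NSG rule performs. The branch-office
# CIDR is a hypothetical example range, not a real allocation.
from ipaddress import ip_address, ip_network

BRANCH_OFFICE = ip_network("203.0.113.0/24")   # hypothetical Spain branch range

def rule_for(source_ip: str) -> str:
    # Full access for the branch office, read-only access for everyone else.
    if ip_address(source_ip) in BRANCH_OFFICE:
        return "allow-full-access"
    return "allow-read-only"

print(rule_for("203.0.113.25"))   # allow-full-access
print(rule_for("198.51.100.9"))   # allow-read-only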
1. Microsoft Azure Documentation. (2023). Network security groups. Microsoft Learn. In the "Security rules" section, it states, "A network security group contains security rules that allow or deny inbound network traffic... For each rule, you can specify source and destination, port, and protocol." This confirms the capability to create IP-based rules for specific access. Retrieved from Microsoft's official documentation.
2. Amazon Web Services (AWS) Documentation. (2023). Control traffic to resources using security groups. AWS Documentation. The documentation specifies, "A security group acts as a virtual firewall for your instance to control inbound and outbound traffic... you add rules to each security group that allow traffic to or from its associated instances." This supports using security groups for IP-based traffic control. Retrieved from AWS's official documentation.
3. National Institute of Standards and Technology (NIST). (June 2017). NIST Special Publication 800-63B: Digital Identity Guidelines, Authentication and Lifecycle Management. Section 5.1.1, "Memorized Secrets," and subsequent sections on authenticators describe MFA as a mechanism to "authenticate the subscriber to the CSP," confirming its role in identity verification, not authorization. (DOI: https://doi.org/10.6028/NIST.SP.800-63b)
4. Chandrasekaran, K. (2015). Essentials of Cloud Computing. CRC Press, Taylor & Francis Group. Chapter 10, "Cloud Security," distinguishes between network-level firewalls (like NSGs) for controlling access based on network parameters and application-level firewalls (WAFs) for inspecting application data. This academic source clarifies the distinct roles of these technologies.
Question 6
A. Local area network: This term defines a network by its limited geographical scope (e.g., an office), not by an architecture that inherently supports adding virtualized functions.
B. Wide area network: This term defines a network by its broad geographical scope (e.g., across cities), not by its design for programmatic control and feature addition.
C. Storage area network: This is a specialized network dedicated to providing block-level access to storage devices; it is not designed for general-purpose network services virtualized via NFV.
1. European Telecommunications Standards Institute (ETSI). (2014). Network Functions Virtualisation (NFV); Architectural Framework (ETSI GS NFV 002 V1.2.1). Section 4.2, "Relationship between NFV and Software-Defined Networking (SDN)," explains that SDN and NFV are complementary, with SDN being a potential technology to control and route traffic between virtualized network functions.
2. Nunes, B. A. A., Mendonca, M., Nguyen, X. N., Obraczka, K., & Turletti, T. (2014). A survey of software-defined networking: Past, present, and future of programmable networks. IEEE Communications Surveys & Tutorials, 16(3), 1617-1634. Section IV.A, "Network Virtualization," discusses how SDN's abstraction enables the creation of virtual networks and the deployment of network functions. https://doi.org/10.1109/SURV.2014.012214.00001
3. Kreutz, D., Ramos, F. M., Veríssimo, P. E., Rothenberg, C. E., Azodolmolky, S., & Uhlig, S. (2015). Software-defined networking: A comprehensive survey. Proceedings of the IEEE, 103(1), 14-76. Section V, "Use Cases and Opportunities," details how the SDN architecture facilitates the deployment of middleboxes and other network functions as software services. https://doi.org/10.1109/JPROC.2014.2371999
Question 7
A. Cloud to on-premises: This describes repatriation, moving an application from a cloud provider back to a local data center, which is the opposite of the described scenario.
B. Cloud to cloud: This involves migrating an application between two different cloud environments. The application in the question originates from an on-premises location, not a cloud.
D. On-premises to on-premises: This describes moving an application between two local data centers. This migration type does not inherently provide the global reach and scalability needed for external users.
1. National Institute of Standards and Technology (NIST). (2011). NIST Cloud Computing Reference Architecture (NIST Special Publication 500-292).
Section 5.2, Cloud Migration, Page 23: The document defines cloud migration as "the process of moving an organization's data and applications from the organization's existing on-premise data center to the cloud infrastructure." This directly aligns with the scenario of moving from a local cluster to a platform suitable for external users.
2. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2009). Above the Clouds: A Berkeley View of Cloud Computing (Technical Report No. UCB/EECS-2009-28). University of California, Berkeley.
Section 3, Classes of Utility Computing: The paper discusses the economic and technical advantages of moving applications to the cloud, particularly for services that need to scale to serve a large, variable user base, which is characteristic of an "external user population." This supports the rationale for an on-premises-to-cloud migration.
3. Microsoft Azure Documentation. (2023). What is the Cloud Adoption Framework?
"Define strategy" and "Plan" sections: The framework outlines the motivations for moving to the cloud, including "reaching new customers" and "expanding to new geographies." It explicitly details the process of migrating workloads from on-premises environments to the Azure cloud to achieve these goals. This vendor documentation validates the on-premises-to-cloud path for serving external populations.
Question 8
A. PaaS abstracts the underlying server infrastructure, which would limit the team's ability to control the environment and install the specific software stack required for their test.
B. SaaS provides ready-to-use software applications, not the underlying infrastructure needed to test a company's own custom-developed software.
C. DBaaS is a specialized service for managing databases. It does not provide the general-purpose server cluster needed to run the application itself.
1. Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing (NIST Special Publication 800-145). National Institute of Standards and Technology.
Page 2, Section "Infrastructure as a Service (IaaS)": "The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications." This directly supports the need to deploy in-house software on a large number of servers.
2. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., & Zaharia, M. (2009). Above the Clouds: A Berkeley View of Cloud Computing (Technical Report No. UCB/EECS-2009-28). University of California, Berkeley.
Page 5, Section 3.1: Discusses how IaaS enables "pay-as-you-go" access to infrastructure, which is ideal for short-term, large-scale needs like the month-long test described, a use case often termed "batch processing" or "elastic computing."
3. Microsoft Azure Documentation. (n.d.). What is Infrastructure as a Service (IaaS)?
Section "Common IaaS business scenarios": "Test and development. Teams can quickly set up and dismantle test and development environments, bringing new applications to market faster. IaaS makes it quick and economical to scale up dev-test environments up and down." This explicitly validates using IaaS for temporary, large-scale testing.
Question 9
A. On-site: This practice involves keeping data backups or redundant systems at the same physical location as the primary data, offering no protection against a site-wide disaster like a fire or flood.
B. Replication: This is the process of copying data. While replication is a mechanism used to send data to an off-site location, "off-site" is the specific disaster recovery practice described in the scenario.
C. Retention: This refers to policies that dictate how long data is stored. Data retention is unrelated to the physical location of data for disaster recovery purposes.
1. National Institute of Standards and Technology (NIST) Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems. Section 4.3.2, "Alternate Storage Site," states: "An alternate storage site is used for storage of backup media... The site should be geographically separated from the primary site so as not to be susceptible to the same hazards." This directly supports the concept of using a geographically distant location (off-site) for disaster protection.
2. Amazon Web Services (AWS), Disaster Recovery of Workloads on AWS: Recovery in the Cloud (July 2021). Page 6, in the section "Backup and Restore," discusses storing backups in a separate AWS Region. It states, "By replicating your data to another Region, you can protect your data in the unlikely event of a regional disruption." This exemplifies the off-site practice in a cloud context.
3. Microsoft Azure Documentation, Disaster recovery and high availability for Azure applications. In the section "Azure services that provide disaster recovery," it describes Azure Site Recovery, which "replicates workloads to a secondary location." The use of a secondary, geographically distinct location is the definition of an off-site strategy.
Question 10
A. Code drifting: This term, more commonly known as configuration drift, describes the phenomenon where infrastructure configurations diverge from their intended baseline, not the tracking of software code changes.
B. Code control: This is a generic and non-standard term. While versioning is a form of "controlling" code, "code versioning" is the precise, industry-accepted terminology for the practice in question.
C. Code testing: This is the process of evaluating software functionality to identify defects. It is a distinct phase in the development lifecycle and does not involve tracking historical changes to the code.
1. CompTIA Cloud+ Certification Exam Objectives (CV0-004). (2023). CompTIA. Section 2.4, "Given a scenario, use appropriate tools to deploy cloud services," explicitly lists "Version control" as a key tool for deployment and automation.
2. Parr, T. (2012). The Definitive ANTLR 4 Reference. The Pragmatic Bookshelf. In the context of software development best practices, the text discusses the necessity of source control systems: "You should also be using a source code control system such as Perforce, Subversion, or Git to manage your project files." (Chapter 1, Section: Building ANTLR, p. 10). This highlights versioning as the method for managing project files.
3. MIT OpenCourseWare. (2016). 6.005 Software Construction, Spring 2016. Massachusetts Institute of Technology. In "Reading 1: Static Checking," the course material introduces version control as a fundamental tool for managing software projects: "Version control is a system that keeps records of your changes."
4. AWS Documentation. (n.d.). What is Version Control? Amazon Web Services. Retrieved from https://aws.amazon.com/devops/version-control/. The official documentation defines the practice: "Version control, also known as source control, is the practice of tracking and managing changes to software code."
Question 11
A. Configuring page file/swap metrics: This only tracks the usage of virtual memory on disk, which is an indicator of memory pressure, not a direct measurement of memory usage by individual processes.
C. Scheduling a script to collect the data: This is a custom, non-native solution. While possible, it requires manual development and maintenance and is less integrated and reliable than using the purpose-built agent provided by the cloud platform.
D. Enabling memory monitoring in the VM configuration: This option typically enables hypervisor-level memory metrics, which report the total memory consumed by the VM as a whole, but lack the visibility to report on individual processes running inside the guest OS.
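For context, the short Python sketch below (using the third-party psutil package as a stand-in for a cloud monitoring agent) shows the kind of per-process memory data that can only be gathered from inside the guest OS, not from hypervisor-level metrics.

# Sketch of in-guest, per-process memory collection, the kind of data a
# provider's monitoring agent gathers and hypervisor metrics cannot see.
# Requires the third-party psutil package.
import psutil

for proc in psutil.process_iter(["pid", "name", "memory_info"]):
    rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
    print(f'{proc.info["pid"]:>7} {proc.info["name"]:<25} {rss_mb:8.1f} MiB')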
1. Amazon Web Services (AWS) Documentation. The CloudWatch agent is required to collect guest OS-level metrics. "By default, EC2 instances send hypervisor-visible metrics to CloudWatch... To collect metrics from the operating system or from applications, you must install the CloudWatch agent."
Source: AWS Documentation, "The metrics that the CloudWatch agent collects," Section: "Predefined metric sets for the CloudWatch agent."
2. Microsoft Azure Documentation. The Azure Monitor agent is used to collect in-depth data from the guest operating system of virtual machines. "Use the Azure Monitor agent to collect guest operating system data from Azure... virtual machines... It collects data from the guest operating system and delivers it to Azure Monitor."
Source: Microsoft Learn, "Azure Monitor agent overview," Introduction section.
3. Google Cloud Documentation. The Ops Agent is Google's solution for collecting detailed telemetry from within Compute Engine instances. "The Ops Agent is the primary agent for collecting telemetry from your Compute Engine instances. It collects both logs and metrics." The agent can be configured to collect process metrics.
Source: Google Cloud Documentation, "Ops Agent overview," What the Ops Agent collects section.
4. Armbrust, M., et al. (2010). A View of Cloud Computing. This foundational academic paper from UC Berkeley discusses the challenges of cloud monitoring, implying the need for mechanisms beyond the hypervisor to understand application-level performance. The distinction between what the infrastructure provider can see (hypervisor-level) and what the user needs to see (in-guest) necessitates agent-based approaches for detailed monitoring.
Source: Communications of the ACM, 53(4), 50-58. Section 5.3, "Monitoring and Auditing." DOI: https://doi.org/10.1145/1721654.1721672
Question 12
A. The security team modified user permissions.
This would typically result in an "Access Denied" or "403 Forbidden" error after a successful connection, not a connection failure related to the TLS protocol version.
C. Privileged access was implemented.
Privileged Access Management (PAM) controls administrative accounts and elevated permissions; it does not govern standard user access to a web application.
D. The firewall was modified.
While a firewall can block traffic, rules are typically based on IP addresses and ports, not the specific TLS version. A server-side protocol configuration is a more direct cause.
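A brief Python illustration of the likely root cause: once a server-side SSL context enforces TLS 1.2 as the minimum version, clients that only support TLS 1.1 fail the handshake outright, which matches the reported symptom.

# Sketch: an SSL context that refuses TLS 1.0/1.1. Clients limited to TLS 1.1
# fail the handshake with a protocol error rather than an authorization error.
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # TLS 1.0/1.1 connections rejected
print(context.minimum_version)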
1. National Institute of Standards and Technology (NIST). (2019). Special Publication (SP) 800-52r2, Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations.
Section 3.1, Protocol Versions, Page 6: "Servers that support government-only applications shall be configured to use TLS 1.3 and should be configured to use TLS 1.2. These servers shall not be configured to use TLS 1.1 and shall not be configured to use TLS 1.0, SSL 3.0, or SSL 2.0." This document mandates the disabling of TLS 1.1 on servers to enhance security.
2. Internet Engineering Task Force (IETF). (2021). RFC 8996: Deprecating TLS 1.0 and TLS 1.1.
Abstract: "This document formally deprecates Transport Layer Security (TLS) versions 1.0 (RFC 2246) and 1.1 (RFC 4346)... These versions lack support for current and recommended cryptographic algorithms and mechanisms, and various government and industry profiles now mandate avoiding these old TLS versions." This RFC provides the official rationale for discontinuing TLS 1.1 due to its vulnerabilities.
3. Microsoft Corporation. (2023). Solving the TLS 1.0 Problem, 2nd Edition. Security documentation.
Section: Disabling TLS 1.0 and 1.1: The document details the security risks of older TLS versions and provides technical guidance for administrators to disable them across their infrastructure to mitigate vulnerabilities, which directly aligns with the scenario in the question.
Question 13
A. Persistent: This is incorrect because the data is intentionally deleted from the local storage after being moved. Persistent storage is designed for long-term data retention.
C. Differential: This is a backup methodology that captures changes made since the last full backup; it is not a type of storage.
D. Incremental: This is a backup methodology that captures changes made since the last backup of any type; it is not a type of storage.
---
1. Amazon Web Services (AWS) Documentation. "Amazon EC2 Instance Store." In Amazon EC2 User Guide for Linux Instances. "An instance store provides temporary block-level storage for your instance... Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content..." This aligns with the scenario where local storage acts as a temporary buffer.
2. Google Cloud Documentation. "Local SSDs overview." In Compute Engine Documentation. "The data that you store on a local SSD persists only until the instance is stopped or deleted. For this reason, local SSDs are only suitable for temporary storage such as cache, processing space, or low value data." This source defines the temporary nature of ephemeral storage.
3. Armbrust, M., et al. (2009). Above the Clouds: A Berkeley View of Cloud Computing. University of California, Berkeley, EECS Department, Technical Report No. UCB/EECS-2009-28. Section 3.2, "Storage," distinguishes between persistent storage services (e.g., Amazon S3) and temporary storage that is tied to the lifecycle of a compute instance, highlighting the concept of non-persistent, or ephemeral, data.
4. Microsoft Azure Documentation. "Temporary disk on Azure VMs." In Azure Virtual Machines Documentation. "The temporary disk provides temporary storage for applications and processes and is intended to only store data such as page or swap files... Data on the temporary disk may be lost during a maintenance event..." This further exemplifies the non-permanent nature of ephemeral storage in a cloud context.
Question 14
A. The engineer implemented MFA to access the WAF configurations.
This hardens the WAF's management plane, not the traffic flow to the protected applications, which is the primary function described.
C. The engineer installed the latest security patches on the WAF.
Patching is a critical maintenance activity for hardening, but it is not typically described as a configuration change in the context of traffic filtering rules.
D. The engineer completed an upgrade from TLS version 1.1 to version 1.3.
This is a valid hardening configuration, but it does not utilize the key piece of information provided in the scenario: that the company operates exclusively in North America.
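The following Python sketch shows the allow-list decision behind a geographic-match WAF rule; in a real WAF the country is resolved from the client IP, and the country codes here are illustrative.

# Sketch of the allow-list logic behind a WAF geographic-match rule.
# A real WAF derives the country from the source IP; here it is passed in
# directly for illustration.
ALLOWED_COUNTRIES = {"US", "CA", "MX"}   # North America only

def waf_decision(client_country_code: str) -> str:
    return "allow" if client_country_code in ALLOWED_COUNTRIES else "block"

print(waf_decision("US"))   # allow
print(waf_decision("BR"))   # block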
---
1. AWS WAF Developer Guide. (Vendor Documentation). AWS documentation explicitly describes using a "Geographic match rule statement" to inspect and control web requests based on their country of origin. This directly supports the concept of geoblocking as a WAF configuration.
Reference: AWS WAF Developer Guide, "Rule statement list," Section: "Geographic match rule statement."
2. Microsoft Azure Documentation. (Vendor Documentation). Azure's documentation for its WAF details the creation of custom rules, which can use "Geographical location" as a match condition to allow or block traffic based on the client's IP address origin.
Reference: Microsoft Docs, "Custom rules for Web Application Firewall v2 on Azure Application Gateway," Section: "Match variables."
3. NIST Special Publication 800-53 Revision 5. (Peer-Reviewed Academic Publication/Standard). This publication outlines security and privacy controls. Control AC-4, "Information Flow Enforcement," and its enhancement AC-4(17) "Geolocation" specify the enforcement of information flow control based on the geolocation of the source, validating this as a standard security practice.
Reference: NIST SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations, Page 101, Control: AC-4(17).
4. Cloudflare Learning Center. (Vendor Documentation). Cloudflare, a major provider of WAF services, explains IP Access Rules, which can be used to block traffic from specific countries. This is presented as a primary method for securing applications from regional threats.
Reference: Cloudflare Learning Center, "What is a WAF?", Section: "How does a WAF work?". The article discusses WAF policies, including those based on geolocation.
Question 15
A. Blue-green: This strategy is not low-cost because it requires running two identical, parallel production environments simultaneously, which doubles the infrastructure expense during the deployment process.
C. In-place: This method, also known as a recreate deployment, involves stopping the application, deploying the new version, and restarting, which inherently causes service interruptions.
D. Canary: While a phased approach, a canary release is primarily for risk mitigation by testing new code on a small subset of users and can add complexity and overhead compared to a straightforward rolling update.
1. Google Cloud Documentation, "Application deployment and testing strategies." This document describes a rolling update as a strategy where you "slowly replace instances of the previous version of your application with instances of the new version... a rolling update avoids downtime." It contrasts this with blue-green, which has a higher "monetary cost" due to resource duplication. (See section: "Rolling update deployment strategy").
2. Amazon Web Services (AWS) Whitepaper, "Blue/Green Deployments on AWS," PDF, Page 4. The paper states, "A potential downside to this [blue-green] approach is that you will have double the resources running in production... This will result in a higher bill for the duration of the upgrade." This confirms the high-cost nature of blue-green deployments.
3. Red Hat OpenShift Container Platform 4.6 Documentation, "Understanding deployment strategies." The documentation explains that the "Rolling" strategy (the default in OpenShift/Kubernetes) "wait[s] for new pods to become ready... before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted." This highlights its zero-downtime and phased nature without requiring duplicate infrastructure. (See section: "Rolling Strategy").
Question 16
A. This describes the method of scaling (horizontal) but not the automated process for triggering it, which is the core of the question's requirements for seamless and cost-effective management.
C. Adjusting resources via the cloud portal is a manual process. This contradicts the requirements for automation and seamless operation, as it would require constant monitoring and intervention.
D. Scheduled scaling is not optimal for a variable load. It risks either over-provisioning resources (increasing costs) if the sale is less popular than expected or under-provisioning (causing outages) if it is more popular.
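As a simplified illustration, the Python sketch below shows the threshold-based decision an autoscaling policy evaluates automatically on each cycle; the CPU thresholds and step size are illustrative, not any provider's defaults.

# Sketch of the decision a load-triggered autoscaling policy makes each cycle.
# Thresholds and step size are illustrative values only.
def desired_instance_count(current: int, avg_cpu_percent: float) -> int:
    if avg_cpu_percent > 70:                      # load spike: scale out
        return current + 1
    if avg_cpu_percent < 30 and current > 1:      # idle capacity: scale in
        return current - 1
    return current

print(desired_instance_count(4, 85.0))   # 5 - add capacity during the sale
print(desired_instance_count(5, 20.0))   # 4 - shed idle capacity afterwards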
1. National Institute of Standards and Technology (NIST) Special Publication 800-145, The NIST Definition of Cloud Computing.
Reference: Page 2, Section 2, "Essential Characteristics." The document defines "Rapid elasticity" as a key characteristic where "Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand." This directly supports the principle of load-triggered adjustments.
2. Amazon Web Services (AWS) Documentation, "What is AWS Auto Scaling?".
Reference: AWS Auto Scaling User Guide. The documentation states, "AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost." It describes dynamic scaling policies that respond to changing demand, which aligns with allowing the load to trigger adjustments.
3. Microsoft Azure Documentation, "Overview of autoscale in Microsoft Azure".
Reference: Azure Monitor documentation. It explains, "Autoscale allows you to have the right amount of resources running to handle the load on your app. It allows you to add resources to handle increases in load (scale out) and also save money by removing resources that are sitting idle (scale in)." This confirms that load-based triggers are the standard for cost-effective, automated scaling.
4. Erl, T., Mahmood, Z., & Puttini, R. (2013). Cloud Computing: Concepts, Technology & Architecture. Prentice Hall.
Reference: Chapter 5, Section 5.3, "Cloud Characteristics." The text describes the "Elastic Resource Capacity" characteristic, which is enabled by an "Automated Scaling Listener" mechanism that monitors requests and triggers the automatic allocation of IT resources in response to load fluctuations. This academic source validates option B as the correct architectural approach.
Question 17
An organization's critical data was exfiltrated from a computer system in a cyberattack. A cloud analyst wants to identify the root cause and is reviewing the following security logs of a software web application:
"2021/12/18 09:33:12" "10. 34. 32.18" "104. 224. 123. 119" "POST /
login.php?u=administrator&p=or%201%20=1"
"2021/12/18 09:33:13" "10.34. 32.18" "104. 224. 123.119" "POST /login.
php?u=administrator&p=%27%0A"
"2021/12/18 09:33:14" "10. 34. 32.18" "104. 224. 123. 119" "POST /login.
php?u=administrator&p=%26"
"2021/12/18 09:33:17" "10.34. 32.18" "104. 224. 123.119" "POST /
login.php?u=administrator&p=%3B"
"2021/12/18 09:33:12" "10.34. 32. 18" "104. 224. 123. 119" "POST / login.
php?u=admin&p=or%201%20=1"
"2021/12/18 09:33:19" "10.34.32.18" "104. 224. 123.119" "POST / login. php?u=admin&p=%27%0A"
"2021/12/18 09:33:21" "10. 34. 32.18" "104.224. 123.119" "POST / login. php?u=admin&p=%26"
"2021/12/18 09:33:23" "10. 34. 32.18" "104. 224. 123.119" "POST / login. php?u=admin&p=%3B"
Which of the following types of attacks occurred?
B. Cross-site scripting: This is incorrect because the logs show SQL syntax injection, not the injection of client-side scripts (e.g., <script> tags) into a web page.
C. Reuse of leaked credentials: This is incorrect as the attacker is not using a valid, previously compromised password but is instead attempting to bypass the login mechanism with malformed input.
D. Privilege escalation: This describes a potential outcome or goal of an attack, not the attack method itself. The specific technique evidenced in the logs is SQL injection.
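Decoding the URL-encoded password values from the log entries makes the injection attempt obvious; the short Python snippet below uses the standard library to do so.

# Decode the URL-encoded password values from the log entries above.
# %20 is a space, %27 a single quote, %26 an ampersand, %3B a semicolon.
from urllib.parse import unquote

payloads = ["or%201%20=1", "%27%0A", "%26", "%3B"]
for p in payloads:
    print(repr(unquote(p)))
# "or 1 =1" is a classic tautology: injected into a WHERE clause it makes the
# login condition always true, e.g. ... WHERE user='administrator' AND pass='' or 1=1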
1. OWASP Foundation. (2021). OWASP Top 10:2021, A03:2021-Injection. OWASP. Retrieved from https://owasp.org/Top10/A03_2021-Injection/. The "Attack Scenarios" section describes how an attacker can use SQL injection, such as ' OR '1'='1, to bypass authentication.
2. Amazon Web Services (AWS). (2023). SQL injection attack rule statement. AWS WAF, AWS Firewall Manager, and AWS Shield Advanced Developer Guide. Retrieved from https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-use-case-sql-db.html. This official vendor documentation details how WAFs detect SQLi by looking for patterns like "tautologies such as 1=1 and 0=0."
3. Kar, D., Pan, T. S., & Das, R. (2021). SQLi-IDS: A real-time SQL injection detection system using a hybrid deep neural network. Computers & Security, 108, 102341. https://doi.org/10.1016/j.cose.2021.102341. Section 2.1, "Tautology-based SQLIA," explicitly discusses the use of OR 1=1 as a primary technique for bypassing user authentication.
4. Zelle, D., & Kamin, S. (2019). Web Application Security. University of Illinois at Urbana-Champaign, CS 461/ECE 422 Course Notes. Retrieved from https://courses.engr.illinois.edu/cs461/sp2019/slides/Lecture20-WebAppSecurity.pdf. Slide 22 provides a canonical example of a tautology-based SQL injection attack using ' OR 1=1 -- to bypass a login form.
Question 18
A. Partial service outages: A service outage would likely affect both new and existing services and is typically a temporary, unscheduled event, not a consistent barrier to creating new resources.
B. Regional service availability: This would mean the VDI service is entirely unavailable in certain regions, preventing the creation of any instances, not just failing after some have been deployed.
D. Deprecation of functionality: Deprecation is the planned retirement of a service or feature. This would typically result in failures across all regions, not a location-specific issue.
1. Amazon Web Services (AWS) Documentation: "Service Quotas." AWS states, "Quotas, also referred to as limits in AWS, are the maximum number of resources that you can create in an AWS account... Many quotas are specific to an AWS Region." This confirms that resource limits are a regional constraint.
Source: AWS Documentation, "What Is Service Quotas?", https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html
2. Microsoft Azure Documentation: "Azure subscription and service limits, quotas, and constraints." The documentation details how quotas are applied per subscription and per region. For example, under "Virtual machine vCPU quotas," it states, "vCPU quotas are arranged in two tiers for each subscription, in each region."
Source: Microsoft Azure Documentation, "vCPU quotas," https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#vcpu-quotas
3. Google Cloud Documentation: "Working with quotas." The documentation specifies the scope of quotas: "Quotas are enforced on a per-project, per-region, or per-zone basis." This directly supports the concept of location-specific resource creation failures due to limits.
Source: Google Cloud Documentation, "About quotas," https://cloud.google.com/docs/quota#aboutquotas
4. Armbrust, M., et al. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50-58. This foundational academic paper on cloud computing discusses elasticity as a key feature but also notes the practical limitations imposed by providers to manage resources, which manifest as quotas. The paper implicitly supports the idea that resource provisioning is not infinite and is subject to provider-imposed controls.
DOI: https://doi.org/10.1145/1721654.1721672 (Section 3.1, "Elasticity and the Illusion of Infinite Resources")
Question 19
A. Snapshot: A snapshot is a point-in-time copy of a virtual machine or storage volume, primarily used for backup and recovery, not for deploying application code.
C. Serverless function: A serverless function is a piece of code that runs in a managed environment. While it is a method of deploying code, the container image is the packaging and delivery mechanism that ensures consistency across environments.
D. VM template: A VM template is a master copy of a virtual machine, including the full operating system. It is heavyweight and much slower to deploy than a container, making it inefficient for rapid code delivery.
1. National Institute of Standards and Technology (NIST). (2017). NIST Special Publication 800-190: Application Container Security Guide.
Section 2.1, "What are Application Containers?": "An application container is a portable image that can be used to create one or more instances of a container. The image includes an application, its libraries, and its dependencies... This allows the application to be abstracted from the host operating system, providing portability and consistency across different environments." (Page 7). This directly supports the use of container images for consistency across environments.
2. Armbrust, M., et al. (2009). Above the Clouds: A Berkeley View of Cloud Computing. University of California, Berkeley.
Section 3.1, "Virtual Machines": This paper discusses Virtual Machine Images (templates) as a way to bundle a full software stack. However, it highlights their size and startup time, contrasting with more modern, lightweight approaches. The principles laid out show why heavier VM templates are less efficient for rapid deployment compared to containers. (Page 4).
3. AWS Documentation. (n.d.). What is a Container?. Amazon Web Services.
"A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings." This official vendor documentation reinforces the role of container images in ensuring application portability and rapid deployment.
Question 20
A. Splicing: Splicing involves joining or connecting things. In the context of files, this would mean combining logs, which would create even larger files and worsen the storage problem.
C. Sampling: Log sampling involves collecting only a subset of log events. This is unsuitable for troubleshooting intermittent issues, as the specific events needed for diagnosis might not be captured.
D. Inspection: Log inspection is the process of analyzing or reviewing log data to identify issues. It is the action the engineer is performing, not a mechanism to manage log file storage.
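As a concrete example of rotation, the Python sketch below uses the standard library's RotatingFileHandler to cap log size and keep a fixed number of archives; the file name, size limit, and backup count are illustrative values.

# Sketch of size-based log rotation using Python's standard library; the
# file name, size cap, and backup count are illustrative values.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler("app.log", maxBytes=10_000_000, backupCount=5)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.warning("older data rolls into app.log.1 ... app.log.5 instead of one ever-growing file")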
---
1. National Institute of Standards and Technology (NIST). (2006). Guide to Computer Security Log Management (Special Publication 800-92).
Section 3.2.3, "Log Rotation and Archiving," Page 3-5: "Log rotation is the practice of closing a log file and opening a new one on a scheduled basis... Log rotation is performed primarily to keep log files from becoming too large. Once a log file is rotated, it is often compressed to save storage space." This document explicitly defines log rotation as the solution for managing large log files.
2. Red Hat. (2023). Red Hat Enterprise Linux 8: Configuring basic system settings.
Chapter 21, "Managing log files with logrotate," Section 21.1: "The logrotate utility allows the automatic rotation, compression, removal, and mailing of log files. Each log file can be handled daily, weekly, monthly, or when it grows too large." This official vendor documentation describes the exact mechanism and its purpose, which aligns with the scenario.
3. AWS Documentation. (2024). Amazon CloudWatch Logs User Guide.
Section: "Working with log groups and log streams - Log retention": "By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping the indefinite retention, or choosing a retention period... CloudWatch Logs automatically deletes log events that are older than the retention setting." This describes the cloud-native equivalent of log rotation for managing log storage.
Question 21
A. Consistent: This term describes a state of data or a system (e.g., consistent backup), not a type of software version update.
B. Major: A major update involves significant, often backward-incompatible changes and is denoted by a change in the first version number (e.g., from 3.x.x to 4.x.x).
D. Ephemeral: This describes resources that are temporary or short-lived, such as ephemeral storage, and is unrelated to software update classifications.
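A small Python sketch of how the 3.4.0 to 3.4.1 change is classified under the major.minor.patch convention (a change to the rightmost numbers is a non-breaking, minor/patch-level update):

# Classify a version bump: a change in the first number is a major (breaking)
# update; changes further right are minor/patch-level, non-breaking updates.
def bump_type(old: str, new: str) -> str:
    o, n = [tuple(map(int, v.split("."))) for v in (old, new)]
    return "major" if n[0] != o[0] else "minor/patch (non-breaking)"

print(bump_type("3.4.0", "3.4.1"))   # minor/patch (non-breaking)
print(bump_type("3.4.1", "4.0.0"))   # major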
---
1. Massachusetts Institute of Technology (MIT) OpenCourseWare. (2013). 6.170 Software Studio, Lecture 20: Versioning. MIT. In the discussion of versioning schemes, the lecture outlines the major.minor.micro (or patch) convention, where changes to the last number represent small bug fixes. The update from 3.4.0 to 3.4.1 fits the description of a micro/patch-level change, which falls under the general category of a minor, non-breaking update.
Reference: Section on "Semantic Versioning" in the lecture notes.
2. Microsoft Azure Documentation. (2023). REST API versioning. Microsoft Learn. While specific to APIs, the documentation explains the industry-standard concept of major and minor versions. It states, "A major version change indicates a breaking change... A minor version change is for non-breaking changes." The update from 3.4.0 to 3.4.1 is a non-breaking security fix, aligning with the definition of a minor change.
Reference: Section "Versioning the API".
3. Parnas, D. L. (1979). Designing Software for Ease of Extension and Contraction. IEEE Transactions on Software Engineering, SE-5(2), 128-138. This foundational academic paper discusses software modularity and evolution, implicitly supporting the idea of structured versioning where minor changes (like bug fixes) are handled with minimal disruption, distinct from major functional revisions.
DOI: https://doi.org/10.1109/TSE.1979.234170 (The principles discussed underpin modern versioning practices).
Question 22
A. Grafana: Grafana is a data visualization and monitoring tool. It queries data sources to create dashboards but does not perform data transformation on raw logs during ingestion.
B. Kibana: Kibana is the visualization layer for the Elastic Stack. It is used to explore and visualize data already stored in Elasticsearch, not to transform it beforehand.
C. Elasticsearch: Elasticsearch is a search and analytics engine for storing and indexing data. While it has ingest node capabilities, Logstash is the dedicated, more powerful tool for complex transformations.
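To show what "flattening" means in this context, the Python sketch below converts nested JSON fields into dotted keys, which is conceptually what a Logstash filter stage does before the event is indexed; the field names are made-up examples.

# Sketch of flattening nested JSON into dotted keys, the kind of transformation
# a Logstash filter stage applies before an event is indexed. Field names are
# made-up examples.
def flatten(obj: dict, prefix: str = "") -> dict:
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, f"{name}."))
        else:
            flat[name] = value
    return flat

event = {"client": {"ip": "198.51.100.7", "geo": {"country": "ES"}}, "status": 200}
print(flatten(event))
# {'client.ip': '198.51.100.7', 'client.geo.country': 'ES', 'status': 200}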
1. Elasticsearch B.V. (2023). Logstash Reference [8.11] - How Logstash Works. Elastic.co. In the "Logstash processing pipeline" section, it is detailed that the "Filters" stage is where data is manipulated. The document states, "Filters are intermediary processing devices in the Logstash pipeline... you can derive structure from unstructured data." This directly supports Logstash's role in transforming data like flattening JSON. (Reference: https://www.elastic.co/guide/en/logstash/current/introduction.html#logstash-pipeline)
2. Elasticsearch B.V. (2023). Logstash Reference [8.11] - Json filter plugin. Elastic.co. The documentation for this specific filter states its purpose is to "parse JSON events." This is the first step in being able to access and flatten nested fields from a JSON log entry. (Reference: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html)
3. Fox, A., & Patterson, D. (2016). CS 169: Software Engineering, Lecture 22: DevOps. University of California, Berkeley. The course materials describe the ELK stack (Elasticsearch, Logstash, Kibana), explicitly identifying Logstash as the component responsible for "log processing and parsing" before data is sent to Elasticsearch for indexing and Kibana for visualization. (Reference: Slide 22-23, "Logging and Monitoring with ELK," available via Berkeley's course archives).
Question 23
A. CWE: The Common Weakness Enumeration (CWE) is a dictionary of common software and hardware weakness types. It classifies the type of flaw, not the severity of a specific vulnerability instance.
C. CWSS: The Common Weakness Scoring System (CWSS) scores the severity of weaknesses (CWEs) in a general context, often during development, not the risk of a specific vulnerability in a deployed product.
D. CVE: Common Vulnerabilities and Exposures (CVE) provides a unique identification number for a specific, publicly known vulnerability. The CVE entry contains a CVSS score but is not the system used to calculate it.
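For reference, CVSS v3.1 maps the numeric base score to a qualitative severity band; the small Python helper below encodes the bands from the FIRST.org specification cited below.

# Map a CVSS v3.1 base score to its qualitative severity band
# (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0).
def cvss_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))   # Critical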
1. FIRST.org, Inc. (2019). Common Vulnerability Scoring System v3.1: Specification Document. Section 1, "Introduction". "The Common Vulnerability Scoring System (CVSS) is an open framework for communicating the characteristics and severity of software vulnerabilities."
Available at: https://www.first.org/cvss/v3.1/specification-document
2. National Institute of Standards and Technology (NIST). (n.d.). NVD - CVSS v3 Calculator. National Vulnerability Database. The NVD, a primary source for vulnerability data, uses CVSS to score vulnerabilities. The glossary defines CVE as an identifier and CVSS as the scoring system.
Reference: The NVD's use and explanation of CVSS scores for CVE entries, such as CVE-2021-44228.
3. The MITRE Corporation. (2023). About CVE. The CVE Program. "CVE is a list of entries – each containing an identification number... for publicly known cybersecurity vulnerabilities." This clarifies that CVE is an identifier, not a scoring methodology.
Available at: https://www.cve.org/About/Overview
4. The MITRE Corporation. (2023). About CWE. Common Weakness Enumeration. "CWE is a community-developed list of common software and hardware weakness types that have security ramifications." This defines CWE as a classification system for types of flaws.
Available at: https://cwe.mitre.org/about/index.html
Question 24
B. Using watermarked images: Watermarking is used to embed ownership or tracking information into a digital asset; it does not provide any security hardening or vulnerability mitigation.
C. Using digitally signed images: Digital signatures verify the image's authenticity (who created it) and integrity (it has not been tampered with). However, a signed image can still contain vulnerabilities.
D. Using images that have an application firewall: An application firewall is a runtime security control that inspects network traffic. It is not a component built into a container image itself.
1. National Institute of Standards and Technology (NIST). (2017). Special Publication (SP) 800-190, Application Container Security Guide.
Section 4.1, "Image Hardening," states: "Organizations should harden images by modeling them after security configuration guidance from trusted sources, such as the Center for Internet Security (CIS) Benchmarks or the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs)." This directly supports using hardened images to address vulnerabilities.
2. Google Cloud. (n.d.). Security best practices for building containers.
In the section "Use a minimal base image," the documentation advises, "Using a hardened base image that is maintained by the image's distributor can also provide a good starting point." This aligns with the principle of using pre-secured images like those hardened to CIS standards.
3. Amazon Web Services (AWS). (n.d.). Security Best Practices for Amazon Elastic Kubernetes Service (EKS).
Under the "Instance security" section, AWS recommends using Amazon EKS optimized AMIs, which are configured for security. The document states, "You can also create your own custom AMI using a hardened operating system such as CIS." This principle of using hardened base images extends from the host OS to the container images running on it.
Question 25

B. Option B: This option uses the -gt (greater than) operator, which would incorrectly execute the command only if the volume size was already greater than 100GB.
C. Option C: This option uses a while loop. A while loop repeatedly executes the command as long as the condition is true, which is inappropriate for a single-action task and could cause an infinite loop.
D. Option D: This option is incorrect for two reasons: it uses the wrong comparison operator (-gt) and an inappropriate control structure (while loop) for a single conditional action.
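Restating the intended check in Python makes the operator direction explicit; the size values are illustrative.

# The decision the correct option expresses with "-lt" in Bash, restated in
# Python: act only when the volume is still below the 100 GB target.
# (A ">" comparison, like Bash "-gt", would invert the logic; a loop is not
# needed for a one-time resize.)
volume_size_gb = 60   # illustrative current size

if volume_size_gb < 100:
    print("expand volume to 100 GB")   # stands in for the resize command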
1. GNU Bash Reference Manual. (2022). 6.4 Bash Conditional Expressions. Free Software Foundation. Retrieved from https://www.gnu.org/software/bash/manual/html_node/Bash-Conditional-Expressions.html.
This official documentation details the syntax for conditional expressions in Bash. It specifies that -lt is the operator to be used for numerical "is less than" comparisons within a test construct ([ ... ]).
2. MIT OpenCourseWare. (2020). The Missing Semester of Your CS Education, Lecture 2: Shell Tools and Scripting. Massachusetts Institute of Technology. Retrieved from https://missing.csail.mit.edu/2020/shell-tools/.
Under the "Shell Scripting" section, the courseware explains the use of if, then, else, fi constructs for conditional logic and demonstrates the use of test operators like -lt for numerical comparisons, confirming the structure in Option A is correct.
Question 26
A. Network traffic is balanced between the main site and hot site servers.
This describes an active-active configuration for load balancing or high availability, which is one possible implementation of a hot site, but not its defining characteristic. A hot site can also be active-passive (standby).
B. Offline server backups are replicated hourly from the main site.
This process is more characteristic of a warm site. A hot site typically uses near real-time data replication (synchronous or asynchronous) rather than less frequent, offline backups, to achieve a much lower RPO.
D. Which of the following best describes a characteristic of a hot site?
This is a repetition of the question stem and not a valid answer choice.
1. National Institute of Standards and Technology (NIST) Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems.
Section 4.3.2, Alternate Site: "A hot site is a fully configured alternate processing site, ready to be occupied and begin operations within a few hours of a disaster declaration. Hot sites include all necessary hardware and up-to-date software, data, and supplies." This supports the concept of replicated, online, and ready systems.
2. Amazon Web Services (AWS), Disaster Recovery (DR) Architecture on AWS, Part III: Pilot Light and Warm Standby.
Warm Standby Section, Paragraph 1: The "Warm Standby" approach, which shares key characteristics with a hot site, is described as having "a scaled-down but fully functional copy of your production environment" always running in another region. This aligns with the "online status" of replicated servers.
3. Microsoft Azure, Disaster recovery and high availability for Azure applications.
Section: Active-passive with hot standby: "A hot standby is a secondary region where you have deployed all your application's components and it is ready to receive production traffic... The secondary region is active and ready to receive traffic." This directly corroborates that a hot site has replicated servers in an online, ready state.
Question 27
Show Answer
A. Object: Object storage is a persistent storage service external to the container. Data is managed as objects and is not lost when a container restarts.
B. Persistent volume: A persistent volume is an abstraction for a piece of storage that exists independently of a container's or pod's lifecycle, explicitly designed to preserve data.
D. Block: Block storage provides persistent volumes that can be attached to containers. The data on these volumes is independent of the container's lifecycle and survives restarts.
---
1. Kubernetes Documentation, "Volumes". The official Kubernetes documentation describes ephemeral volume types like emptyDir. It states, "When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever." This confirms that data in this type of volume is lost when the container's pod is terminated. (Source: Kubernetes.io, Concepts > Storage > Volumes, Section: emptyDir).
2. Docker Documentation, "Manage data in Docker". The official Docker documentation explains that data not stored in a volume is written to the container's writable layer. It clarifies, "The data doesn't persist when that container is no longer running, and it can be difficult to get the data out of the container if another process needs it." (Source: Docker Docs, Storage > Volumes > "Manage data in Docker").
3. Red Hat Official Documentation, "Understanding container storage". This vendor documentation distinguishes between ephemeral and persistent storage. For ephemeral storage, it notes, "The storage is tightly coupled with the container's life cycle. If the container crashes or is stopped, the storage is lost." (Source: Red Hat Customer Portal, OpenShift Container Platform 4.10 > Storage > "Understanding container storage", Section: "Ephemeral storage").
4. University of California, Berkeley, "CS 162: Operating Systems and System Programming", Lecture 19: "Virtual Machines, Containers, and Cloud Computing". Course materials describe the container file system as a series of read-only layers topped by a writable layer specific to the running container. This top writable layer is ephemeral and is discarded when the container is destroyed. (This concept is covered in typical advanced operating systems and cloud computing curricula.)
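The ephemeral-versus-persistent distinction is easy to demonstrate with Docker named volumes; in this hedged sketch the volume name and file paths are arbitrary, and the behavior shown matches the documentation quoted above.

    # Data written to a named volume outlives the containers that wrote it.
    docker volume create demo-data
    docker run --rm -v demo-data:/data alpine sh -c 'echo hello > /data/file.txt'
    docker run --rm -v demo-data:/data alpine cat /data/file.txt   # prints "hello"

    # Data written only to the container's writable layer is discarded with the container.
    docker run --rm alpine sh -c 'echo hello > /tmp/file.txt'   # file is gone once this exits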
Question 28
Show Answer
B. Blue-green: This model requires a complete duplicate of the production environment, which is not cost-effective. It also involves switching all traffic at once, not targeting a subset.
C. Rolling: This deployment updates instances incrementally but typically does not target a specific user subset; traffic is usually distributed randomly across old and new versions during the update.
D. In-place: This method updates the application on the existing infrastructure, which affects all users simultaneously and typically involves downtime, failing to meet the targeting requirement.
1. Google Cloud Architecture Center. (2023). Application deployment and testing strategies. "In a canary test, you roll out a change to a small subset of users. This approach lets you test the change in production with real user traffic without affecting all of your users." This document contrasts canary with blue-green and rolling deployments.
2. AWS Prescriptive Guidance. (2023). Implement a canary deployment strategy. "A canary release is a deployment strategy that releases an application or service increment to a small subset of users. This strategy helps you test a new version of your application in a production environment with real user traffic."
3. Microsoft Azure Documentation. (2023). Deployment strategies. "Canary: Deploy changes to a small set of servers to start. Route a specific percentage of users to them. Then, roll out to more servers while you monitor performance." This explicitly mentions routing a subset of users.
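As a simplified illustration of the canary idea, one common approach in Kubernetes is to run a small number of new-version replicas behind the same Service as the stable version so that only a fraction of requests reach the canary; the Deployment names and replica counts below are assumptions, and precise percentage-based routing usually requires a traffic-management layer such as a service mesh or load balancer rules.

    # Assumes "web-stable" and "web-canary" Deployments share the same Service selector.
    kubectl scale deployment/web-stable --replicas=9
    kubectl scale deployment/web-canary --replicas=1   # roughly 10% of traffic hits the new version
    # Monitor error rates and latency for the canary pods, then scale up or roll back.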
Question 29
Show Answer
A. Cloning creates a point-in-time copy of a virtual machine. It is a provisioning or backup method, not a mechanism for real-time high availability or resilience.
C. Hardware passthrough is a virtualization technique that grants a virtual machine direct access to physical hardware, primarily for performance, not for multi-region availability.
D. A stand-alone container is, by definition, a single instance. It represents a single point of failure and lacks the inherent redundancy needed for high availability and resilience.
1. AWS Well-Architected Framework, Reliability Pillar (July 31, 2023). This official AWS documentation details strategies for achieving high availability. In the section "REL 4: How do you design your workload architecture to withstand component failures?", it discusses using redundant components across multiple locations (Availability Zones and Regions). Clustering is a core implementation of this principle. The document states, "Deploy the workload to multiple locations... For example, a cluster with an odd number of instances can withstand the failure of a single instance." (p. 31).
2. Google Cloud. (2023). Application deployment and testing strategies. This official Google Cloud documentation outlines architectural patterns for reliability. In the section on "Multi-region deployment," it explains that distributing an application across multiple regions improves availability and reduces latency for users by serving them from the nearest region, a key feature of multi-region clustering.
3. Armbrust, M., et al. (2009). Above the Clouds: A Berkeley View of Cloud Computing. University of California, Berkeley, RAD Lab Technical Report No. UCB/EECS-2009-28. This foundational academic paper discusses high availability as a key advantage of cloud computing. It states, "Cloud Computing must provide the illusion of infinite computing resources available on demand... and build on fault-tolerant hardware and software, using techniques like clusters and automatic failover, to maintain availability despite failures." (Section 3.1, p. 4).
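As one purely illustrative example of clustering for availability, a managed Kubernetes cluster can be created as a regional cluster so that its nodes are spread across multiple zones; the cluster name and region below are placeholders, and genuinely multi-region resilience would pair this with a second cluster in another region behind global load balancing.

    # Illustrative GKE example: a regional cluster distributes nodes across the
    # region's zones, so losing a single zone does not take the workload offline.
    gcloud container clusters create demo-cluster \
        --region us-central1 \
        --num-nodes 1   # one node per zone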
Question 30
Show Answer
B. Browsing capabilities are a user interface feature of the repository service, not the core distinction, which is rooted in access control.
C. The choice of software license (open-source vs. proprietary) is independent of the repository's visibility setting; both license types can exist in either repository type.
D. A Dockerfile is used to build an image and is not typically stored within the image itself; its inspection is related to source code access, not the repository type.
1. Docker Documentation, "Repositories": "A repository can be public or private. Anyone can view and pull images from a public repository. You need permissions to pull images from a private repository. Private repositories are a great way to manage images you don't want to share publicly, such as images that contain proprietary source code or application data." (Reference: Docker Inc., Docker Docs, "Repositories", Section: "Public and private repositories").
2. Amazon Web Services (AWS) Documentation, "Amazon ECR User Guide": In its description of private repositories, the guide states, "...access can be controlled using both repository policies and IAM policies." For public repositories, it notes, "Anyone can browse and pull images from a public repository." This directly contrasts the access models, highlighting authorization for private and open access for public. (Reference: AWS, Amazon ECR User Guide, "Amazon ECR private repositories" and "Amazon ECR public repositories" sections).
3. Google Cloud Documentation, "Artifact Registry overview": "You can control access to your repositories by granting permissions to principals... Artifact Registry uses Identity and Access Management (IAM) to manage permissions." It further explains how to make a repository public by granting the reader role to allUsers, reinforcing that the default state is private and access is managed via authorization. (Reference: Google Cloud, Artifact Registry Documentation, "Configuring access control").
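The access-control distinction is visible from the client side: pulling from a public repository needs no credentials, while a private repository rejects the pull until the client authenticates; the registry host and image names in this sketch are placeholders.

    # Public repository: anonymous pulls are allowed.
    docker pull alpine:3.19

    # Private repository: the pull is denied until the client logs in with authorized credentials.
    docker login registry.example.com             # prompts for a username and password or token
    docker pull registry.example.com/team/app:1.0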