
Free Cloud Architect Practice Exam – 2025 Updated

Prepare smarter for your Cloud Architect exam with our free, accurate, and 2025-updated questions.

At Cert Empire, we are committed to providing the best and latest exam questions to students preparing for the Google Cloud Architect exam. To help students prepare better, we have made sections of our Cloud Architect exam preparation resources free for all. You can practice as much as you want with the free Cloud Architect practice test.

Question 1

A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform. What should you do?
Options
A: Help the engineer to convert his websocket code to use HTTP streaming.
B: Review the encryption requirements for websocket connections with the security team.
C: Meet with the cloud operations team and the engineer to discuss load balancer options.
D: Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
Correct Answer:
Meet with the cloud operations team and the engineer to discuss load balancer options.
Explanation
The application's architecture has two specific requirements that must be addressed at the infrastructure level: support for the WebSocket protocol and a mechanism to handle non-distributed HTTP sessions. The latter requirement implies the need for session affinity (or "sticky sessions"), ensuring that requests from a specific user are consistently routed to the same backend server. Google Cloud Load Balancing services are designed to manage this type of traffic distribution. Specifically, services like the External HTTP(S) Load Balancer natively support the WebSocket protocol and provide multiple options for configuring session affinity (e.g., based on generated cookies, client IP, or HTTP headers). Therefore, the most direct and appropriate action is to discuss these load balancer options to find a configuration that supports the application's existing design without requiring a code rewrite.
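For example, once a backend service is configured behind the external HTTP(S) load balancer, session affinity can be enabled with a single gcloud command. This is a minimal sketch; the backend service name and cookie TTL are placeholders:

# Enable cookie-based session affinity so a user's requests stick to one backend
gcloud compute backend-services update web-backend-service \
    --global \
    --session-affinity=GENERATED_COOKIE \
    --affinity-cookie-ttl=1h

WebSocket traffic needs no extra configuration, because the external HTTP(S) load balancer supports the protocol natively.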
Why Incorrect Options are Wrong

A. This suggests an unnecessary application redesign. Google Cloud's load balancers provide native support for websockets, so converting the code to a different protocol is not required.

B. While security is important, reviewing encryption does not solve the core architectural problem of routing traffic correctly to support websockets and non-distributed sessions.

D. This proposes a significant and costly application redesign as the first step. A Cloud Architect should first seek to support the existing application requirements with appropriate infrastructure before recommending a complete rewrite.

References

1. Google Cloud Documentation, "External HTTP(S) Load Balancer overview": This document explicitly states the load balancer's capabilities. Under the "Features" section, it lists "WebSocket support." The documentation explains: "Google Cloud Armor and Cloud CDN can be used with WebSockets. The WebSocket protocol... provides a full-duplex communication channel between a client and a server. The channel is initiated from an HTTP(S) request. The External HTTP(S) Load Balancer has native support for the WebSocket protocol."

2. Google Cloud Documentation, "Session affinity": This page details how to configure session affinity for various load balancers. For the Global External HTTP(S) Load Balancer, it states: "Session affinity sends all requests from the same client to the same virtual machine (VM) instance or endpoint... This is useful for applications that require stateful sessions." It then describes the different types, including Generated cookie affinity, Header field affinity, and HTTP cookie affinity, which directly address the "non-distributed sessions" requirement.

3. Google Cloud Architecture Framework, "System design pillar": This framework emphasizes selecting the right Google Cloud products and services to meet design requirements. The "Networking and connectivity principles" section guides architects to "Choose the right load balancer for your needs." This aligns with option C, which involves evaluating load balancer options to fit the application's specific websocket and session state requirements.

Question 2

You want to enable your running Google Container Engine cluster to scale as demand for your application changes. What should you do?
Options
A: Add additional nodes to your Container Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B: Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable --autoscaling max-nodes-10
C: Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D: Create a new Container Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application.
Correct Answer:
Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
Explanation
The most direct and appropriate method to enable automatic scaling on an existing, running Google Kubernetes Engine (GKE) cluster is to update its configuration. The gcloud container clusters update command is designed for this purpose. Using the --enable-autoscaling flag along with --min-nodes and --max-nodes parameters allows the cluster autoscaler to be activated and configured with the desired scaling boundaries. This modifies the cluster in-place without requiring workload migration.
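As a concrete sketch (the cluster name, zone, and node pool are placeholders; the alpha track is no longer required for this feature):

# Enable the cluster autoscaler on an existing cluster's default node pool
gcloud container clusters update mycluster \
    --zone=us-central1-a \
    --node-pool=default-pool \
    --enable-autoscaling \
    --min-nodes=1 --max-nodes=10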
Why Incorrect Options are Wrong

A: The resize command performs a one-time, manual scaling operation to a fixed number of nodes. It does not enable the cluster to scale automatically based on workload demand.

B: Adding tags to individual Compute Engine instances is used for networking rules or organization, not for enabling the GKE cluster autoscaler, which is a managed feature of the cluster itself.

D: This command creates an entirely new cluster. While it correctly enables autoscaling, it does not modify the running cluster as requested and would require a disruptive migration of all applications.

References

1. Google Cloud Documentation - Autoscaling a cluster: The official documentation explicitly provides the command for enabling cluster autoscaling on an existing cluster: gcloud container clusters update CLUSTER_NAME --enable-autoscaling --min-nodes=MIN_NODES --max-nodes=MAX_NODES. This directly supports option C as the correct procedure. (See "Enabling cluster autoscaling for an existing cluster" section).

Source: Google Cloud, "Autoscaling a cluster", cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler.

2. Google Cloud SDK Documentation - gcloud container clusters update: The reference for this command confirms its purpose is to "Update settings for a cluster" and lists --enable-autoscaling, --max-nodes, and --min-nodes as valid flags for managing the autoscaler.

Source: Google Cloud SDK, "gcloud container clusters update", cloud.google.com/sdk/gcloud/reference/container/clusters/update.

3. Google Cloud SDK Documentation - gcloud container clusters resize: This documentation clarifies that the resize command is for manual scaling: "This command is used for manual scaling. You can use this command to increase or decrease the number of nodes in a cluster." This confirms why option A is incorrect for automatic scaling.

Source: Google Cloud SDK, "gcloud container clusters resize", cloud.google.com/sdk/gcloud/reference/container/clusters/resize.

Question 3

You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace (shown in the exam as an image): a java.lang.SecurityException indicating that a SHA1 digest for a library under WEB-INF/lib (javax.servlet-api-3.0.1.jar) does not match the expected value.
Options
A: Recompile the CLoakedServlet class using an MD5 hash instead of SHA1
B: Digitally sign all of your JAR files and redeploy your application.
C: Upload missing JAR files and redeploy your application
Correct Answer:
Digitally sign all of your JAR files and redeploy your application.
Explanation
The stack trace indicates a java.lang.SecurityException because the SHA1 digest of a JAR file does not match the digest recorded in the JAR's signature metadata (the manifest and signature files under META-INF). This is a file integrity verification failure: the App Engine runtime is unable to verify that the JAR file is authentic and has not been tampered with. Digitally signing the JAR files with a trusted certificate regenerates the manifest digests and the signature that guarantee their integrity. Redeploying with properly signed JARs allows the runtime to verify the files successfully, resolving the security exception.
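A minimal sketch of signing and verifying a JAR with the standard JDK tools (the keystore, alias, and prompts are illustrative only):

# Create a signing key pair in a local keystore
keytool -genkeypair -alias appkey -keyalg RSA -keystore keystore.jks
# Sign the JAR; this records fresh digests and a signature under META-INF
jarsigner -keystore keystore.jks WEB-INF/lib/javax.servlet-api-3.0.1.jar appkey
# Confirm the signature and digests verify before redeploying
jarsigner -verify -verbose WEB-INF/lib/javax.servlet-api-3.0.1.jar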
Why Incorrect Options are Wrong

A. The error is with the javax.servlet-api-3.0.1.jar file's integrity, not a custom class. Changing the hash algorithm for a single class is irrelevant to the JAR verification process.

C. The error is a digest mismatch (does not match), not a ClassNotFoundException. This confirms the file is present but has failed an integrity check, rather than being missing.

References

1. Oracle Java Documentation, "Signing and Verifying JAR Files": This document explains the purpose of JAR signing. It states, "You can sign JAR files to ensure their integrity and authenticity... When a signed JAR file is loaded, the Java runtime can verify the signature to ensure that the file's contents have not been changed since it was signed." The SecurityException in the question is a direct result of this verification failing. (Source: Oracle, JDK 8 Documentation, The Java Tutorials, Deployment, Signing and Verifying JAR Files).

2. Google Cloud Documentation, "Java 8 Runtime Environment": The error log references the WEB-INF/lib/ directory, which is a standard part of the required application directory structure for Java applications on App Engine. This confirms the context is a standard Java web application deployment where such integrity checks are common. (Source: Google Cloud, App Engine standard environment for Java 8 documentation, "The Java 8 Runtime Environment", Section: "Organizing your files").

3. Princeton University, COS 432: Information Security, Lecture 18: Java Security: This courseware discusses the Java security model, including the sandbox, SecurityManager, and code signing. It explains that the SecurityManager is responsible for throwing a SecurityException when a security policy is violated, such as when code integrity cannot be verified via its signature. (Source: Princeton University, Department of Computer Science, COS 432, Lecture 18, Slides 15-18 on "Code Signing").

Question 4

You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?

Options
A: Tag messages client side with the originating user identifier and the destination user.
B: Encrypt the message client side using block-based encryption with a shared key.
C: Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
D: Use a trusted certificate authority to enable SSL connectivity between the client application and the server.

Correct Answer:
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
Explanation
The question requires a solution to prevent message spoofing by proving a message was sent by a specific user. This is a classic use case for digital signatures, which provide authenticity and non-repudiation. The process described in option C, using Public Key Infrastructure (PKI) to encrypt with the originating user's private key, is the definition of creating a digital signature. The recipient can then use the sender's public key to verify the signature, confirming that only the holder of the private key could have sent the message. This cryptographically proves the sender's identity and prevents them from denying they sent the message.
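The same flow can be illustrated with OpenSSL (file names are placeholders; a production app would typically use a platform crypto library or Cloud KMS asymmetric keys):

# Sender: sign the message with the private key
openssl dgst -sha256 -sign sender_private.pem -out message.sig message.txt
# Recipient: verify the signature with the sender's public key
openssl dgst -sha256 -verify sender_public.pem -signature message.sig message.txt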
Why Incorrect Options are Wrong

A. Tagging messages with a user identifier on the client side is insecure metadata. A malicious user can easily modify the client application to forge this tag.

B. Shared key (symmetric) encryption provides confidentiality, ensuring only those with the key can read the message. It does not prove origin, as anyone with the shared key could have created the message.

D. SSL/TLS secures the communication channel between the client and the server (data in transit). It does not cryptographically sign the individual messages to prove the user's identity to other chat participants.

References

1. Google Cloud Documentation, Cloud Key Management Service, "Digital signatures": "Digital signatures are commonly used to verify the integrity and authenticity of data. For example, you can use digital signatures to verify that a binary was released by a specific developer... A private key is used to create a digital signature, and the corresponding public key is used to validate the signature." This directly supports the mechanism described in option C for proving origin.

2. Google Cloud Documentation, Cloud Key Management Service, "Asymmetric encryption": This document distinguishes between symmetric and asymmetric keys, stating, "Asymmetric keys can be used for either asymmetric encryption or for creating digital signatures." This clarifies that the PKI approach (asymmetric keys) is the correct tool for signatures, unlike the symmetric approach in option B.

3. MIT OpenCourseWare, 6.857 Computer and Network Security, Fall 2017, Lecture 8: Public-Key Cryptography: The lecture notes state that a digital signature scheme provides "(1) Authentication (of origin), (2) Integrity (of message), (3) Non-repudiation (by origin)." The process is defined as Sign(SK, M) where SK is the secret (private) key, which aligns perfectly with option C's methodology.

4. Google Cloud Security Whitepaper, "Encryption in Transit in Google Cloud": This paper details how Google Cloud uses TLS to secure data in transit (Section: "Default encryption in transit"). This supports the reasoning for why option D is incorrect, as TLS secures the transport layer between two points (e.g., client and server), not the authenticity of the application-layer message itself for end-to-end verification between users.

Question 5

You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it and investigation could take up to a week. What should you do?

Options
A: Log in to a server, and iterate a fix locally
B: Change the instance group template to the previous one, and delete all instances.
C: Revert the source code change and rerun the deployment pipeline
D: Log into the servers with the bad code change, and swap in the previous code

Correct Answer:
C. Revert the source code change and rerun the deployment pipeline
Explanation
The most effective and reliable strategy is to treat the rollback as a new deployment. By reverting the problematic change in the source code repository, you maintain a clear and auditable history of the application's state. Rerunning the deployment pipeline leverages your existing automation to build and deploy the last known-good version of the code. This ensures the change is applied consistently across all instances in the self-healing group and aligns with DevOps and Continuous Integration/Continuous Deployment (CI/CD) best practices, where the source control system is the single source of truth.
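In practice the rollback is just another commit flowing through the same pipeline. A sketch, assuming Git with a main branch and a hypothetical commit SHA:

# Create a new commit that undoes the problematic change
git revert abc1234
# Pushing to the branch the pipeline watches triggers a normal deployment of the last known-good code
git push origin main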
Why Incorrect Options are Wrong

A. Logging in to a server to iterate a fix is a manual process that is not scalable and will be undone when the instance group's self-healing mechanism replaces the instance.

B. While changing the instance template would work, it's a manual infrastructure-level intervention. The root cause is the application code, and the best practice is to fix the source of truth (the code) and let the pipeline manage the infrastructure changes.

D. Manually swapping code on live servers is an anti-pattern. It is not repeatable, not auditable, and any changes will be lost during self-healing events or subsequent automated deployments.

References

1. Google Cloud Documentation, DevOps tech: Continuous delivery: "A key goal of continuous delivery is to make your release process a low-risk event that you can perform at any time and on demand. ... Because you are deploying smaller changes, you can more easily pinpoint and address bugs and roll back changes if necessary." This supports the principle of rolling back a problematic change through the established process.

Source: Google Cloud, "DevOps tech: Continuous delivery", Section: "What is continuous delivery?".

2. Google, Site Reliability Engineering, Chapter 8 - Release Engineering: "A key feature of our release system is the ability to quickly and safely roll back a release that is found to be bad. ... Rollbacks use the same infrastructure as rollouts, but in reverse." This highlights the best practice of using the same automated system (the pipeline) for rollbacks as for deployments, which is achieved by reverting the code and re-running the pipeline.

Source: Beyer, B., Jones, C., Petoff, J., & Murphy, N. R. (2016). Site Reliability Engineering: How Google Runs Production Systems. O'Reilly Media. Chapter 8, Section: "Rollout Policies".

3. Google Cloud Documentation, Managed instance groups (MIGs): "A managed instance group (MIG) ... maintains high availability of your apps by proactively keeping your VMs (instances) running. If a VM in the group stops, crashes, or is deleted... the MIG automatically recreates it in accordance with the group's instance template". This confirms that any manual changes made directly to an instance (as suggested in options A and D) will be lost.

Source: Google Cloud, "Managed instance groups (MIGs)", Section: "High availability".

Question 6

Your organization wants to control IAM policies for different departments independently, but centrally. Which approach should you take?
Options
A: Multiple Organizations with multiple Folders
B: Multiple Organizations, one for each department
C: A single Organization with Folder for each department
D: A single Organization with multiple projects, each with a central owner
Correct Answer:
A single Organization with Folder for each department
Explanation
The Google Cloud resource hierarchy is designed for this exact use case. A single Organization node serves as the root, enabling centralized governance and application of organization-wide IAM policies. Folders are used to group resources, such as projects, that share common IAM policies. By creating a Folder for each department, you can delegate administrative rights to departmental teams, allowing them to manage their own projects and resources independently within their Folder. This structure provides both the central control required by the organization and the departmental autonomy needed for efficient operations.
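A sketch of this structure using gcloud (the organization ID, folder ID, and group address are placeholders):

# Create a folder for a department under the organization
gcloud resource-manager folders create --display-name="Dept-A" --organization=123456789012
# Delegate project administration within that folder to the department's admin group
gcloud resource-manager folders add-iam-policy-binding 456789012345 \
    --member="group:dept-a-admins@example.com" \
    --role="roles/resourcemanager.projectCreator"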
Why Incorrect Options are Wrong

A. Multiple Organizations with multiple Folders: Using multiple Organizations breaks the principle of central control, as each Organization is a distinct root entity with its own separate policies.

B. Multiple Organizations, one for each department: This approach completely fragments governance, creating isolated silos for each department and making central IAM policy enforcement impossible.

D. A single Organization with multiple projects, each with a central owner: While this provides central control, it lacks the intermediate grouping layer (Folders) for departments, making policy management at scale inefficient and difficult to delegate.

References

1. Google Cloud Documentation, Resource Manager, "Cloud Platform resource hierarchy": "Folders are an additional grouping mechanism on top of projects... Folders are commonly used to model different departments, teams, or legal entities within a company. For example, a first level of folders could represent the major departments in your organization." This directly supports using folders to represent departments.

2. Google Cloud Documentation, Resource Manager, "Creating and managing folders", Section: "Using folders for access control": "You can use folders to isolate resources for different departments... Access to resources can be limited by department by assigning IAM roles at the folder level. All projects and folders within a parent folder inherit the IAM policies of that folder." This confirms that folders are the correct tool for delegating departmental control.

3. Google Cloud Documentation, IAM, "Policy inheritance": "The resource hierarchy for policy evaluation includes the organization, folders, projects, and resources... The child resource inherits the parent's policy." This explains the mechanism for central control from the Organization node downwards.

4. Google Cloud Architecture Framework, "Design a resource hierarchy for your Google Cloud landing zone": In the "Folder structure" section, a common pattern recommended is: "A folder for each department, such as Dept-A and Dept-B." This establishes the chosen answer as a documented best practice.

Question 7

A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network's origin. What should you do?
Options
A: Search for Create VM entry in the Stackdriver alerting console.
B: Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry.
C: In the logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry.
D: Connect to the GCE instance using project SSH Keys. Identify previous logins in system logs, and match these with the project owners list.
Correct Answer:
In the logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry.
Explanation
To investigate the origin of a newly created Google Cloud resource, the correct tool is Cloud Logging, which captures Admin Activity audit logs. These logs record API calls that modify the configuration or metadata of resources, such as creating a VPC network. By filtering the logs in the Logs Explorer for the resource type gce_network and the API method that creates the network (compute.networks.insert), you can pinpoint the exact log entry. This entry will contain the identity of the principal (user, group, or service account) that made the call, the source IP address, and the timestamp, thereby revealing the network's origin.
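The same audit-log query can be run from Cloud Shell with gcloud (the 30-day window is an arbitrary example):

# List Admin Activity entries for VPC network creation, including the caller identity
gcloud logging read \
  'resource.type="gce_network" AND protoPayload.methodName="v1.compute.networks.insert"' \
  --freshness=30d --format=json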
Why Incorrect Options are Wrong

A. The Cloud Monitoring alerting console is used for configuring and viewing alerts based on metrics, logs, and uptime checks, not for retrospectively searching historical audit logs for specific events.

B. Resource creation events are captured in Admin Activity logs, not Data Access logs. Data Access logs track when data within a resource is read or written, which is not relevant here.

D. Checking system logs on the GCE instance would only show who has logged into the virtual machine, not who created the underlying network or the instance itself. The creator may have never accessed the instance.

References

1. Cloud Audit Logs Overview: "Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions." This confirms that creating a network is an Admin Activity.

Source: Google Cloud Documentation, "Cloud Audit Logs", Overview section.

2. Querying Logs in Logs Explorer: The Logs Explorer allows for building queries to find specific log entries. A query to find network creation events would look like: resource.type="gce_network" AND protoPayload.methodName="v1.compute.networks.insert". This demonstrates the method described in the correct answer.

Source: Google Cloud Documentation, "Build queries in the Logs Explorer", Query for a resource type and log name section.

3. Compute Engine Audited Operations: The official documentation lists the specific API methods that are logged. For creating a network, the method is v1.compute.networks.insert. This validates that searching for an "insert" entry for a GCE Network is the correct procedure.

Source: Google Cloud Documentation, "Compute Engine audit logging information", Audited operations table.

Question 8

As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication. What should they do?
Options
A: Configure their replication to use UDP.
B: Configure a Google Cloud Dedicated Interconnect.
C: Restore their database daily using Google Cloud SQL.
D: Add additional VPN connections and load balance them.
E: Send the replicated transaction to Google Cloud Pub/Sub.
Correct Answer:
Configure a Google Cloud Dedicated Interconnect.
Explanation
The core issue is the use of a Cloud VPN connection over the public internet for a latency-sensitive and loss-intolerant workload like production database replication. The public internet provides "best-effort" delivery, which leads to unpredictable latency and packet loss, disrupting the replication process. Google Cloud Dedicated Interconnect establishes a private, direct physical connection to Google's network, bypassing the public internet. This provides a stable, low-latency, and high-bandwidth connection with a Service Level Agreement (SLA), making it the appropriate solution for reliable, enterprise-grade disaster recovery scenarios.
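After the physical Dedicated Interconnect is provisioned, the replication traffic is carried over a VLAN attachment to the VPC. A sketch, with all resource names as placeholders:

# Attach the VPC's Cloud Router to the dedicated interconnect in the replica's region
gcloud compute interconnects attachments dedicated create dr-attachment \
    --region=us-central1 \
    --router=dr-router \
    --interconnect=my-dedicated-interconnect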
Why Incorrect Options are Wrong

A. Configure their replication to use UDP.

UDP is an unreliable, connectionless protocol that does not guarantee packet delivery or order. Database replication requires the guaranteed, in-order delivery provided by TCP to prevent data corruption.

C. Restore their database daily using Google Cloud SQL.

This is a backup-and-restore strategy, not replication. A daily restore implies a Recovery Point Objective (RPO) of up to 24 hours, which is typically unacceptable for a production database DR plan.

D. Add additional VPN connections and load balance them.

While this can increase aggregate throughput and add redundancy, all connections still traverse the unreliable public internet. It does not solve the fundamental problems of inconsistent latency and packet loss for a single replication stream.

E. Send the replicated transaction to Google Cloud Pub/Sub.

This introduces significant architectural complexity, requiring custom producers and consumers. It does not address the underlying network instability between the on-premises data center and GCP, which is the root cause of the problem.

References

1. Google Cloud Documentation, "Choosing a Network Connectivity product": This document directly compares Cloud VPN and Cloud Interconnect. It states, "Cloud Interconnect provides low latency, high availability connections that enable you to reliably transfer data between your on-premises and Virtual Private Cloud (VPC) networks." It also notes for Cloud VPN, "Because this connection traverses the internet, its performance...can be inconsistent." This supports choosing Interconnect (B) over VPN (D).

2. Google Cloud Documentation, "Dedicated Interconnect overview": This page highlights the key benefits of Dedicated Interconnect: "Lower latency. Traffic between your on-premises and VPC networks doesn't touch the public internet. Instead, it travels over a dedicated connection with lower latency." This directly addresses the problem described in the question.

3. MySQL 8.0 Reference Manual, "Section 19.1.1 Replication Implementation Overview": The MySQL documentation describes the replication process where a replica's I/O thread connects to the source server over the network to read the binary log. This connection relies on a stable, reliable transport protocol like TCP, making UDP (A) an unsuitable choice.

4. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. In Chapter 3, Section 3.3, UDP is described as providing an "unreliable data transfer service," while Section 3.5 describes TCP's "reliable data transfer" service, reinforcing why UDP is incorrect for database replication.

Question 9

Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?
Options
A: Hash all data using SHA256
B: Encrypt all data using elliptic curve cryptography
C: De-identify the data with the Cloud Data Loss Prevention API
D: Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers
Correct Answer:
De-identify the data with the Cloud Data Loss Prevention API
Explanation
The most appropriate and recommended approach is to use the Cloud Data Loss Prevention (DLP) API. This is a fully managed Google Cloud service specifically designed to discover, classify, and de-identify sensitive data like Personally Identifiable Information (PII) and Payment Card Information (PCI) within unstructured text streams or storage. Cloud DLP uses pre-trained classifiers (infoType detectors) to accurately identify sensitive data and offers various transformation techniques such as redaction, masking, and tokenization. This allows for the sanitization of sensitive information while preserving the utility of the non-sensitive data for analysis, directly addressing the question's requirements before the data is stored in Bigtable.
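A minimal sketch of calling the DLP API's content:deidentify method before writing to Bigtable (the project ID and sample text are placeholders; a real pipeline would use a client library):

curl -s -X POST "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "inspectConfig": {"infoTypes": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}, {"name": "CREDIT_CARD_NUMBER"}]},
    "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
      {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}]}},
    "item": {"value": "Customer chat: my card is 4111 1111 1111 1111, email jane@example.com"}
  }'

The response returns the same text with each detected value replaced by its infoType name, so the sanitized version can be stored for analysis.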
Why Incorrect Options are Wrong

A. Hash all data using SHA256: Hashing the entire log entry would render the non-sensitive data unusable for the stated purpose of retention and analysis, as it's a one-way, irreversible process.

B. Encrypt all data using elliptic curve cryptography: Encryption protects data but does not sanitize or de-identify it. The sensitive information still exists in an encrypted form and is not removed or transformed for analysis.

D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers: This approach is brittle, difficult to maintain, and prone to errors. It cannot reliably detect all formats of sensitive data (e.g., international phone numbers, names) or validate them (e.g., credit card checksums), unlike the managed and sophisticated detectors in Cloud DLP.

References

1. Cloud Data Loss Prevention Documentation - De-identification of sensitive data: "You can use Cloud DLP to de-identify sensitive data in your content. De-identification is the process of removing identifying information from data. Its goal is to allow the sharing and use of personal data while protecting privacy." This page details methods like redaction and masking, which are ideal for the scenario.

Source: Google Cloud Documentation, "De-identification of sensitive data".

2. Cloud Data Loss Prevention Documentation - InfoType detector reference: This document lists the extensive built-in detectors for PII and PCI data, such as CREDIT_CARD_NUMBER, EMAIL_ADDRESS, and PHONE_NUMBER. This demonstrates why Cloud DLP is superior to custom regular expressions.

Source: Google Cloud Documentation, "InfoType detector reference".

3. Google Cloud Architecture Framework - Security, privacy, and compliance: "Use Cloud DLP to discover, classify, and redact sensitive data. For example, you can use Cloud DLP to find and redact credit card numbers from a chat transcript before you store the transcript." This directly cites the use case described in the question as a recommended practice.

Source: Google Cloud Architecture Framework, Security pillar, "Implement least privilege".

Question 10

You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution path and persists across sessions?
Options
A: ~/bin
B: Cloud Storage
C: /google/scripts
D: /usr/local/bin
Correct Answer:
~/bin
Explanation
Google Cloud Shell provides a 5 GB persistent disk mounted as your $HOME directory. This ensures that any files and configurations within $HOME are preserved across sessions. The Cloud Shell environment is pre-configured so that if a ~/bin directory exists, it is automatically added to the system's execution path ($PATH). Therefore, placing a custom utility in ~/bin meets both requirements: the file will persist for future use, and it can be executed directly by name from any location without specifying the full path.
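A quick sketch of installing a hypothetical utility called my-utility:

mkdir -p ~/bin
cp ./my-utility ~/bin/ && chmod +x ~/bin/my-utility
# The default Debian ~/.profile adds $HOME/bin to $PATH when the directory exists;
# re-source it (or start a new session) so the change takes effect immediately
source ~/.profile
my-utility --help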
Why Incorrect Options are Wrong

B. Cloud Storage: This is an object storage service, not a local filesystem directory in the execution path. The utility would need to be downloaded to the VM before it could be run.

C. /google/scripts: This directory is part of the ephemeral Cloud Shell virtual machine instance. Any files placed here will be lost when your session ends.

D. /usr/local/bin: While this directory is in the default execution path, it resides on the ephemeral VM's filesystem. It does not persist across sessions, so the utility would be deleted.

References

1. Google Cloud Documentation, "How Cloud Shell works", Section: Persistent disk storage: "Cloud Shell provisions 5 GB of persistent disk storage on your temporarily allocated virtual machine. This storage is located at your $HOME directory and persists between sessions... Any modifications you make to your home directory, including installed software, scripts, and user configuration files like .bashrc and .vimrc, persist between sessions."

2. Google Cloud Documentation, "Cloud Shell features", Section: A pre-configured gcloud CLI and other utilities: This section implies a standard Linux environment. In standard Linux configurations (like the Debian base for Cloud Shell), the default .profile script adds $HOME/bin to the $PATH if the directory exists. This behavior, combined with the persistence of the $HOME directory, makes ~/bin the correct location.

3. Google Cloud Documentation, "Customizing your Cloud Shell environment": This guide explains how to make persistent customizations. It states, "When you start Cloud Shell, a bash shell is run and any commands in ~/.bashrc and ~/.profile are executed." This confirms that standard shell startup scripts, which typically configure the path to include ~/bin, are honored and persist.

Question 11

You want to create a private connection between your instances on Compute Engine and your on- premises data center. You require a connection of at least 20 Gbps. You want to follow Google- recommended practices. How should you set up the connection?
Options
A: Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
B: Create a VPC and connect it to your on-premises data center using a single Cloud VPN.
C: Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.
D: Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.
Correct Answer:
Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
Explanation
The requirement is for a private, high-bandwidth connection (at least 20 Gbps) between an on-premises data center and a Google Cloud VPC. Dedicated Interconnect is the Google-recommended service for this use case. It provides a direct, private, physical connection to Google's network. To achieve the 20 Gbps throughput requirement, you can provision two 10 Gbps Dedicated Interconnect circuits. This solution connects your on-premises environment directly to your VPC, providing the necessary performance and privacy.
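A sketch of ordering that capacity (the interconnect name, customer name, and location are placeholders); requesting two 10 Gbps links provides the 20 Gbps aggregate:

gcloud compute interconnects create dc-to-gcp \
    --customer-name="Example Corp" \
    --interconnect-type=DEDICATED \
    --link-type=LINK_TYPE_ETHERNET_10G_LR \
    --requested-link-count=2 \
    --location=LOCATION_NAME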
Why Incorrect Options are Wrong

B. A single Cloud VPN tunnel provides a maximum of 3 Gbps of bandwidth, which does not meet the 20 Gbps requirement.

C. Cloud CDN is a content delivery network used for caching and distributing web content to users globally; it is not a service for establishing private network connectivity.

D. This option is incorrect for two reasons: Cloud CDN is the wrong service for this purpose, and a single Cloud VPN does not meet the bandwidth requirement.

References

1. Google Cloud Documentation, "Choosing a Network Connectivity product": In the comparison table under "Features," Cloud Interconnect is listed with a bandwidth of "10 Gbps or 100 Gbps per link," suitable for high-throughput needs. In contrast, Cloud VPN is listed with a bandwidth of "Up to 3 Gbps per tunnel." This directly supports the choice of Interconnect over VPN for the 20 Gbps requirement.

2. Google Cloud Documentation, "Dedicated Interconnect overview": This document states, "Dedicated Interconnect provides a direct physical connection between your on-premises network and Google's network... Connections are offered as one or more 10-Gbps or 100-Gbps Ethernet connections." This confirms that multiple 10 Gbps connections can be used to meet the 20 Gbps requirement.

3. Google Cloud Documentation, "Cloud CDN overview": This document describes Cloud CDN as a service that "uses Google's global edge network to bring content closer to your users." This clarifies that its purpose is content distribution, not establishing a private network link for backend services, making options C and D incorrect.

Question 12

You are analyzing and defining business processes to support your startup's trial usage of GCP, and you don't yet know what consumer demand for your product will be. Your manager requires you to minimize GCP service costs and adhere to Google best practices. What should you do?
Options
A: Utilize free tier and sustained use discounts. Provision a staff position for service cost management.
B: Utilize free tier and sustained use discounts. Provide training to the team about service cost management.
C: Utilize free tier and committed use discounts. Provision a staff position for service cost management.
D: Utilize free tier and committed use discounts. Provide training to the team about service cost management.
Correct Answer:
Utilize free tier and committed use discounts. Provide training to the team about service cost management.
Explanation
Google Cloud's best practices for cost management are centered around the principles of FinOps, which promotes a culture of cost accountability across the organization. For a startup with unknown future demand, the most effective strategy involves two key elements. First, providing training to the entire team ensures that developers and operators understand the cost implications of their architectural choices. Second, while demand is unpredictable now, the business process should be built to proactively identify a stable, baseline workload as soon as it emerges. Once identified, applying Committed Use Discounts (CUDs) to this baseline will yield the most significant savings (up to 70%). This proactive approach of planning for CUDs is superior to passively relying on less impactful, automatic discounts.
Why Incorrect Options are Wrong

A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management.

Hiring a dedicated staff position for a startup in a trial phase is not cost-effective and contradicts the goal of minimizing costs.

B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.

Sustained Use Discounts (SUDs) are automatic and less impactful than CUDs. A best-practice business process should be proactive, planning to use the most effective tools, not just relying on passive, automatic ones.

C. Utilize free tier and committed use discounts. Provision a staff position for service cost management.

As with option A, provisioning a dedicated staff position is an unnecessary expense for a startup and is not a recommended best practice for this scenario.

References

1. Google Cloud Cost Management Solutions: The official documentation outlines key principles, including empowering teams and using intelligent pricing models. It states, "Empower your teams with the training, resources, and tools they need to operate with cost-efficiency" and "Take advantage of intelligent recommendations to... use pricing models like committed use discounts to optimize costs." This directly supports the combination of training and CUDs in option D. (Source: Google Cloud, "Cost Management", Section: "Key principles of cloud cost management").

2. Committed Use Discounts (CUDs) Documentation: CUDs are Google's primary tool for reducing costs on predictable workloads. The documentation states, "Committed use discounts (CUDs) provide deeply discounted prices in exchange for your commitment... The discounts are ideal for workloads with predictable resource needs." A best practice business process involves identifying these predictable needs and applying CUDs. (Source: Google Cloud, "Committed use discounts", Overview section).

3. Sustained Use Discounts (SUDs) Documentation: This page explains that SUDs are automatic and apply when resources run for a significant portion of the month. While beneficial, they are a passive mechanism and have been largely superseded by the more strategic CUDs for vCPU and memory, making them a less central part of a proactive cost management process. (Source: Google Cloud, "Sustained use discounts", Overview section).

Question 13

You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be verified before deploying to production. What should you do?
Options
A: Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back.
B: Use Spinnaker to deploy builds to production and run tests on production deployments.
C: Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout.
D: Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
Correct Answer:
Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
Explanation
This option describes a robust and standard Continuous Integration/Continuous Deployment (CI/CD) pipeline. It correctly separates the environments, using a staging environment for verification, which directly addresses the question's core requirement to ensure that code changes can be verified before deploying to production. Using Git tags (staging, production) is a common and effective practice to trigger deployments of specific, immutable versions of the code to the corresponding environments. This ensures that the exact code that was tested in staging is the code that gets deployed to production, following a successful verification process.
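A sketch of the tag-driven promotion flow (tag names and dates are illustrative):

# Mark the commit for staging; Jenkins, monitoring tags, deploys it to the staging environment
git tag -a staging-2025.06.01 -m "Candidate build for staging verification"
git push origin staging-2025.06.01
# After verification passes, promote the same commit by adding a production tag
git tag -a production-2025.06.01 -m "Verified in staging"
git push origin production-2025.06.01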
Why Incorrect Options are Wrong

A. Red/black (or blue/green) is a production deployment strategy that minimizes downtime. It does not, by itself, provide a pre-production verification stage, which is the primary requirement.

B. Running tests on production deployments is a high-risk practice. The goal is to find and fix issues before they reach production and impact users, not after.

C. A canary deployment (releasing to 10% of users) is a strategy for a phased rollout to production. Verification happens on a subset of live users, not in a dedicated test environment before production.

References

1. Google Cloud Documentation - CI/CD on Google Cloud: This documentation outlines the typical stages of a software delivery pipeline: Source, Build, Test, and Deploy. Option D aligns perfectly with this model by including a dedicated "Test" stage (in the staging environment) before the final "Deploy" stage (to production). The document emphasizes that "Each stage acts as a gate that vets a new change for quality."

Source: Google Cloud Documentation, "CI/CD modernization: A guide to building a software delivery pipeline", Section: "Stages of a software delivery pipeline".

2. Google Cloud Solutions - Continuous deployment to GKE using Jenkins: This official tutorial demonstrates a multi-environment pipeline. It uses separate Git branches (dev, production) to trigger deployments to different environments. Using tags, as described in option D, is an analogous and widely accepted best practice for managing releases, where a specific tag triggers the promotion of a build through the pipeline from staging to production.

Source: Google Cloud Solutions, "Continuous deployment to GKE using Jenkins", Section: "Understanding the application development and deployment workflow".

3. Google Cloud Documentation - Release and deployment strategies: This page describes strategies like blue/green (red/black) and canary deployments. It positions them as methods for the final step of deploying to production to reduce risk, which confirms that they are distinct from the pre-production verification step described in option D.

Source: Google Cloud Documentation, "Application deployment and testing strategies", Sections: "Blue/green deployments" and "Canary deployments".

Question 14

You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you do?
Options
A: Grant your colleague the IAM role of project Viewer
B: Perform a rolling restart on the instance group
C: Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys
D: Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys
Correct Answer:
Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys
Explanation
The constant restarting of instances in a Managed Instance Group (MIG) with a health check configured is characteristic of the autohealing feature. The MIG detects that the instances are unhealthy (failing the health check within 5 seconds) and attempts to "heal" the group by recreating them. To stop this restart loop and allow for debugging, the autohealing mechanism must be paused by disabling or removing the health check from the MIG. Once the instances are stable, adding the colleague's public SSH key to the project-wide metadata is a standard and effective method to grant them SSH access to all instances in the project for troubleshooting.
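A minimal sketch of the two steps, assuming a zonal MIG named my-mig and a key file prepared in advance:

# Stop the autohealing restart loop by clearing the MIG's health-check policy
gcloud compute instance-groups managed update my-mig \
    --zone=us-central1-a \
    --clear-autohealing
# Add the colleague's public key to project-wide metadata; the file should contain the
# existing project ssh-keys entries plus the new "username:ssh-rsa ..." line
gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=ssh-keys.txt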
Why Incorrect Options are Wrong

A. The IAM Viewer role is a read-only role and does not grant the necessary permissions (compute.instances.osLogin or compute.instances.setMetadata) to connect to a VM via SSH.

B. Performing a rolling restart will not solve the issue. The new instances created during the restart will still fail the health check and enter the same restart loop.

D. The question explicitly states that autoscaling is already disabled. The issue is caused by autohealing, which is a separate feature from autoscaling, even though both are part of MIGs.

References

1. Autohealing and Health Checks: Google Cloud documentation states, "Autohealing relies on a health check to determine if an application on an instance is responding as expected... If the health check determines that an application has failed, the group automatically recreates that instance." To stop this, the health check must be removed.

Source: Google Cloud Documentation, "Setting up health checking and autohealing," Section: "How autohealing works."

2. Updating a Managed Instance Group: To disable the health check, you must update the instance group's configuration. The documentation outlines procedures for updating MIGs, which includes removing an associated health check.

Source: Google Cloud Documentation, "Updating managed instance groups (MIGs)," Section: "Updating a health check for a MIG."

3. Managing SSH Keys: Project-wide public SSH keys can be used to grant access to all instances in a project. "When you add a public SSH key to a project, any user who has the private key can connect to any VM in that project that is configured to accept project-wide keys."

Source: Google Cloud Documentation, "Managing SSH keys in metadata," Section: "Adding and removing project-wide public SSH keys."

4. IAM Roles for Compute Engine: The documentation for the Viewer role (roles/viewer) confirms it does not include permissions like compute.instances.setMetadata or IAP-based access, which are required for SSH connections.

Source: Google Cloud Documentation, "Compute Engine IAM roles," Section: "Project roles."

Question 15

Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Kubernetes Engine for workload orchestration. Parts of your architecture must also be PCI DSS compliant. Which of the following is most accurate?
Options
A: App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
B: Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting.
C: Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.
D: All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.
Correct Answer:
Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.
Explanation
The relationship between a cloud provider and a customer regarding compliance standards like PCI DSS is governed by a shared responsibility model. Google ensures that the underlying infrastructure and specific services, including Google Kubernetes Engine (GKE), meet PCI DSS requirements, as evidenced by their Attestation of Compliance (AoC). However, this does not automatically make a customer's application compliant. The customer is responsible for correctly configuring their GKE clusters, networks, IAM policies, and applications using the tools Google provides. Therefore, GKE and GCP provide the necessary components and capabilities for a customer to build and maintain a PCI DSS-compliant environment, but the customer must implement them correctly.
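For example, several of the GKE features relevant to PCI DSS segmentation can be enabled at cluster creation (the cluster name, zone, and CIDR are placeholders):

# Private, VPC-native cluster with Kubernetes network policy enforcement enabled
gcloud container clusters create pci-cluster \
    --zone=us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.32/28 \
    --enable-network-policy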
Why Incorrect Options are Wrong

A. This is incorrect. Google's list of services covered by its PCI DSS compliance includes many compute platforms, such as Compute Engine, Cloud Functions, and Google Kubernetes Engine, not just App Engine.

B. This is incorrect. GKE can be configured to meet PCI DSS requirements. Using features like VPC-native clusters with network policies, IAM for GKE, and private clusters allows for the necessary network segmentation and isolation.

D. This is a common misconception. Not all Google Cloud services are in scope for PCI DSS. Furthermore, even for in-scope services, the platform's compliance does not confer compliance on the customer's application; it is a shared responsibility.

References

1. Google Cloud Compliance Offerings: PCI DSS: "Google Cloud undergoes a third-party audit to certify individual products against the PCI DSS. This means that these services provide an infrastructure on top of which customers can build their own service or application that stores, processes, or transmits cardholder data... PCI DSS compliance is a shared responsibility." The page lists Google Kubernetes Engine as an in-scope product.

Source: Google Cloud Documentation, "Compliance offerings: PCI DSS," Section: "Shared Responsibility."

2. PCI DSS compliance on Google Kubernetes Engine: "This guide describes how to build a Payment Card Industry Data Security Standard (PCI DSS) compliant environment on Google Kubernetes Engine (GKE)... GKE provides many features that can help you meet the PCI DSS requirements, such as private clusters, VPC-native clusters, network policy, Google Groups for GKE, and GKE Sandbox."

Source: Google Cloud Architecture Center, "PCI DSS compliance on Google Kubernetes Engine," Introduction.

3. Shared responsibility for PCI DSS compliance on Google Cloud: "Although Google Cloud is a PCI DSS-validated service provider, you are still responsible for your own PCI DSS compliance for your applications on Google Cloud."

Source: Google Cloud Architecture Center, "PCI DSS on Google Cloud," Section: "Shared responsibility."

Question 16

Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time. You want to use Google-recommended practices to detect anomalies in your company data. What should you do?
Options
A: Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.
B: Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.
C: Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.
D: Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.
Correct Answer:
Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.
Explanation
The Google-recommended practice for this scenario involves a two-stage process. First, data from on-premises systems should be ingested and staged in a scalable, durable, and cost-effective object store, for which Cloud Storage is the standard choice. Second, Cloud Dataprep is Google's intelligent, serverless data preparation service specifically designed for visually exploring, cleaning, and preparing data without writing code. It automatically profiles data to detect anomalies, outliers, and missing values, and suggests transformations, directly addressing the user's need to handle degraded data and detect anomalies efficiently. This approach decouples data ingestion from processing and utilizes the purpose-built tool for data quality.
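A sketch of the staging step (the bucket name and paths are placeholders):

# Create a regional staging bucket and upload the exported on-premises files
gsutil mb -l us-central1 gs://example-reporting-staging
gsutil -m cp -r ./onprem-exports gs://example-reporting-staging/raw/

Once the files are in Cloud Storage, a Cloud Dataprep flow can be pointed at the bucket to profile the data and surface anomalies visually.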
Why Incorrect Options are Wrong

A. Cloud Datalab (now part of Vertex AI Workbench) is a notebook-based environment for data scientists. While powerful, it requires writing custom code for data cleaning and anomaly detection, making it less efficient for this task than a dedicated visual tool.

C. Connecting a cloud service directly to multiple on-premises systems for processing is complex and not a standard ingestion pattern for batch data. Furthermore, Cloud Datalab is not the optimal tool for this use case.

D. While Cloud Dataprep is the correct tool, the recommended architectural pattern is to first land the data in a central staging area like Cloud Storage, rather than connecting directly to multiple on-premises sources.

References

1. Cloud Dataprep Overview: "Cloud Dataprep by Trifacta is an intelligent data service for visually exploring, cleaning, and preparing structured and unstructured data... With each UI interaction, Cloud Dataprep automatically suggests and predicts the next data transformation, which can help you reduce the time to get insights." This highlights its purpose for cleaning and anomaly detection.

Source: Google Cloud Documentation, "Cloud Dataprep overview".

2. Cloud Storage as a Staging Area: "Cloud Storage is an ideal place to land data from other clouds or on-premises before it's moved into a data warehouse like BigQuery or a data lakehouse like Dataplex." This confirms the pattern of uploading files to Cloud Storage first.

Source: Google Cloud Documentation, "Build a modern, open, and intelligent data cloud".

3. Data Lifecycle on Google Cloud: Reference architectures for analytics pipelines consistently show data being ingested from on-premises sources into Cloud Storage before being processed by services like Cloud Dataprep, Dataflow, or Dataproc.

Source: Google Cloud Architecture Center, "Data lifecycle on Google Cloud".

4. Comparing Data Preparation Tools: "Cloud Dataprep is a good choice when you want to use a UI to build your data preparation pipelines... Vertex AI Workbench is a good choice when you want to use a notebook to explore and prepare your data." This distinguishes the use cases, positioning Dataprep as the correct choice for the visual, UI-driven task described in the question.

Source: Google Cloud Documentation, "Data preparation options".

Question 17

Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy?
Options
A: The effective policy is determined only by the policy set at the node
B: The effective policy is the policy set at the node and restricted by the policies of its ancestors
C: The effective policy is the union of the policy set at the node and policies inherited from its ancestors
D: The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors
Show Answer
Correct Answer:
The effective policy is the union of the policy set at the node and policies inherited from its ancestors
Explanation
The Google Cloud resource hierarchy allows for the inheritance of Identity and Access Management (IAM) policies. The effective policy for any given resource is the combination of the policy set directly on that resource and the policies inherited from its ancestors (Project, Folder(s), and Organization). This inheritance model is additive. Specifically, the effective policy is the union of all applicable policies. This means if a user is granted a permission at a higher level in the hierarchy (e.g., a folder), they will have that permission on all resources within that folder, in addition to any permissions granted at the project or resource level.
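
A short illustration of additive inheritance, assuming a hypothetical folder ID, group, and project ID:

# Grant BigQuery Data Viewer on a folder; every project under the folder inherits it.
gcloud resource-manager folders add-iam-policy-binding 123456789012 \
    --member="group:analysts@example.com" --role="roles/bigquery.dataViewer"

# The binding does not appear in the project-level policy itself, but it is part of
# the effective (union) policy for every resource in projects under that folder.
gcloud projects get-iam-policy example-project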
Why Incorrect Options are Wrong

A. This is incorrect because it ignores the fundamental principle of policy inheritance in the Google Cloud resource hierarchy. Policies from parent nodes are always inherited by child nodes.

B. This is incorrect because the standard IAM policy model is additive, not restrictive. Permissions granted at an ancestor level are added to, not used to restrict, permissions at a lower level. While Deny Policies can restrict permissions, the fundamental mechanism for calculating the effective allow policy is a union.

D. This is incorrect because an intersection would require a permission to be granted at every level of the hierarchy to be effective. This would be overly restrictive and is not how Google Cloud IAM operates.

References

1. Google Cloud Documentation, IAM Overview, "Policy inheritance": "The effective policy for a resource is the union of the policy set on the resource and the policy inherited from its parent. This policy evaluation is transitive. For example, if you set a policy at the Organization level, it is inherited by all of its child folders and projects. If you set a policy at the project level, it is inherited by all of its child resources."

2. Google Cloud Documentation, Resource Manager, "Overview of the resource hierarchy", Section: "IAM policy inheritance": "The allow policy for a resource is the union of the allow policy set on the resource and the allow policy inherited from its ancestors. This rule applies to the entire resource hierarchy."

3. Google Cloud Documentation, IAM, "Hierarchical policy evaluation": "The full set of permissions for a resource is the union of permissions granted on that resource and permissions inherited from higher up in the hierarchy. In other words, permissions are inherited through the resource hierarchy."

Question 18

You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises systems and Google Cloud until the migration is completed. You want to make sure all your on-premises systems remain reachable during this period. How should you organize your networking in Google Cloud?
Options
A: Use the same IP range on Google Cloud as you use on-premises
B: Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises
C: Use an IP range on Google Cloud that does not overlap with the range you use on-premises
D: Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises
Show Answer
Correct Answer:
Use an IP range on Google Cloud that does not overlap with the range you use on-premises
Explanation
For a hybrid network connection using Cloud VPN to function correctly, the IP address ranges of the connected networks (on-premises and Google Cloud VPC) must be unique and not overlap. If the IP ranges overlap, routes become ambiguous. When a VM in the VPC tries to send a packet to an IP address that exists in both networks, the routing decision is unpredictable, often resulting in the packet being routed locally and never reaching the intended on-premises destination. Using a distinct IP range on Google Cloud ensures that Cloud Router can correctly advertise and learn routes via BGP, enabling reliable and predictable communication between the two environments.
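
A minimal sketch, assuming the on-premises network uses 10.0.0.0/16 and that the VPC, subnet, and region names are placeholders:

# Choose a VPC subnet range that does not overlap with the on-premises 10.0.0.0/16.
gcloud compute networks create migration-vpc --subnet-mode=custom
gcloud compute networks subnets create migration-subnet \
    --network=migration-vpc --region=us-central1 --range=10.128.0.0/20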
Why Incorrect Options are Wrong

A. Use the same IP range on Google Cloud as you use on-premises

This creates a direct IP address conflict, which breaks routing between the two networks and makes systems unreachable.

B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises

The overlapping primary IP range still causes the fundamental routing conflict, making the connection unreliable regardless of the secondary range.

D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises

Introducing an overlapping secondary range re-creates the routing conflict for any resource using that range, leading to connectivity failures.

References

1. Google Cloud Documentation, Cloud VPN Overview, "Network and subnet considerations": "The IP ranges for subnets in your VPC network cannot overlap with the IP ranges used in your on-premises network. If they do, traffic routing issues can occur." This section explicitly states the core requirement for non-overlapping IP ranges.

2. Google Cloud Documentation, "Best practices and reference architectures for VPC design," Section: "Hybrid networking": "When you connect your VPC network to another network, such as an on-premises network... the subnet IP ranges in the VPC network and the other network can't overlap. If you have overlapping IP addresses, traffic doesn't route correctly." This reinforces the principle as a best practice for any hybrid connectivity scenario.

3. Google Cloud Documentation, Cloud Router Overview, "BGP sessions": Cloud Router uses BGP to exchange routes between your VPC network and your on-premises network. The BGP protocol relies on unique network prefixes (IP ranges) to build a coherent routing table. Overlapping prefixes make it impossible for BGP to establish predictable routes.

Question 19

You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What should you do?
Options
A: Point gcloud datastore create-indexes to your configuration file
B: Upload the configuration file to App Engine's default Cloud Storage bucket, and have App Engine detect the new indexes
C: In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file
D: Create an HTTP request to the built-in python module to send the index configuration file to your application
Show Answer
Correct Answer:
Point gcloud datastore create-indexes to your configuration file
Explanation
The standard and recommended method for deploying Cloud Datastore indexes defined in a configuration file is by using the gcloud command-line tool. The command gcloud datastore create-indexes is specifically designed for this purpose. It takes the path to the index.yaml file as an argument and initiates the process of creating the specified indexes in your project's Cloud Datastore instance. This operation is non-destructive; it only adds new indexes and does not affect existing ones.
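
A short sketch of the deployment step, run from the directory containing index.yaml; newer SDK releases expose the same operation as gcloud datastore indexes create:

# Deploy the new composite indexes (non-destructive; existing indexes are untouched).
gcloud datastore create-indexes index.yaml

# Optionally, once the new indexes are serving, remove any no longer listed in the file.
gcloud datastore cleanup-indexes index.yaml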
Why Incorrect Options are Wrong

B. App Engine does not automatically detect or apply index configuration files from a Cloud Storage bucket. Index deployment is an explicit administrative action.

C. Deleting current indexes is unnecessary and potentially harmful, as it could break existing application queries. The correct command updates indexes without requiring deletion.

D. Index management is a control plane operation performed via tools like gcloud or the GCP Console, not through runtime HTTP requests to application modules.

References

1. Google Cloud Documentation - Managing Datastore indexes: "To upload your indexes to Datastore, run the following command from the directory where your index.yaml file is located: gcloud datastore create-indexes index.yaml". This page explicitly details the command in option A as the correct procedure.

Source: Google Cloud, "Managing Datastore indexes", section "Uploading indexes".

2. Google Cloud SDK Documentation - gcloud datastore create-indexes: This official command reference describes the usage: "gcloud datastore create-indexes INDEXFILE ... This command creates new Datastore indexes from a local file." This directly validates option A.

Source: Google Cloud SDK Documentation, gcloud datastore create-indexes.

3. Google Cloud Documentation - index.yaml Configuration: "When you have finished creating your index.yaml file, deploy it to Datastore. See Managing Datastore Indexes for details." This confirms that index.yaml is the configuration file and it must be actively deployed.

Source: Google Cloud, "Configuring Datastore indexes with index.yaml".

Question 20

You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage. What should you do?
Options
A: Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.
B: Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster.
C: Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
D: Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
Show Answer
Correct Answer:
Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
Explanation
This architecture represents a standard active-passive disaster recovery (DR) pattern. A Global External HTTP(S) Load Balancer is used to direct traffic to a primary managed instance group (MIG) in one region. For DR, a second, standby MIG is deployed in a different region. The load balancer's health checks continuously monitor the primary MIG. If a regional outage makes the primary MIG unhealthy, the load balancer automatically fails over, redirecting all user traffic to the healthy standby MIG in the secondary region. Using instance groups, rather than single instances, provides high availability and scalability within each region.
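
An abridged sketch of the backend side of this pattern, assuming an existing instance template and global backend service (names and regions are placeholders); the health check, URL map, target proxy, and forwarding rule are omitted:

# Primary and standby regional MIGs built from the same instance template.
gcloud compute instance-groups managed create app-mig-primary \
    --template=app-template --size=3 --region=us-central1
gcloud compute instance-groups managed create app-mig-standby \
    --template=app-template --size=3 --region=us-east1

# Attach both MIGs to one global backend service; health checks drive cross-region failover.
gcloud compute backend-services add-backend app-backend \
    --instance-group=app-mig-primary --instance-group-region=us-central1 --global
gcloud compute backend-services add-backend app-backend \
    --instance-group=app-mig-standby --instance-group-region=us-east1 --global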
Why Incorrect Options are Wrong

A. Using single Compute Engine instances in each region creates single points of failure and lacks the scalability and auto-healing benefits provided by managed instance groups.

B. This describes a hybrid cloud DR strategy, not a multi-region cloud-native one. The primary deployment on a single instance is not a resilient or highly available design.

D. Deploying in separate projects adds unnecessary operational and networking complexity (e.g., requiring Shared VPC or VPC Peering) without providing any direct benefit for the stated DR goal.

References

1. Google Cloud Documentation, Disaster recovery planning guide: This guide outlines DR patterns. The "Warm standby (active-passive)" pattern for applications describes using Cloud Load Balancing to fail over between regions. It states, "You configure a health check that determines when to fail over to the VMs in [REGIONB]." This directly supports the architecture in option C.

Source: Google Cloud, "Disaster recovery scenarios for applications", Section: "Warm standby (active-passive)".

2. Google Cloud Documentation, Cloud Load Balancing overview: The documentation for the Global External HTTP(S) Load Balancer explicitly states its multi-region capabilities. "Global external HTTP(S) Load Balancer is a global load balancer... You can use this load balancer to route traffic to backends in multiple regions." This confirms it is the correct tool for cross-region failover.

Source: Google Cloud, "Cloud Load Balancing overview", Section: "Types of load balancers".

3. Google Cloud Documentation, Managed instance groups (MIGs): This document explains the benefits of MIGs over single instances, including high availability through autohealing and scalability through autoscaling, which are critical for a robust production architecture.

Source: Google Cloud, "Managed instance groups (MIGs)", Section: "Benefits of MIGs".

4. Google Cloud Architecture Framework, Reliability pillar: This framework recommends deploying applications across multiple zones and regions to protect against failures. "To protect against regional failure, you need to deploy your application in multiple regions... you can use Cloud Load Balancing to provide a single IP address for users."

Source: Google Cloud Architecture Framework, "Reliability pillar", Section: "Design for disaster recovery".

Question 21

You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public Internet. What should you do?
Options
A: Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database.
B: Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.
C: Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.
D: Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
Show Answer
Correct Answer:
Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
Explanation
The App Engine flexible environment allows applications to be deployed within a Virtual Private Cloud (VPC) network. This is a critical capability for this scenario. By placing the application in a VPC, it can leverage Google Cloud's networking services, such as Cloud VPN. Cloud VPN establishes a secure, private IPsec tunnel between the Google Cloud VPC and the on-premises network. This configuration enables the App Engine application to communicate with the on-premises database using private IP addresses, fulfilling the core security requirement that the database is not exposed to the public internet.
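
An abridged HA VPN sketch under assumed names, addresses, and secrets; the second tunnel and the Cloud Router BGP sessions required for the HA SLA are omitted:

# Represent the on-premises VPN endpoint, then create an HA VPN gateway and one tunnel.
gcloud compute external-vpn-gateways create on-prem-gw --interfaces=0=203.0.113.10
gcloud compute vpn-gateways create gcp-ha-gw --network=app-vpc --region=us-central1
gcloud compute routers create app-router --network=app-vpc --region=us-central1 --asn=65010
gcloud compute vpn-tunnels create tunnel-0 --region=us-central1 \
    --vpn-gateway=gcp-ha-gw --peer-external-gateway=on-prem-gw \
    --peer-external-gateway-interface=0 --interface=0 \
    --router=app-router --ike-version=2 --shared-secret=EXAMPLE_SECRET
# In the flexible environment's app.yaml, set network.name / network.subnetwork_name
# to app-vpc so the app's instances get private connectivity to the on-premises database.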
Why Incorrect Options are Wrong

A. App Engine firewall rules control inbound traffic to the application, not outbound connections to a database. The standard environment also lacks the required native VPC integration.

B. The App Engine standard environment runs in a sandboxed environment and cannot be placed directly within a VPC to initiate connections over a Cloud VPN tunnel.

C. App Engine firewall rules are the incorrect tool; they manage incoming requests to the application, not secure outgoing connections to an external database.

References

1. App Engine Flexible Environment Networking: The official documentation states that App Engine flexible environment instances run on Compute Engine virtual machines within your project's VPC network. This enables direct access to other VPC resources.

Source: Google Cloud Documentation, "App Engine flexible environment, Connecting to a VPC network". Section: "Accessing Google Cloud services".

2. Comparison of App Engine Environments: The choice between standard and flexible environments often depends on networking requirements. The flexible environment is explicitly designed for applications that need to operate within a VPC and access resources via VPN or Interconnect.

Source: Google Cloud Documentation, "Choosing an App Engine environment". The table comparing features highlights that the flexible environment allows "Access to resources in a VPC network".

3. Cloud VPN for Hybrid Connectivity: Cloud VPN is the designated service for securely connecting a VPC network to an on-premises network over an IPsec VPN connection.

Source: Google Cloud Documentation, "Cloud VPN overview". Section: "How Cloud VPN works".

4. App Engine Firewall Functionality: The documentation clarifies that App Engine firewalls are used to control incoming requests to the application from specified IP ranges, not for securing outbound traffic.

Source: Google Cloud Documentation, "Controlling Ingress Traffic". Section: "Creating firewalls".

Question 22

You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance. How should you install the software?
Options
A: Upload the required installation files to Cloud Storage. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.
B: Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil.
C: Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud.
D: Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.
Show Answer
Correct Answer:
Upload the required installation files to Cloud Storage. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.
Explanation
This solution correctly addresses the security requirement of having no public internet access by assigning only an internal IP address to the VM. It then leverages Private Google Access, which is the designated mechanism for VMs with only internal IPs to reach the public IP addresses of Google APIs and services, such as Cloud Storage. Cloud Storage is the appropriate service for storing arbitrary files like software installers. The gsutil command-line tool is the standard method for interacting with Cloud Storage from a VM. This approach is secure, efficient, and follows Google-recommended best practices for this scenario.
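
A minimal sketch, assuming hypothetical subnet, VM, and bucket names:

# Enable Private Google Access on the subnet and create a VM with no external IP.
gcloud compute networks subnets update secure-subnet \
    --region=us-central1 --enable-private-ip-google-access
gcloud compute instances create installer-vm \
    --zone=us-central1-a --subnet=secure-subnet --no-address

# From the VM, pull the installer from Cloud Storage over Google's private path.
gsutil cp gs://example-software-bucket/installer.run /tmp/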
Why Incorrect Options are Wrong

B: Using firewall rules to allow traffic to Cloud Storage IP ranges still requires the VM to have an external IP address to route traffic, which violates the core security constraint of no public internet access.

C: Cloud Source Repositories is a managed Git repository service designed for source code, not for storing software installation binaries or packages. Cloud Storage is the appropriate service for this use case.

D: This option is incorrect for two reasons: it uses the inappropriate service (Cloud Source Repositories) for the file type, and it proposes using the wrong tool (gsutil) to access it.

References

1. Private Google Access: "Private Google Access provides access to the external IP addresses of Google APIs and services from VMs that don't have external IP addresses." This directly supports the method described in option A.

Source: Google Cloud Documentation, "Private Google Access overview", Section: "Overview".

2. Cloud Storage Use Cases: "Cloud Storage is a service for storing your objects in Google Cloud... Cloud Storage is a good choice for storing data for archival and disaster recovery, or for distributing large data objects to users via direct download." This confirms Cloud Storage is the correct service for installation files.

Source: Google Cloud Documentation, "Cloud Storage - Product overview".

3. gsutil Tool: "gsutil is a Python application that lets you access Cloud Storage from the command line." This confirms gsutil is the correct tool for downloading files from Cloud Storage.

Source: Google Cloud Documentation, "gsutil tool".

4. Cloud Source Repositories Use Cases: "Cloud Source Repositories provides Git version control to support developing and deploying your app." This highlights its purpose for source code, making it an inappropriate choice for software installers.

Source: Google Cloud Documentation, "Cloud Source Repositories - Product overview".

Question 23

Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?
Options
A: Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
B: Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.
C: Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.
D: Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
Show Answer
Correct Answer:
Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
Explanation
For transferring large datasets (tens of terabytes to petabytes), Google's recommended practice is to use an offline transfer method to avoid the time and cost constraints of network transfers. Transfer Appliance is specifically designed for this purpose. The process involves receiving a physical appliance, copying the 75 TB of data to it, and shipping it back to Google. At the Google ingestion facility, the encrypted data is decrypted and transferred (a process sometimes referred to as rehydration) into the designated Cloud Storage bucket. This method is significantly faster and more reliable than uploading 75 TB over a standard internet connection.
Why Incorrect Options are Wrong

B: Cloud Dataprep is a service for visually exploring, cleaning, and preparing structured and unstructured data for analysis and machine learning. It is not used for data transfer or decryption from a Transfer Appliance.

C: Using gsutil for a 75 TB transfer would be extremely slow and unreliable over most internet connections, potentially taking weeks or months. While resumable transfers add reliability, Transfer Appliance is the recommended solution for this data volume.

D: This option is incorrect for the same reason as C; the data volume is too large for an online transfer to be the recommended practice. Additionally, streaming transfers are typically used for data from stdin, not for uploading a large set of existing files.

References

1. Google Cloud Documentation, "Choosing a transfer option": The official decision tree recommends Transfer Appliance for transferring more than 20 TB of data if the transfer would take more than one week online. For 75 TB, this is almost always the case.

Source: Google Cloud Documentation, "Data Transfer", Section: "Choosing a transfer option".

2. Google Cloud Documentation, "Transfer Appliance overview": This document explicitly states, "Transfer Appliance is a hardware appliance you can use to securely migrate large amounts of data... to Google Cloud Storage." It is recommended for data sizes from hundreds of terabytes up to 1 petabyte.

Source: Google Cloud Documentation, "Transfer Appliance", Section: "Overview".

3. Google Cloud Documentation, "How Transfer Appliance works": The process is detailed as: 1. Request appliance, 2. Copy data to appliance, 3. Ship appliance back to Google, 4. Google uploads your data to Cloud Storage. This confirms that Google handles the final decryption and upload step.

Source: Google Cloud Documentation, "Transfer Appliance", Section: "How Transfer Appliance works".

Question 24

You have an application deployed on Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do?
Options
A: Use kubectl set image deployment/echo-deployment
B: Use the rolling update functionality of the Instance Group behind the Kubernetes cluster
C: Update the deployment YAML file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f
D: Update the service YAML file with the new container image. Use kubectl delete service/echo-service and kubectl create -f
Show Answer
Correct Answer:
Use kubectl set image deployment/echo-deployment
Explanation
The kubectl set image command is the standard imperative method for updating the container image of a running Kubernetes Deployment. This command modifies the Deployment's Pod template, which triggers a rolling update by default. During a rolling update, Kubernetes gradually terminates old Pods while simultaneously creating new ones with the updated image. This process ensures that a minimum number of Pods are always running and available to handle traffic, thereby fulfilling the requirement for minimal downtime.
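
A minimal sketch, assuming hypothetical container and image names:

# Trigger a rolling update by changing the Pod template's image, then watch it progress.
kubectl set image deployment/echo-deployment echo=gcr.io/example-project/echo:v2
kubectl rollout status deployment/echo-deployment

# If the new version misbehaves, roll back without an outage.
kubectl rollout undo deployment/echo-deployment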
Why Incorrect Options are Wrong

B. Updating the underlying Instance Group modifies the cluster's nodes (VMs), not the application containers running on them. This is for infrastructure updates, not application updates.

C. Deleting and then recreating the Deployment is a "recreate" strategy. This causes a complete service outage because all old Pods are terminated before any new ones are started.

D. A Kubernetes Service is a networking object for exposing Pods; it does not define the container image. Modifying the Service will not update the application code.

References

1. Kubernetes Official Documentation, "Deployments": Under the section "Updating a Deployment," the documentation explicitly demonstrates using kubectl set image deployment/<deployment-name> <container-name>=<new-image> as the imperative command to trigger a rolling update. It states, "This is the default and desired way if you want to have your Pods updated without an outage."

Reference: Kubernetes Documentation > Concepts > Workloads > Controllers > Deployment > Updating a Deployment.

2. Google Cloud Documentation, "Deploying a containerized web application": This official GKE tutorial guides users through deploying and updating an application. In "Step 5: Update the application," it uses the kubectl set image command to perform a zero-downtime rolling update on a GKE cluster.

Reference: Google Cloud > Kubernetes Engine > Documentation > Tutorials > Deploying a containerized web application > Step 5: Update the application.

3. Google Cloud Documentation, "Deployment strategies": This document describes different deployment strategies, highlighting that the default strategy for a Kubernetes Deployment object is a rolling update, which "slowly replaces pods of the previous version of your application with pods of the new version," ensuring zero downtime.

Reference: Google Cloud > Architecture Center > Application development > Deployment strategies.

Question 25

Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them. How should you configure users' access roles?
Options
A: Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.
B: Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.
C: Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
D: Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
Show Answer
Correct Answer:
Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.
Explanation
This configuration correctly separates billing from data access. The roles/bigquery.user role, granted on the central billing project, includes the bigquery.jobs.create permission. This allows users to run query jobs, and the costs for these jobs are attributed to this project. The roles/bigquery.dataViewer role, granted on the projects containing the data, provides the necessary read-only permissions (e.g., bigquery.tables.getData) for the query job to access the data. This role does not include permissions to run jobs, ensuring no query costs are incurred on the data projects, and it prevents users from editing the data, fulfilling all requirements.
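
A minimal sketch of the role grants and of a query billed to the central project, assuming placeholder project IDs, group, and table names:

# Grant job-running rights on the billing project and read-only data access on a data project.
gcloud projects add-iam-policy-binding billing-project \
    --member="group:analysts@example.com" --role="roles/bigquery.user"
gcloud projects add-iam-policy-binding data-project-1 \
    --member="group:analysts@example.com" --role="roles/bigquery.dataViewer"

# Queries run (and are billed) in the billing project while reading tables elsewhere.
bq --project_id=billing-project query --use_legacy_sql=false \
    'SELECT COUNT(*) FROM `data-project-1.sales.orders`'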
Why Incorrect Options are Wrong

B: This configuration is reversed. Users would be billed on the data projects and would lack the permission (bigquery.jobs.create) to run queries in the designated billing project.

C: While roles/bigquery.jobUser also grants permission to run jobs, roles/bigquery.user is the more standard and appropriate role as it also allows users to list datasets in the project, which is a common user requirement. Option A is the better and more complete answer.

D: This configuration is also reversed and would not work. Users would be billed on the data projects and would be unable to run queries from the central billing project.

References

1. Google Cloud Documentation, "Introduction to IAM for BigQuery": This document outlines the principle of separating compute and storage. It states, "The project that is billed for a query is the project in which the job is run." To run a job, a user needs bigquery.jobs.create permission, which is part of the bigquery.user role.

Source: Google Cloud Documentation, BigQuery > Security and data governance > Access control > Introduction to IAM.

2. Google Cloud Documentation, "Predefined roles and permissions" for BigQuery: This page details the specific permissions within each role.

roles/bigquery.user: "When applied to a project, provides the ability to run jobs, including queries, within the project." It contains bigquery.jobs.create.

roles/bigquery.dataViewer: "When applied to a dataset, provides read-only access to the dataset's tables and views." It contains bigquery.tables.getData but not bigquery.jobs.create.

Source: Google Cloud Documentation, BigQuery > Security and data governance > Access control > Predefined roles and permissions.

3. Google Cloud Documentation, "Control access to projects": This section explicitly states, "To run a query job, a user must have the bigquery.jobs.create permission. This permission is included in the predefined bigquery.user and bigquery.jobUser project-level IAM roles." This confirms the need for the user or jobUser role on the project where queries are to be run and billed.

Source: Google Cloud Documentation, BigQuery > Security and data governance > Access control > Control access to projects.

Question 26

You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?
Options
A: Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours.
B: Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
C: Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity.
D: Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.
Show Answer
Correct Answer:
Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
Explanation
Signed URLs are the most appropriate solution for this scenario. They provide a mechanism to grant time-limited, resource-specific permissions to anyone, regardless of whether they have a Google Account. You can generate a signed URL that grants write (upload) permission to a Cloud Storage bucket and set its expiration to 24 hours. This URL can then be distributed to the specific testers, allowing them to upload images directly and securely without requiring authentication or a complex intermediary application.
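
A minimal sketch, assuming a hypothetical bucket, object path, and a service account key file used for signing:

# Generate a 24-hour signed URL that allows an HTTP PUT upload of a specific object.
gsutil signurl -m PUT -d 24h -c image/jpeg \
    sa-key.json gs://painting-uploads/tester1/image.jpg
# A tester can then upload with, for example:
#   curl -X PUT -H "Content-Type: image/jpeg" --upload-file image.jpg "<signed URL>"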
Why Incorrect Options are Wrong

A: Cloud Storage does not have a native feature to protect buckets with a simple password. Access control is managed through Identity and Access Management (IAM), Access Control Lists (ACLs), or signed URLs.

C: Building an entire App Engine application is overly complex for this temporary testing need. Furthermore, Cloud Identity is used to manage users within an organization and is not suitable for external testers who do not have Google Accounts.

D: This option shares the same flaws as option C. It proposes an unnecessarily complex solution (an App Engine app) and suggests an authentication method (Cloud Identity) that contradicts the requirement that not all users have a Google Account.


References

1. Google Cloud Documentation - Signed URLs Overview: "A signed URL is a URL that provides limited permission and time to make a request... Anyone who is in possession of the signed URL can use it to perform the specified actions... within the specified time." This directly supports providing access to users without Google accounts for a limited time.

Source: Google Cloud Documentation, "Cloud Storage > Documentation > Security > Signed URLs Overview".

2. Google Cloud Documentation - Using signed URLs: "You can use signed URLs to give time-limited resource access to anyone, regardless of whether they have a Google account." This explicitly confirms that signed URLs are the correct mechanism for users without Google accounts. The documentation also details setting an expiration time.

Source: Google Cloud Documentation, "Cloud Storage > Documentation > How-to Guides > Using signed URLs".

3. Google Cloud Documentation - What is Cloud Identity?: "Cloud Identity is an Identity as a Service (IDaaS) solution that lets you centrally manage users and groups." This service is for managing known, provisioned identities, making it unsuitable for the ad-hoc, external users described in the scenario.

Source: Google Cloud Documentation, "Cloud Identity > Documentation > Overview > What is Cloud Identity?".

4. Google Cloud Documentation - Overview of access control: This document outlines the methods for controlling access to Cloud Storage buckets and objects. The listed methods are IAM, ACLs, signed URLs, and signed policy documents. It makes no mention of a password-protection feature for buckets.

Source: Google Cloud Documentation, "Cloud Storage > Documentation > Security > Overview of access control".

Question 27

Your web application must comply with the requirements of the European Union's General Data Protection Regulation (GDPR). You are responsible for the technical architecture of your web application. What should you do?
Options
A: Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features.
B: Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application.
C: Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps.
D: Define a design for the security of data in your web application that meets GDPR requirements.
Show Answer
Correct Answer:
Define a design for the security of data in your web application that meets GDPR requirements.
Explanation
The General Data Protection Regulation (GDPR) mandates the principle of "data protection by design and by default" (Article 25). As the technical architect, your primary responsibility is to create a system architecture that embeds these principles. This involves defining how data is collected, processed, stored, and protected throughout its lifecycle to meet GDPR's stringent requirements. Compliance is a shared responsibility. While Google Cloud provides a secure and compliant infrastructure (security of the cloud), you, the customer, are responsible for building a compliant application on that infrastructure (security in the cloud). Therefore, you must proactively design the application's data security measures to align with GDPR obligations.
Why Incorrect Options are Wrong

A. This is incorrect because compliance is a shared responsibility. Google's certifications for its infrastructure do not automatically confer compliance upon the applications you build on it.

B. There is no single, universal "GDPR compliance setting" in the Google Cloud Console. Achieving compliance requires a comprehensive approach involving architecture, configuration, and operational processes.

C. Cloud Security Scanner is a tool for identifying web application vulnerabilities (e.g., XSS). While security is part of GDPR, this tool's scope is too narrow to address all GDPR requirements.

References

1. Official GDPR Text: Regulation (EU) 2016/679 (General Data Protection Regulation), Article 25, "Data protection by design and by default," paragraph 1 states, "...the controller shall...implement appropriate technical and organisational measures...which are designed to implement data-protection principles...in an effective manner and to integrate the necessary safeguards into the processing..." This legally mandates the action described in option D. (Source: Official Journal of the European Union, EUR-Lex)

2. Google Cloud Vendor Documentation: In the "Google Cloud & the General Data Protection Regulation (GDPR)" documentation, under the "Our shared responsibility" section, it clarifies: "While Google Cloud is responsible for the security of the cloud, you are responsible for security in the cloud... you, as a Google Cloud customer, are responsible for the applications that you build on our platform." This directly refutes the idea of "pass-on" compliance and emphasizes the customer's design responsibility. (Source: Google Cloud Security Documentation)

3. Google Cloud Vendor Documentation: The "Security foundations guide" outlines the shared responsibility model, stating that the customer is responsible for areas such as "Data classification and protection" and "Application-level controls." These are core components of designing a system for compliance. (Source: Google Cloud Security Foundations Guide)

Question 28

You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data center outage in any of the zones within a GCP region. What should you do?
Options
A: Configure a Cloud SQL instance with high availability enabled.
B: Configure a Cloud Spanner instance with a regional instance configuration.
C: Set up SQL Server on Compute Engine using Always On Availability Groups with Windows Failover Clustering. Place nodes in different subnets.
D: Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
Show Answer
Correct Answer:
Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
Explanation
To ensure no downtime during a zonal outage for Microsoft SQL Server, a high-availability architecture that spans multiple zones is required. Setting up SQL Server on Compute Engine with an Always On Availability Group (AG) and placing the cluster nodes in different zones within the same region directly addresses this. This configuration, used with Windows Server Failover Clustering (WSFC), allows for synchronous data replication and automatic, rapid failover (typically within seconds) if the primary node's zone fails. This architecture provides the highest level of availability and the lowest recovery time objective (RTO), aligning best with the strict "no downtime" requirement.
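
An abridged sketch of the infrastructure side, assuming illustrative names, machine type, and one of Google's public SQL Server image families; the WSFC cluster, availability group, and internal load balancer for the listener are then configured inside Windows and are omitted here:

# Two SQL Server nodes placed in different zones of the same region.
gcloud compute instances create sql-node-1 --zone=us-central1-a \
    --machine-type=n2-standard-8 --image-project=windows-sql-cloud \
    --image-family=sql-ent-2019-win-2019
gcloud compute instances create sql-node-2 --zone=us-central1-b \
    --machine-type=n2-standard-8 --image-project=windows-sql-cloud \
    --image-family=sql-ent-2019-win-2019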
Why Incorrect Options are Wrong

A. Cloud SQL HA provides zonal resilience, but its automatic failover process typically takes several minutes to complete, which does not meet the strict "no downtime" requirement.

B. Cloud Spanner is a globally distributed, NewSQL database service. It is not Microsoft SQL Server and therefore does not fulfill the primary requirement of the question.

C. Placing nodes in different subnets is insufficient. Multiple subnets can exist within the same zone, so this configuration does not guarantee protection against a zonal outage.

References

1. Google Cloud Documentation - Architecting disaster recovery for Microsoft SQL Server on Google Cloud: This document explicitly details the recommended high-availability pattern. It states, "For a high availability (HA) scenario, you can deploy a SQL Server Always On availability group across multiple zones within a single Google Cloud region. An availability group provides a single point of connectivity... and automates failover in the event of a failure." This directly supports placing AG nodes in different zones (Option D).

2. Google Cloud Documentation - High availability for Cloud SQL: The official documentation describes the failover process for Cloud SQL HA. Under the "Failover process" section, it notes: "The failover operation takes, on average, several minutes to complete." This confirms that while Cloud SQL HA provides zonal redundancy, its recovery time is not instantaneous, making it less suitable than an Always On AG for a "no downtime" scenario (making Option A less correct than D).

3. Google Cloud Documentation - Options for deploying SQL Server on Google Cloud: This guide compares Cloud SQL and SQL Server on Compute Engine. For Compute Engine, it highlights "Full control over the database and the operating system" and "Support for SQL Server Always On availability groups," which are necessary for the architecture described in Option D. It positions this IaaS approach as the solution for maximum control and specific HA configurations not met by the managed service.

Question 29

The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do?
Options
A: Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
B: Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
C: Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
D: Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
Show Answer
Correct Answer:
Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
Explanation
The task requires a two-step process: first, provisioning the infrastructure (a Kubernetes cluster), and second, deploying the application using the provided file. 1. Infrastructure Provisioning: Google Kubernetes Engine (GKE) clusters are Google Cloud resources. The standard and recommended command-line tool for creating and managing Google Cloud resources is gcloud. The command gcloud container clusters create is used to provision a new GKE cluster. 2. Application Deployment: kubectl is the standard, native command-line tool for interacting with any Kubernetes cluster's API server. To deploy an application defined in a manifest file (like a Deployment file), the kubectl apply -f command is the correct and most direct method. This two-step workflowโ€”using gcloud for infrastructure and kubectl for workloadsโ€”is the standard practice for GKE.
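
A minimal sketch of this workflow, assuming hypothetical cluster name, zone, and manifest file name:

# Provision the cluster, fetch credentials for kubectl, and apply the team's manifest.
gcloud container clusters create echo-cluster --zone=us-central1-a --num-nodes=3
gcloud container clusters get-credentials echo-cluster --zone=us-central1-a
kubectl apply -f deployment.yaml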
Why Incorrect Options are Wrong

A. Using Deployment Manager is an indirect method for applying a Kubernetes manifest. The standard tool is kubectl, which directly interacts with the Kubernetes API.

C. kubectl is used to manage resources within an existing Kubernetes cluster; it cannot be used to create the cluster infrastructure itself.

D. kubectl cannot create the GKE cluster. While it is the correct tool for the deployment, the first step of the proposed action is incorrect.

References

1. Google Cloud Documentation - Creating a zonal cluster: This official guide explicitly demonstrates using the gcloud container clusters create command to provision a new GKE cluster.

Source: Google Cloud, Google Kubernetes Engine Documentation, "Creating a zonal cluster", Section: "Create a zonal cluster".

2. Google Cloud Documentation - Deploying an application: This document outlines the standard procedure for deploying a stateless application to a GKE cluster, specifying the use of kubectl apply -f [MANIFEST_FILE] after the cluster has been created.

Source: Google Cloud, Google Kubernetes Engine Documentation, "Deploying a stateless Linux application", Section: "Deploy the application".

3. Kubernetes Documentation - Declarative Management of Kubernetes Objects Using Configuration Files: This official Kubernetes documentation explains that kubectl apply is the recommended command for managing applications from manifest files.

Source: Kubernetes.io, Documentation, "Tasks > Manage Kubernetes Objects > Declarative Management", Section: "How to apply a configuration".

4. University of California, Berkeley, Data 102: Data, Inference, and Decisions Courseware: Lab materials for cloud computing often demonstrate this standard workflow. For example, instructions for setting up a GKE cluster consistently use gcloud for cluster creation and kubectl for deploying applications.

Source: UC Berkeley, Data 102, Fall 2020, Lab 08, Section: "Setting up your Kubernetes Cluster". The lab instructs students to first run gcloud container clusters create and then kubectl apply.

Question 30

You need to evaluate your team's readiness for a new GCP project. You must perform the evaluation and create a skills gap plan that incorporates the business goal of cost optimization. Your team has successfully deployed two GCP projects to date. What should you do?
Options
A: Allocate budget for team training. Set a deadline for the new GCP project.
B: Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
C: Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project.
D: Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google Cloud certification based on job role.
Show Answer
Correct Answer:
Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
Explanation
This approach directly addresses the core requirements of evaluating team readiness and creating a skills gap plan while incorporating cost optimization. Creating a role-based certification roadmap is a structured, measurable method to upskill the existing team. This builds long-term, in-house expertise, which is more cost-effective than continuously relying on expensive external consultants. Google Cloud certifications are designed around best practices, including the principles of the Google Cloud Architecture Framework's cost optimization pillar. By investing in the team's skills, the organization ensures that future projects are designed and managed with cost efficiency as a foundational principle, directly supporting the stated business goal.
Why Incorrect Options are Wrong

A. Setting a deadline is a project management task, not a skills gap plan. It fails to provide a structured approach to ensure the team has the necessary competencies to meet that deadline successfully.

C. Hiring external consultants is a short-term solution that is generally more expensive than training an internal team. It does not build sustainable, in-house capability, thus conflicting with the long-term goal of cost optimization.

D. While a certification roadmap is good, pairing it with hiring consultants makes it a less cost-optimal solution. The primary focus should be on developing the existing team to foster self-sufficiency and reduce long-term costs.

References

1. Google Cloud Adoption Framework: The framework's "People" theme emphasizes the importance of training and developing internal teams. It states, "As your organization adopts the cloud, you need to help your people learn new skills... A training plan helps you to methodically upskill your teams." This supports investing in team training over external hires for long-term success. (Source: Google Cloud Adoption Framework whitepaper, "Phase 2: Plan and foundation," Section: "The People theme").

2. Google Cloud Architecture Framework - Cost Optimization Pillar: This document outlines principles for building cost-effective solutions on Google Cloud. The skills validated by certifications, particularly the Professional Cloud Architect, are directly aligned with these principles, such as "Control resource costs" and "Optimize resource costs." A certified team is better equipped to apply these principles. (Source: Google Cloud Architecture Framework documentation, "Cost optimization pillar," Section: "Overview of the pillar").

3. Google Cloud Certifications: Official documentation states that "Google Cloud certification validates your expertise and shows you can design, develop, manage, and administer application infrastructure and data solutions on Google Cloud technology." The role-based nature of these certifications ensures that team members acquire the specific skills needed for their function, which is the essence of a skills gap plan. (Source: Google Cloud, "Cloud Certifications" official page, "Grow your career" section).

Question 31

You are designing an application for use only during business hours. For the minimum viable product release, you'd like to use a managed product that automatically "scales to zero" so you don't incur costs when there is no activity. Which primary compute resource should you choose?
Options
A: Cloud Functions
B: Compute Engine
C: Kubernetes Engine
D: App Engine flexible environment
Show Answer
Correct Answer:
Cloud Functions
Explanation
Cloud Functions is a serverless, event-driven compute platform that automatically manages the underlying infrastructure. Its core pricing model is based on invocations, compute time, and resources consumed only during execution. This means that if there are no requests to the application, no functions are triggered, and therefore no instances are running, resulting in zero compute cost. This "scale to zero" capability is ideal for applications with intermittent traffic, such as one used only during business hours, as it completely eliminates costs during idle periods.
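
A minimal deployment sketch, assuming a hypothetical function name, runtime, and region:

# Deploy an HTTP-triggered function; instances exist only while requests arrive,
# so no compute cost is incurred outside business hours.
gcloud functions deploy business-hours-api --runtime=python310 \
    --trigger-http --region=us-central1 --allow-unauthenticated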
Why Incorrect Options are Wrong

B. Compute Engine: Virtual machines in Compute Engine incur costs as long as they are in a running state, even if idle. While they can be stopped, this is not an automatic, traffic-based scaling action and does not meet the "scales to zero" requirement.

C. Kubernetes Engine: A Google Kubernetes Engine (GKE) cluster requires at least one running node in its node pools to function, which incurs costs 24/7. It does not automatically scale the entire cluster infrastructure to zero when there is no traffic.

D. App Engine flexible environment: The App Engine flexible environment is designed for applications that require continuous availability and is configured to have at least one instance running at all times. It cannot automatically scale down to zero instances.

References

1. Cloud Functions Pricing: "With Cloud Functions, you pay only for the time your code runs, metered to the nearest 100 milliseconds. When your code is not running, you don't pay anything."

Source: Google Cloud Documentation, "Cloud Functions pricing", Section: "Pricing details".

2. App Engine Flexible Environment Scaling: "Scaling down to zero instances: To save costs for an app that receives no traffic, you can scale down to zero instances. This feature is only available in the standard environment."

Source: Google Cloud Documentation, "Comparing the standard and flexible environments", Feature comparison table.

3. Compute Engine Pricing: "For vCPUs and for memory, Compute Engine charges for a minimum of 1 minute. ... After 1 minute, instances are charged in 1-second increments." This confirms that a running instance is always being charged for.

Source: Google Cloud Documentation, "VM instances pricing", Section: "Billing model for virtual machine instances".

4. Kubernetes Engine Pricing: "In Autopilot mode, you pay for the vCPU, memory, and ephemeral storage resources that your Pods request while they are running. In Standard mode, you pay for each node at the standard Compute Engine price, regardless of whether Pods are running on the nodes." This shows a persistent cost for the underlying nodes or a cluster fee.

Source: Google Cloud Documentation, "Google Kubernetes Engine (GKE) pricing", Section: "Pricing overview".

Question 32

You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do?
Options
A: Create the Key object for each Entity and run a batch get operation
B: Create the Key object for each Entity and run multiple get operations, one operation for each entity
C: Use the identifiers to create a query filter and run a batch query operation
D: Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
Show Answer
Correct Answer:
Create the Key object for each Entity and run a batch get operation
Explanation
The most efficient method to retrieve multiple Cloud Datastore entities when their unique identifiers are known is to perform a batch get operation. This involves first constructing the full Key object for each entity from its identifier. Then, these keys are passed in a single list to a batch get call (e.g., get_multi). This approach is optimal because it retrieves all the requested entities in a single round trip to the database, significantly minimizing latency and operational overhead compared to making individual requests or using queries. A get by key is a direct lookup and is fundamentally faster and more cost-effective than a query, which needs to scan an index.
Why Incorrect Options are Wrong

B: Running multiple get operations is inefficient. It incurs the overhead of a separate network round trip for each entity, increasing latency and cost.

C: Using a query is less efficient than a direct get by key. A query must scan an index to find the entities, whereas a get is a direct lookup.

D: This is the least efficient option. It combines the higher overhead of using queries instead of gets with the high latency of multiple individual database calls.

References

1. Google Cloud Documentation, "Best practices for Cloud Datastore mode", Section: "Use batch operations": "Use batch operations for reads, writes, and deletes instead of single-entity operations. Batch operations are more efficient because they perform multiple operations with the same overhead as a single operation." This supports using a "batch...operation" over multiple individual operations.

2. Google Cloud Documentation, "Retrieving an entity": This page details the lookup (get) operation. It states, "To retrieve an entity from Datastore mode when you know the key, use the lookup method." It also describes how to perform a batch lookup by providing multiple keys in a single request, which is the most efficient retrieval method by key.

3. Google Cloud Documentation, "Datastore mode queries": This documentation clarifies that queries are used to retrieve entities that meet a specified set of conditions on their properties and keys. This is inherently less efficient than a direct lookup (get) when the full key is already known.

Question 33

You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?
Options
A: Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
B: Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
C: Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
D: Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
Show Answer
Correct Answer:
Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
Explanation
To upload files to Cloud Storage using customer-supplied encryption keys (CSEK) with the gsutil tool, the encryption key must accompany each request. The standard and secure way to do this with gsutil is to specify the base64-encoded AES-256 key in the .boto configuration file. By adding the encryption_key parameter under the [GSUtil] section of the file, gsutil automatically includes this key in the headers of all subsequent requests to Cloud Storage, ensuring the objects are encrypted with the specified key upon upload.
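Purely as an illustration, assuming a base64-encoded AES-256 key and a bucket named my-bucket (both placeholders), the end-to-end flow might look like this:

# ~/.boto
[GSUtil]
encryption_key = <BASE64_ENCODED_AES256_KEY>

# Upload as usual; gsutil adds the CSEK headers automatically.
gsutil cp ./reports/*.csv gs://my-bucket/reports/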
Why Incorrect Options are Wrong

B. gcloud config is used to manage properties for the gcloud command-line interface, not for configuring specific gsutil settings like CSEK, which are handled by the .boto file.

C. The gsutil command does not have an --encryption-key flag. Passing keys directly on the command line is also a security risk as it can be exposed in shell history.

D. CSEK is applied on a per-object basis during upload or rewrite, not as a default property when a bucket is created. The --encryption-key flag is also invalid for bucket creation.

References

1. Google Cloud Documentation, "Customer-supplied encryption keys": Under the section "Using keys with gsutil," the documentation explicitly states: "You can use customer-supplied encryption keys with the gsutil command-line tool. Place your secret key in the ~/.boto configuration file." It provides the exact format:

[GSUtil]
encryption_key = [YOUR_KEY]

This directly supports option A as the correct procedure.

2. Google Cloud Documentation, "gsutil: Edit the .boto configuration file": This document details the various configuration options available in the .boto file. It lists encryptionkey as a valid parameter within the [GSUtil] section, used for "The customer-supplied encryption key to be used for all requests." This confirms that the .boto file is the correct location for this configuration.

Question 34

Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs?
Options
A: Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
B: Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
C: Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.
D: Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
Show Answer
Correct Answer:
Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
Explanation
The core requirement is to capture and monitor real-time key performance indicators (KPIs) with low latency. Google Cloud Monitoring (formerly Stackdriver Monitoring) is the purpose-built service for exactly this use case. It lets applications send high volumes of custom metrics (the KPIs) through its API for real-time ingestion, and the Cloud Monitoring console provides configurable, low-latency dashboards for immediate visualization and alerting. This solution directly and efficiently meets all stated requirements: high-volume, real-time capture and low-latency monitoring, making it the most appropriate choice.
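As a rough sketch of how a game server could push such a KPI, assuming the google-cloud-monitoring Python client (the project ID and metric name are placeholders, not part of the question):

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-game-project"  # hypothetical project

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/game/concurrent_players"  # hypothetical KPI
series.resource.type = "global"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
series.points = [
    monitoring_v3.Point({"interval": interval, "value": {"int64_value": 12345}})
]

# A single write call; Cloud Monitoring dashboards can chart the data within seconds.
client.create_time_series(name=project_name, time_series=[series])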
Why Incorrect Options are Wrong

A. While Bigtable is excellent for storing large volumes of time-series data, Google Data Studio is a business intelligence tool with data freshness delays, making it less suitable for the "low-latency monitoring" requirement compared to native Cloud Monitoring dashboards.

C. This option describes a batch processing pipeline. Loading files from Cloud Storage every ten minutes introduces significant delay, which violates the "real-time" and "low latency" requirements of the scenario.

D. Cloud Datastore is a transactional NoSQL database not optimized for the high-throughput, analytical query patterns of time-series monitoring. Cloud Datalab is an interactive analysis environment, not a real-time monitoring dashboard.

References

1. Cloud Monitoring for Custom Metrics: Official Google Cloud documentation states, "You can instrument your application to send custom metrics to Cloud Monitoring... You can then use the data from custom metrics in charts and alerting policies." This directly supports using Cloud Monitoring to capture and view KPIs.

Source: Google Cloud Documentation, "Custom metrics overview", Section: "What are custom metrics?".

2. Cloud Monitoring Dashboards: "You can display the metric data that you've collected as charts on a custom dashboard. The console provides both predefined dashboards and a Dashboards page where you can create and modify custom dashboards." This supports the low-latency monitoring requirement.

Source: Google Cloud Documentation, "Dashboards and charts", Section: "Custom dashboards".

3. Bigtable for Time-Series Data: Google Cloud documentation positions Bigtable for time-series workloads, particularly for "large-scale time-series data, financial analysis, [and] IoT data," often as a backend for applications or heavy analytics, not typically for direct, low-latency dashboarding via a BI tool.

Source: Google Cloud Documentation, "Schema design for time series data" in Cloud Bigtable.

4. BigQuery Batch Loading: Loading data from Cloud Storage into BigQuery is a batch operation. "Loading data from Cloud Storage is a free operation, but you are charged for storing the data in Cloud Storage." This method is not designed for real-time ingestion.

Source: Google Cloud Documentation, "Introduction to loading data from Cloud Storage" for BigQuery.

Question 35

You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take?
Options
A: Perform the following: 1) Create a managed instance group with f1-micro type machines. 2) Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3) Restart the instances to automatically deploy new production releases.
B: Perform the following: 1) Create a managed instance group with n1-standard-1 type machines. 2) Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3) Rebuild the Compute Engine image, and update the instance template to deploy new production releases.
C: Perform the following: 1) Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2) Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3) Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing.
D: Perform the following: 1) Create a Kubernetes Engine (GKE) cluster with n1-standard-4 type machines. 2) Build a Docker image from the master branch with all of the dependencies, and tag it with "latest". 3) Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to "Always". Restart the pods to automatically deploy new production releases.
Show Answer
Correct Answer:
Perform the following: 1) Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2) Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3) Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing.
Explanation
This solution correctly identifies Google Kubernetes Engine (GKE) as the optimal platform for this scenario. GKE's container orchestration capabilities are specifically designed to maximize machine utilization by efficiently "bin-packing" many small, low-resource workloads (like the one described) onto larger nodes. Building version-tagged Docker images creates immutable, reproducible artifacts, which is a best practice for reliable deployments. Furthermore, using a Kubernetes Deployment with a staging-to-production namespace promotion workflow provides a robust, controlled, and reliable method for releasing new application versions, directly satisfying all requirements of the question.
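For context, a minimal Deployment manifest along these lines (image path, version tag, and names are illustrative assumptions, not part of the question) could resemble the following; the resource requests mirror the 0.1 CPU / 128 MB footprint from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-web
  namespace: staging
spec:
  replicas: 3
  selector:
    matchLabels:
      app: python-web
  template:
    metadata:
      labels:
        app: python-web
    spec:
      containers:
      - name: python-web
        # Version-tagged image built from the production branch (hypothetical path).
        image: gcr.io/my-project/python-web:1.4.2
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 100m      # 0.1 CPU core
            memory: 128Mi

Promoting a release then means applying the same version-tagged manifest to the production namespace once it has been validated in staging.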
Why Incorrect Options are Wrong

A: Running a single small application per f1-micro VM is highly inefficient due to OS overhead and does not maximize utilization. Deploying via startup scripts is also unreliable and not easily repeatable.

B: While using custom VM images is a reliable pattern, a Managed Instance Group is less efficient at maximizing resource utilization for many small workloads compared to the bin-packing capabilities of GKE.

D: This option uses GKE but implements anti-patterns that undermine reliability. Using the :latest tag for production images is strongly discouraged as it is mutable, making rollouts and rollbacks unpredictable. Restarting pods is a crude, imperative action, not the proper declarative method of updating a Deployment manifest to trigger a controlled rollout.

References

1. GKE for Resource Utilization: Google Cloud documentation highlights GKE's ability to optimize resource usage. "GKE's cluster autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads... When resources are underutilized, cluster autoscaler scales down, moving workloads to other nodes and removing unneeded nodes." This bin-packing and scaling maximizes utilization.

Source: Google Cloud Documentation, "Cluster autoscaler".

2. Best Practices for Container Images: The official Google Cloud documentation on building containers explicitly advises against using the :latest tag for production deployments. "We recommend that you tag your images with a version number... Because the :latest tag is a moving pointer, it's difficult to track which version of an image is running and it's difficult to roll back." This directly invalidates the approach in option D.

Source: Google Cloud Documentation, "Best practices for building containers", section "Tagging images".

3. Reliable Deployments with GKE: A Kubernetes Deployment is a declarative object. To reliably deploy a new version, you update the image tag in the Deployment's Pod template. Kubernetes then manages a controlled rolling update. "When you update the Pod template for a Deployment, the Deployment triggers a rollout to update its Pods to the new version." This is the correct, reliable method, contrasting with the manual pod restart suggested in option D.

Source: Google Cloud Documentation, "Updating a Deployment".

4. Using Namespaces for Environments: The official Kubernetes documentation (which GKE is built upon) describes using namespaces to isolate environments like staging and production. "Namespaces are intended for use in environments with many users spread across multiple teams, or projects." This supports the robust promotion workflow described in option C.

Source: Kubernetes.io Documentation, "Namespaces".
