Google Cloud Architect Exam Questions 2025

Our Google Cloud Architect Exam Questions provide authentic, up-to-date content for the Google Cloud Certified – Professional Cloud Architect certification. Each question is reviewed by certified Google Cloud professionals and includes verified answers with detailed explanations to enhance your understanding of designing, developing, and managing scalable cloud solutions on Google Cloud. With access to our exam simulator, you can practice under real exam conditions and confidently prepare to pass on your first attempt.

Exam Questions

Question 1

A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform. What should you do?
Options
A: Help the engineer to convert his websocket code to use HTTP streaming.
B: Review the encryption requirements for websocket connections with the security team.
C: Meet with the cloud operations team and the engineer to discuss load balancer options.
D: Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
Show Answer
Correct Answer:
Meet with the cloud operations team and the engineer to discuss load balancer options.
Explanation
The application's architecture has two specific requirements that must be addressed at the infrastructure level: support for the WebSocket protocol and a mechanism to handle non-distributed HTTP sessions. The latter requirement implies the need for session affinity (or "sticky sessions"), ensuring that requests from a specific user are consistently routed to the same backend server. Google Cloud Load Balancing services are designed to manage this type of traffic distribution. Specifically, services like the External HTTP(S) Load Balancer natively support the WebSocket protocol and provide multiple options for configuring session affinity (e.g., based on generated cookies, client IP, or HTTP headers). Therefore, the most direct and appropriate action is to discuss these load balancer options to find a configuration that supports the application's existing design without requiring a code rewrite.
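For illustration only, session affinity could be enabled on an existing backend service with the gcloud CLI roughly as follows; the backend service name, affinity type, and timeout values are assumptions, not details given in the question:

    # Route each client to the same backend instance via a generated cookie,
    # and raise the backend timeout to accommodate long-lived WebSocket connections.
    gcloud compute backend-services update my-web-backend \
        --global \
        --session-affinity=GENERATED_COOKIE \
        --affinity-cookie-ttl=3600s \
        --timeout=86400s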
Why Incorrect Options are Wrong

A. This suggests an unnecessary application redesign. Google Cloud's load balancers provide native support for websockets, so converting the code to a different protocol is not required.

B. While security is important, reviewing encryption does not solve the core architectural problem of routing traffic correctly to support websockets and non-distributed sessions.

D. This proposes a significant and costly application redesign as the first step. A Cloud Architect should first seek to support the existing application requirements with appropriate infrastructure before recommending a complete rewrite.

References

1. Google Cloud Documentation, "External HTTP(S) Load Balancer overview": This document explicitly states the load balancer's capabilities. Under the "Features" section, it lists "WebSocket support." The documentation explains: "Google Cloud Armor and Cloud CDN can be used with WebSockets. The WebSocket protocol... provides a full-duplex communication channel between a client and a server. The channel is initiated from an HTTP(S) request. The External HTTP(S) Load Balancer has native support for the WebSocket protocol."

2. Google Cloud Documentation, "Session affinity": This page details how to configure session affinity for various load balancers. For the Global External HTTP(S) Load Balancer, it states: "Session affinity sends all requests from the same client to the same virtual machine (VM) instance or endpoint... This is useful for applications that require stateful sessions." It then describes the different types, including Generated cookie affinity, Header field affinity, and HTTP cookie affinity, which directly address the "non-distributed sessions" requirement.

3. Google Cloud Architecture Framework, "System design pillar": This framework emphasizes selecting the right Google Cloud products and services to meet design requirements. The "Networking and connectivity principles" section guides architects to "Choose the right load balancer for your needs." This aligns with option C, which involves evaluating load balancer options to fit the application's specific websocket and session state requirements.

Question 2

You want to enable your running Google Container Engine cluster to scale as demand for your application changes. What should you do?
Options
A: Add additional nodes to your Container Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B: Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable --autoscaling max-nodes-10
C: Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D: Create a new Container Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application.
Show Answer
Correct Answer:
Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
Explanation
The most direct and appropriate method to enable automatic scaling on an existing, running Google Kubernetes Engine (GKE) cluster is to update its configuration. The gcloud container clusters update command is designed for this purpose. Using the --enable-autoscaling flag along with --min-nodes and --max-nodes parameters allows the cluster autoscaler to be activated and configured with the desired scaling boundaries. This modifies the cluster in-place without requiring workload migration.
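As a hedged illustration, the full update command might look like the following; the cluster name and zone are placeholders, and in current gcloud releases the command is generally available, so the alpha prefix shown in the original option is no longer required:

    # Enable the cluster autoscaler on an existing GKE cluster with node-count bounds.
    gcloud container clusters update my-cluster \
        --zone=us-central1-a \
        --enable-autoscaling \
        --min-nodes=1 \
        --max-nodes=10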
Why Incorrect Options are Wrong

A: The resize command performs a one-time, manual scaling operation to a fixed number of nodes. It does not enable the cluster to scale automatically based on workload demand.

B: Adding tags to individual Compute Engine instances is used for networking rules or organization, not for enabling the GKE cluster autoscaler, which is a managed feature of the cluster itself.

D: This command creates an entirely new cluster. While it correctly enables autoscaling, it does not modify the running cluster as requested and would require a disruptive migration of all applications.

References

1. Google Cloud Documentation - Autoscaling a cluster: The official documentation explicitly provides the command for enabling cluster autoscaling on an existing cluster: gcloud container clusters update CLUSTER_NAME --enable-autoscaling --min-nodes=MIN_NODES --max-nodes=MAX_NODES. This directly supports option C as the correct procedure. (See "Enabling cluster autoscaling for an existing cluster" section).

Source: Google Cloud, "Autoscaling a cluster", cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler.

2. Google Cloud SDK Documentation - gcloud container clusters update: The reference for this command confirms its purpose is to "Update settings for a cluster" and lists --enable-autoscaling, --max-nodes, and --min-nodes as valid flags for managing the autoscaler.

Source: Google Cloud SDK, "gcloud container clusters update", cloud.google.com/sdk/gcloud/reference/container/clusters/update.

3. Google Cloud SDK Documentation - gcloud container clusters resize: This documentation clarifies that the resize command is for manual scaling: "This command is used for manual scaling. You can use this command to increase or decrease the number of nodes in a cluster." This confirms why option A is incorrect for automatic scaling.

Source: Google Cloud SDK, "gcloud container clusters resize", cloud.google.com/sdk/gcloud/reference/container/clusters/resize.

Question 3

You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace (shown as an image in the original exam question). What should you do?
Options
A: Recompile the CloakedServlet class using an MD5 hash instead of SHA1
B: Digitally sign all of your JAR files and redeploy your application.
C: Upload missing JAR files and redeploy your application
Show Answer
Correct Answer:
Digitally sign all of your JAR files and redeploy your application.
Explanation
The stack trace indicates a java.lang.SecurityException because the SHA1 digest of a JAR file does not match the expected digest recorded in the signed JAR's metadata under META-INF/. This is a file integrity verification failure: the App Engine runtime cannot verify that the JAR file is authentic and has not been tampered with. Digitally signing the JAR files with a trusted certificate regenerates the manifest digests and signature that guarantee their integrity. Redeploying with properly signed JARs allows the runtime to verify the files successfully, resolving the security exception.
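To make the remediation concrete, a minimal sketch with the JDK's jarsigner tool is shown below; the keystore, alias, and output file names are illustrative:

    # Sign the affected library JAR with a key from a local keystore.
    jarsigner -keystore my-keystore.jks \
        -signedjar javax.servlet-api-3.0.1-signed.jar \
        javax.servlet-api-3.0.1.jar my-key-alias

    # Verify the digests and signature before redeploying the application.
    jarsigner -verify -verbose javax.servlet-api-3.0.1-signed.jar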
Why Incorrect Options are Wrong

A. The error is with the javax.servlet-api-3.0.1.jar file's integrity, not a custom class. Changing the hash algorithm for a single class is irrelevant to the JAR verification process.

C. The error is a digest mismatch (does not match), not a ClassNotFoundException. This confirms the file is present but has failed an integrity check, rather than being missing.

References

1. Oracle Java Documentation, "Signing and Verifying JAR Files": This document explains the purpose of JAR signing. It states, "You can sign JAR files to ensure their integrity and authenticity... When a signed JAR file is loaded, the Java runtime can verify the signature to ensure that the file's contents have not been changed since it was signed." The SecurityException in the question is a direct result of this verification failing. (Source: Oracle, JDK 8 Documentation, The Java Tutorials, Deployment, Signing and Verifying JAR Files).

2. Google Cloud Documentation, "Java 8 Runtime Environment": The error log references the WEB-INF/lib/ directory, which is a standard part of the required application directory structure for Java applications on App Engine. This confirms the context is a standard Java web application deployment where such integrity checks are common. (Source: Google Cloud, App Engine standard environment for Java 8 documentation, "The Java 8 Runtime Environment", Section: "Organizing your files").

3. Princeton University, COS 432: Information Security, Lecture 18: Java Security: This courseware discusses the Java security model, including the sandbox, SecurityManager, and code signing. It explains that the SecurityManager is responsible for throwing a SecurityException when a security policy is violated, such as when code integrity cannot be verified via its signature. (Source: Princeton University, Department of Computer Science, COS 432, Lecture 18, Slides 15-18 on "Code Signing").

Question 4

You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?

Options
A: Tag messages client side with the originating user identifier and the destination user.
B: Encrypt the message client side using block-based encryption with a shared key.
C: Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
D: Use a trusted certificate authority to enable SSL connectivity between the client application and the server.

Show Answer
Correct Answer:
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
Explanation
The question requires a solution to prevent message spoofing by proving a message was sent by a specific user. This is a classic use case for digital signatures, which provide authenticity and non-repudiation. The process described in option C, using Public Key Infrastructure (PKI) to encrypt with the originating user's private key, is the definition of creating a digital signature. The recipient can then use the sender's public key to verify the signature, confirming that only the holder of the private key could have sent the message. This cryptographically proves the sender's identity and prevents them from denying they sent the message.
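A minimal sketch of this sign-and-verify flow using OpenSSL is shown below; the key and file names are illustrative, and a production chat client would typically use a platform crypto library rather than shelling out to OpenSSL:

    # Sender: create a detached signature of the message with their private key.
    openssl dgst -sha256 -sign sender_private.pem -out message.sig message.txt

    # Recipient: verify the signature with the sender's public key.
    openssl dgst -sha256 -verify sender_public.pem -signature message.sig message.txt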
Why Incorrect Options are Wrong

A. Tagging messages with a user identifier on the client side is insecure metadata. A malicious user can easily modify the client application to forge this tag.

B. Shared key (symmetric) encryption provides confidentiality, ensuring only those with the key can read the message. It does not prove origin, as anyone with the shared key could have created the message.

D. SSL/TLS secures the communication channel between the client and the server (data in transit). It does not cryptographically sign the individual messages to prove the user's identity to other chat participants.

References

1. Google Cloud Documentation, Cloud Key Management Service, "Digital signatures": "Digital signatures are commonly used to verify the integrity and authenticity of data. For example, you can use digital signatures to verify that a binary was released by a specific developer... A private key is used to create a digital signature, and the corresponding public key is used to validate the signature." This directly supports the mechanism described in option C for proving origin.

2. Google Cloud Documentation, Cloud Key Management Service, "Asymmetric encryption": This document distinguishes between symmetric and asymmetric keys, stating, "Asymmetric keys can be used for either asymmetric encryption or for creating digital signatures." This clarifies that the PKI approach (asymmetric keys) is the correct tool for signatures, unlike the symmetric approach in option B.

3. MIT OpenCourseWare, 6.857 Computer and Network Security, Fall 2017, Lecture 8: Public-Key Cryptography: The lecture notes state that a digital signature scheme provides "(1) Authentication (of origin), (2) Integrity (of message), (3) Non-repudiation (by origin)." The process is defined as Sign(SK, M) where SK is the secret (private) key, which aligns perfectly with option C's methodology.

4. Google Cloud Security Whitepaper, "Encryption in Transit in Google Cloud": This paper details how Google Cloud uses TLS to secure data in transit (Section: "Default encryption in transit"). This supports the reasoning for why option D is incorrect, as TLS secures the transport layer between two points (e.g., client and server), not the authenticity of the application-layer message itself for end-to-end verification between users.

Question 5

You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it, and investigation could take up to a week. What should you do?

Options
A: Log in to a server, and iterate a fix locally
B: Change the instance group template to the previous one, and delete all instances.
C: Revert the source code change and rerun the deployment pipeline
D: Log into the servers with the bad code change, and swap in the previous code

Show Answer
Correct Answer:
C. Revert the source code change and rerun the deployment pipeline
Explanation
The most effective and reliable strategy is to treat the rollback as a new deployment. By reverting the problematic change in the source code repository, you maintain a clear and auditable history of the application's state. Rerunning the deployment pipeline leverages your existing automation to build and deploy the last known-good version of the code. This ensures the change is applied consistently across all instances in the self-healing group and aligns with DevOps and Continuous Integration/Continuous Deployment (CI/CD) best practices, where the source control system is the single source of truth.
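In a Git-based pipeline, the rollback might look like the following sketch; the commit hash and branch name are placeholders:

    # Create a new commit that undoes the problematic change, preserving history.
    git revert abc1234

    # Push the revert; the CI/CD pipeline rebuilds and redeploys the last known-good version.
    git push origin main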
Why Incorrect Options are Wrong

A. Logging in to a server to iterate a fix is a manual process that is not scalable and will be undone when the instance group's self-healing mechanism replaces the instance.

B. While changing the instance template would work, it's a manual infrastructure-level intervention. The root cause is the application code, and the best practice is to fix the source of truth (the code) and let the pipeline manage the infrastructure changes.

D. Manually swapping code on live servers is an anti-pattern. It is not repeatable, not auditable, and any changes will be lost during self-healing events or subsequent automated deployments.

References

1. Google Cloud Documentation, DevOps tech: Continuous delivery: "A key goal of continuous delivery is to make your release process a low-risk event that you can perform at any time and on demand. ... Because you are deploying smaller changes, you can more easily pinpoint and address bugs and roll back changes if necessary." This supports the principle of rolling back a problematic change through the established process.

Source: Google Cloud, "DevOps tech: Continuous delivery", Section: "What is continuous delivery?".

2. Google, Site Reliability Engineering, Chapter 8 - Release Engineering: "A key feature of our release system is the ability to quickly and safely roll back a release that is found to be bad. ... Rollbacks use the same infrastructure as rollouts, but in reverse." This highlights the best practice of using the same automated system (the pipeline) for rollbacks as for deployments, which is achieved by reverting the code and re-running the pipeline.

Source: Beyer, B., Jones, C., Petoff, J., & Murphy, N. R. (2016). Site Reliability Engineering: How Google Runs Production Systems. O'Reilly Media. Chapter 8, Section: "Rollout Policies".

3. Google Cloud Documentation, Managed instance groups (MIGs): "A managed instance group (MIG) ... maintains high availability of your apps by proactively keeping your VMs (instances) running. If a VM in the group stops, crashes, or is deleted... the MIG automatically recreates it in accordance with the group's instance template". This confirms that any manual changes made directly to an instance (as suggested in options A and D) will be lost.

Source: Google Cloud, "Managed instance groups (MIGs)", Section: "High availability".

Question 6

Your organization wants to control IAM policies for different departments independently, but centrally. Which approach should you take?
Options
A: Multiple Organizations with multiple Folders
B: Multiple Organizations, one for each department
C: A single Organization with a Folder for each department
D: A single Organization with multiple projects, each with a central owner
Show Answer
Correct Answer:
A single Organization with a Folder for each department
Explanation
The Google Cloud resource hierarchy is designed for this exact use case. A single Organization node serves as the root, enabling centralized governance and application of organization-wide IAM policies. Folders are used to group resources, such as projects, that share common IAM policies. By creating a Folder for each department, you can delegate administrative rights to departmental teams, allowing them to manage their own projects and resources independently within their Folder. This structure provides both the central control required by the organization and the departmental autonomy needed for efficient operations.
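For illustration, a department folder and a folder-scoped IAM binding could be created roughly as follows; the organization ID, folder ID, group, and role are assumptions:

    # Create a folder for the Finance department under the organization node.
    gcloud resource-manager folders create \
        --display-name="finance" \
        --organization=123456789012

    # Delegate project creation within that folder to the department's admin group.
    gcloud resource-manager folders add-iam-policy-binding 987654321098 \
        --member="group:finance-admins@example.com" \
        --role="roles/resourcemanager.projectCreator"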
Why Incorrect Options are Wrong

A. Multiple Organizations with multiple Folders: Using multiple Organizations breaks the principle of central control, as each Organization is a distinct root entity with its own separate policies.

B. Multiple Organizations, one for each department: This approach completely fragments governance, creating isolated silos for each department and making central IAM policy enforcement impossible.

D. A single Organization with multiple projects, each with a central owner: While this provides central control, it lacks the intermediate grouping layer (Folders) for departments, making policy management at scale inefficient and difficult to delegate.

References

1. Google Cloud Documentation, Resource Manager, "Cloud Platform resource hierarchy": "Folders are an additional grouping mechanism on top of projects... Folders are commonly used to model different departments, teams, or legal entities within a company. For example, a first level of folders could represent the major departments in your organization." This directly supports using folders to represent departments.

2. Google Cloud Documentation, Resource Manager, "Creating and managing folders", Section: "Using folders for access control": "You can use folders to isolate resources for different departments... Access to resources can be limited by department by assigning IAM roles at the folder level. All projects and folders within a parent folder inherit the IAM policies of that folder." This confirms that folders are the correct tool for delegating departmental control.

3. Google Cloud Documentation, IAM, "Policy inheritance": "The resource hierarchy for policy evaluation includes the organization, folders, projects, and resources... The child resource inherits the parent's policy." This explains the mechanism for central control from the Organization node downwards.

4. Google Cloud Architecture Framework, "Design a resource hierarchy for your Google Cloud landing zone": In the "Folder structure" section, a common pattern recommended is: "A folder for each department, such as Dept-A and Dept-B." This establishes the chosen answer as a documented best practice.

Question 7

A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network's origin. What should you do?
Options
A: Search for Create VM entry in the Stackdriver alerting console.
B: Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry.
C: In the logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry.
D: Connect to the GCE instance using project SSH Keys. Identify previous logins in system logs, and match these with the project owners list.
Show Answer
Correct Answer:
In the logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry.
Explanation
To investigate the origin of a newly created Google Cloud resource, the correct tool is Cloud Logging, which captures Admin Activity audit logs. These logs record API calls that modify the configuration or metadata of resources, such as creating a VPC network. By filtering the logs in the Logs Explorer for the resource type gce_network and the API method that creates the network (compute.networks.insert), you can pinpoint the exact log entry. This entry will contain the identity of the principal (user, group, or service account) that made the call, the source IP address, and the timestamp, thereby revealing the network's origin.
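One hedged way to run the same query from the command line is shown below; the project ID and lookback window are placeholders:

    # Find the Admin Activity audit log entry that recorded the network creation.
    gcloud logging read \
        'resource.type="gce_network" AND protoPayload.methodName="v1.compute.networks.insert"' \
        --project=my-project \
        --freshness=30d \
        --format="value(protoPayload.authenticationInfo.principalEmail, timestamp)"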
Why Incorrect Options are Wrong

A. The Cloud Monitoring alerting console is used for configuring and viewing alerts based on metrics, logs, and uptime checks, not for retrospectively searching historical audit logs for specific events.

B. Resource creation events are captured in Admin Activity logs, not Data Access logs. Data Access logs track when data within a resource is read or written, which is not relevant here.

D. Checking system logs on the GCE instance would only show who has logged into the virtual machine, not who created the underlying network or the instance itself. The creator may have never accessed the instance.

References

1. Cloud Audit Logs Overview: "Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions." This confirms that creating a network is an Admin Activity.

Source: Google Cloud Documentation, "Cloud Audit Logs", Overview section.

2. Querying Logs in Logs Explorer: The Logs Explorer allows for building queries to find specific log entries. A query to find network creation events would look like: resource.type="gce_network" AND protoPayload.methodName="v1.compute.networks.insert". This demonstrates the method described in the correct answer.

Source: Google Cloud Documentation, "Build queries in the Logs Explorer", Query for a resource type and log name section.

3. Compute Engine Audited Operations: The official documentation lists the specific API methods that are logged. For creating a network, the method is v1.compute.networks.insert. This validates that searching for an "insert" entry for a GCE Network is the correct procedure.

Source: Google Cloud Documentation, "Compute Engine audit logging information", Audited operations table.

Question 8

As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication. What should they do?
Options
A: Configure their replication to use UDP.
B: Configure a Google Cloud Dedicated Interconnect.
C: Restore their database daily using Google Cloud SQL.
D: Add additional VPN connections and load balance them.
E: Send the replicated transaction to Google Cloud Pub/Sub.
Show Answer
Correct Answer:
Configure a Google Cloud Dedicated Interconnect.
Explanation
The core issue is the use of a Cloud VPN connection over the public internet for a latency-sensitive and loss-intolerant workload like production database replication. The public internet provides "best-effort" delivery, which leads to unpredictable latency and packet loss, disrupting the replication process. Google Cloud Dedicated Interconnect establishes a private, direct physical connection to Google's network, bypassing the public internet. This provides a stable, low-latency, and high-bandwidth connection with a Service Level Agreement (SLA), making it the appropriate solution for reliable, enterprise-grade disaster recovery scenarios.
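For context, once the physical Dedicated Interconnect circuit has been provisioned, the VLAN attachment that carries the replication traffic might be created roughly as follows; the attachment, router, region, and interconnect names are assumptions, and the physical circuit itself must be ordered and verified separately:

    # Create a VLAN attachment that connects the on-premises network to the VPC
    # over the provisioned Dedicated Interconnect.
    gcloud compute interconnects attachments dedicated create dr-attachment \
        --region=us-central1 \
        --router=dr-router \
        --interconnect=my-interconnect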
Why Incorrect Options are Wrong

A. Configure their replication to use UDP.

UDP is an unreliable, connectionless protocol that does not guarantee packet delivery or order. Database replication requires the guaranteed, in-order delivery provided by TCP to prevent data corruption.

C. Restore their database daily using Google Cloud SQL.

This is a backup-and-restore strategy, not replication. A daily restore implies a Recovery Point Objective (RPO) of up to 24 hours, which is typically unacceptable for a production database DR plan.

D. Add additional VPN connections and load balance them.

While this can increase aggregate throughput and add redundancy, all connections still traverse the unreliable public internet. It does not solve the fundamental problems of inconsistent latency and packet loss for a single replication stream.

E. Send the replicated transaction to Google Cloud Pub/Sub.

This introduces significant architectural complexity, requiring custom producers and consumers. It does not address the underlying network instability between the on-premises data center and GCP, which is the root cause of the problem.

References

1. Google Cloud Documentation, "Choosing a Network Connectivity product": This document directly compares Cloud VPN and Cloud Interconnect. It states, "Cloud Interconnect provides low latency, high availability connections that enable you to reliably transfer data between your on-premises and Virtual Private Cloud (VPC) networks." It also notes for Cloud VPN, "Because this connection traverses the internet, its performance...can be inconsistent." This supports choosing Interconnect (B) over VPN (D).

2. Google Cloud Documentation, "Dedicated Interconnect overview": This page highlights the key benefits of Dedicated Interconnect: "Lower latency. Traffic between your on-premises and VPC networks doesn't touch the public internet. Instead, it travels over a dedicated connection with lower latency." This directly addresses the problem described in the question.

3. MySQL 8.0 Reference Manual, "Section 19.1.1 Replication Implementation Overview": The MySQL documentation describes the replication process where a replica's I/O thread connects to the source server over the network to read the binary log. This connection relies on a stable, reliable transport protocol like TCP, making UDP (A) an unsuitable choice.

4. Kurose, J. F., & Ross, K. W. (2017). Computer Networking: A Top-Down Approach (7th ed.). Pearson. In Chapter 3, Section 3.3, UDP is described as providing an "unreliable data transfer service," while Section 3.5 describes TCP's "reliable data transfer" service, reinforcing why UDP is incorrect for database replication.

Question 9

Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?
Options
A: Hash all data using SHA256
B: Encrypt all data using elliptic curve cryptography
C: De-identify the data with the Cloud Data Loss Prevention API
D: Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers
Show Answer
Correct Answer:
De-identify the data with the Cloud Data Loss Prevention API
Explanation
The most appropriate and recommended approach is to use the Cloud Data Loss Prevention (DLP) API. This is a fully managed Google Cloud service specifically designed to discover, classify, and de-identify sensitive data like Personally Identifiable Information (PII) and Payment Card Information (PCI) within unstructured text streams or storage. Cloud DLP uses pre-trained classifiers (infoType detectors) to accurately identify sensitive data and offers various transformation techniques such as redaction, masking, and tokenization. This allows for the sanitization of sensitive information while preserving the utility of the non-sensitive data for analysis, directly addressing the question's requirements before the data is stored in Bigtable.
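As a rough sketch, each conversation snippet could be passed through the DLP API's content:deidentify method before it is written to Bigtable; the project ID, infoTypes, and transformation below are assumptions chosen for illustration:

    # De-identify a chat snippet with the Cloud DLP REST API before storage.
    curl -s -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
        -d '{
          "item": {"value": "Call me at 555-0100, card 4111-1111-1111-1111"},
          "inspectConfig": {"infoTypes": [{"name": "PHONE_NUMBER"}, {"name": "CREDIT_CARD_NUMBER"}]},
          "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
            {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}]}}
        }'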
Why Incorrect Options are Wrong

A. Hash all data using SHA256: Hashing the entire log entry would render the non-sensitive data unusable for the stated purpose of retention and analysis, as it's a one-way, irreversible process.

B. Encrypt all data using elliptic curve cryptography: Encryption protects data but does not sanitize or de-identify it. The sensitive information still exists in an encrypted form and is not removed or transformed for analysis.

D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers: This approach is brittle, difficult to maintain, and prone to errors. It cannot reliably detect all formats of sensitive data (e.g., international phone numbers, names) or validate them (e.g., credit card checksums), unlike the managed and sophisticated detectors in Cloud DLP.

References

1. Cloud Data Loss Prevention Documentation - De-identification of sensitive data: "You can use Cloud DLP to de-identify sensitive data in your content. De-identification is the process of removing identifying information from data. Its goal is to allow the sharing and use of personal data while protecting privacy." This page details methods like redaction and masking, which are ideal for the scenario.

Source: Google Cloud Documentation, "De-identification of sensitive data".

2. Cloud Data Loss Prevention Documentation - InfoType detector reference: This document lists the extensive built-in detectors for PII and PCI data, such as CREDIT_CARD_NUMBER, EMAIL_ADDRESS, and PHONE_NUMBER. This demonstrates why Cloud DLP is superior to custom regular expressions.

Source: Google Cloud Documentation, "InfoType detector reference".

3. Google Cloud Architecture Framework - Security, privacy, and compliance: "Use Cloud DLP to discover, classify, and redact sensitive data. For example, you can use Cloud DLP to find and redact credit card numbers from a chat transcript before you store the transcript." This directly cites the use case described in the question as a recommended practice.

Source: Google Cloud Architecture Framework, Security pillar, "Implement least privilege".

Question 10

You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution path and persists across sessions?
Options
A: ~/bin
B: Cloud Storage
C: /google/scripts
D: /usr/local/bin
Show Answer
Correct Answer:
~/bin
Explanation
Google Cloud Shell provides a 5 GB persistent disk mounted as your $HOME directory. This ensures that any files and configurations within $HOME are preserved across sessions. The Cloud Shell environment is pre-configured so that if a ~/bin directory exists, it is automatically added to the system's execution path ($PATH). Therefore, placing a custom utility in ~/bin meets both requirements: the file will persist for future use, and it can be executed directly by name from any location without specifying the full path.
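A minimal sketch of installing such a utility is shown below; the utility name is a placeholder:

    # Create the persistent bin directory in the home directory and install the tool.
    mkdir -p ~/bin
    cp ./my-utility ~/bin/
    chmod +x ~/bin/my-utility

    # Re-read ~/.profile so the newly created ~/bin is added to $PATH in this session;
    # new sessions pick it up automatically.
    source ~/.profile
    my-utility --help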
Why Incorrect Options are Wrong

B. Cloud Storage: This is an object storage service, not a local filesystem directory in the execution path. The utility would need to be downloaded to the VM before it could be run.

C. /google/scripts: This directory is part of the ephemeral Cloud Shell virtual machine instance. Any files placed here will be lost when your session ends.

D. /usr/local/bin: While this directory is in the default execution path, it resides on the ephemeral VM's filesystem. It does not persist across sessions, so the utility would be deleted.

References

1. Google Cloud Documentation, "How Cloud Shell works", Section: Persistent disk storage: "Cloud Shell provisions 5 GB of persistent disk storage on your temporarily allocated virtual machine. This storage is located at your $HOME directory and persists between sessions... Any modifications you make to your home directory, including installed software, scripts, and user configuration files like .bashrc and .vimrc, persist between sessions."

2. Google Cloud Documentation, "Cloud Shell features", Section: A pre-configured gcloud CLI and other utilities: This section implies a standard Linux environment. In standard Linux configurations (like the Debian base for Cloud Shell), the default .profile script adds $HOME/bin to the $PATH if the directory exists. This behavior, combined with the persistence of the $HOME directory, makes ~/bin the correct location.

3. Google Cloud Documentation, "Customizing your Cloud Shell environment": This guide explains how to make persistent customizations. It states, "When you start Cloud Shell, a bash shell is run and any commands in ~/.bashrc and ~/.profile are executed." This confirms that standard shell startup scripts, which typically configure the path to include ~/bin, are honored and persist.

Total Questions: 276
Last Update Check: November 13, 2025
Online Simulator & PDF Downloads
50,000+ Students Helped So Far
$30.00 (regular price $50.00, 40% off)
Rated 4.9 out of 5 (14 reviews)

Instant Download & Simulator Access

Secure SSL Encrypted Checkout

100% Money Back Guarantee

What Users Are Saying:

Rated 5 out of 5

“The practice questions were spot on. Felt like I had already seen half the exam. Passed on my first try!”

Sarah J. (Verified Buyer)

Download Free Demo PDF | Free Cloud-Architect Practice Test