Free Practice Test

Free Associate Cloud Engineer Exam Questions – 2025 Updated

Study Smarter for the Associate Cloud Engineer Exam with Our Free and Reliable Cloud Engineer Exam Questions – Updated for 2025.

At Cert Empire, we are focused on delivering the most accurate and up-to-date exam questions for students preparing for the Google Associate Cloud Engineer Exam. To make preparation easier, we've made parts of our Associate Cloud Engineer exam resources free for everyone. You can practice as much as you like with our free Google Associate Cloud Engineer practice test.

Question 1

You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company's security policies only allow the use of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a Cloud Storage bucket within your project. What should you do?
Options
A: Enable Private Service Access on the Cloud Storage Bucket.
B: Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
C: Enable Private Google Access on the subnet within the custom VPC.
D: Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.
Show Answer
Correct Answer:
Enable Private Google Access on the subnet within the custom VPC.
Explanation
Private Google Access allows virtual machine (VM) instances with only internal IP addresses to reach the external IP addresses of Google APIs and services, including Cloud Storage. When enabled on a subnet, it provides a path for traffic from the VMs to services like storage.googleapis.com without requiring an external IP address on the VM or routing traffic through the public internet. The traffic remains within Google's private network, satisfying the security requirement of no internet connectivity while enabling access to necessary Google services.
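
For illustration, here is a minimal sketch of enabling this on an existing subnet from the gcloud CLI; the subnet name and region below are placeholders:

    # Enable Private Google Access on the subnet (names are placeholders)
    gcloud compute networks subnets update my-subnet \
        --region=us-central1 \
        --enable-private-ip-google-access
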
Why Incorrect Options are Wrong

A. Enable Private Service Access on the Cloud Storage Bucket.

Private Service Access is used to connect your VPC to Google-managed services (like Cloud SQL) that reside in a separate, Google-owned VPC. It is not applicable for accessing global APIs like Cloud Storage.

B. Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list to protected projects.

VPC Service Controls is a security feature to prevent data exfiltration by creating a service perimeter. It does not provide the network connectivity needed for an internal-only VM to reach the service in the first place.

D. Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.

Cloud NAT is primarily used to provide instances without external IPs with outbound access to the public internet. This violates the stated security policy. Private Google Access is the specific feature for accessing Google APIs privately.

References

1. Google Cloud Documentation, "Private Google Access overview": "With Private Google Access, VMs that only have internal IP addresses (no external IP addresses) can reach the external IP addresses of Google APIs and services. [...] You can use Private Google Access to access the external IP addresses of most Google APIs and services, including Cloud Storage..." This directly supports option C.

2. Google Cloud Documentation, "Choose a Cloud NAT product": "Cloud NAT enables Google Cloud virtual machine (VM) instances without external IP addresses and GKE clusters to connect to the internet." This confirms that Cloud NAT is for internet access, making option D incorrect as it violates the policy.

3. Google Cloud Documentation, "Private Service Access": "Private service access is a private connection between your VPC network and a network in a Google or third-party service. [...] For example, you can use private service access to connect to Cloud SQL..." This shows that option A is for a different type of service connection.

4. Google Cloud Documentation, "VPC Service Controls overview": "VPC Service Controls helps you mitigate the risk of data exfiltration from your Google-managed services..." This confirms that VPC Service Controls (option B) is a security measure, not a connectivity solution for this scenario.

Question 2

Your company completed the acquisition of a startup and is now merging the IT systems of both companies. The startup had a production Google Cloud project in their organization. You need to move this project into your organization and ensure that the project is billed to your organization. You want to accomplish this task with minimal effort. What should you do?
Options
A: Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.
B: Ensure that you have an Organization Administrator Identity and Access Management (IAM) role assigned to you in both organizations. Navigate to the Resource Manager in the startup's Google Cloud organization, and drag the project to your company's organization.
C: Create a Private Catalog for the Google Cloud Marketplace, and upload the resources of the startup's production project to the Catalog. Share the Catalog with your organization, and deploy the resources in your company's project.
D: Create an infrastructure-as-code template for all resources in the project by using Terraform, and deploy that template to a new project in your organization. Delete the project from the startup's Google Cloud organization.
Show Answer
Correct Answer:
Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.
Explanation
The most direct and efficient method to transfer a project between organizations is using the Resource Manager's project move functionality. This is accomplished via the projects.move API method, which is also used by the gcloud projects move command. This process moves the project and all its resources, policies, and configurations intact. However, moving a project does not automatically change its associated billing account. To fulfill the requirement that the project is billed to the new organization, you must perform a second, distinct step: updating the project's billing account to one associated with the new organization. This two-step process is the standard procedure and represents the minimal effort required.
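
As a sketch of the two steps (the project ID, organization ID, and billing account ID are placeholders; depending on your gcloud release, the billing command may be available only in a recent version):

    # Step 1: move the project into the destination organization
    gcloud projects move startup-prod-project --organization=123456789012

    # Step 2: link the project to your organization's billing account
    gcloud billing projects link startup-prod-project \
        --billing-account=0X0X0X-0X0X0X-0X0X0X
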
Why Incorrect Options are Wrong

B: This option is incomplete. While having the correct permissions (like Organization Administrator) is necessary for the move, it omits the explicit and critical step of changing the project's billing account.

C: This is an overly complex and incorrect approach. Private Catalog is for managing and deploying approved solutions, not for migrating existing production projects. This method would require recreating all resources, leading to significant effort and downtime.

D: This method involves recreating the entire project's infrastructure from code, which is the opposite of "minimal effort." It doesn't move the existing project but rather creates a new one, requiring a separate, complex data migration strategy and causing service disruption.


References

1. Official Google Cloud Documentation - Moving a project:

"When you move a project, its original billing account will continue to be used... To change the billing account, you must have the billing.projectManager role on the destination billing account and the resourcemanager.projectBillingManager role on the project." This confirms that moving the project and changing the billing account are two separate, required steps.

Source: Google Cloud Documentation, Resource Manager, "Moving a project", Section: "Effect on billing".

2. Official Google Cloud Documentation - gcloud projects move:

The command gcloud projects move is the command-line interface for the projects.move API method. The documentation outlines the process for moving a project to a new organization or folder.

Source: Google Cloud SDK Documentation, gcloud projects move.

3. Official Google Cloud Documentation - Modifying a project's billing account:

"You can change the billing account that is used to pay for a project." This page details the permissions and steps required to link a project to a different billing account, confirming it is a distinct action from moving the project's resource hierarchy.

Source: Google Cloud Billing Documentation, "Enable, disable, or change billing for a project".

Question 3

All development (dev) teams in your organization are located in the United States. Each dev team has its own Google Cloud project. You want to restrict access so that each dev team can only create cloud resources in the United States (US). What should you do?
Options
A: Create a folder to contain all the dev projects. Create an organization policy to limit resources to US locations.
B: Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.
C: Create an Identity and Access Management (IAM) policy to restrict the resource locations to the US. Apply the policy to all dev projects.
D: Create an Identity and Access Management (IAM) policy to restrict the resource locations in all dev projects. Apply the policy to all dev roles.
Show Answer
Correct Answer:
Create a folder to contain all the dev projects. Create an organization policy to limit resources to US locations.
Explanation
The Google Cloud Organization Policy Service is designed to provide centralized, programmatic control over an organization's cloud resources. To restrict the physical location of newly created resources, the gcp.resourceLocations constraint should be used. The most efficient and scalable method to apply this policy to a specific group of projects (like all development projects) is to group them into a folder. The organization policy is then applied to this folder, and all projects within it will inherit the constraint, ensuring resources are only created in the specified US locations.
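
As an illustrative sketch (the folder ID is a placeholder; in:us-locations is the predefined value group that covers US locations):

    # Restrict resource creation under the dev folder to US locations
    gcloud resource-manager org-policies allow gcp.resourceLocations \
        in:us-locations --folder=123456789012
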
Why Incorrect Options are Wrong

B. Identity and Access Management (IAM) policies grant permissions to principals (who can do what), but they do not enforce constraints on resource attributes like location (where).

C. This is incorrect for the same reason as B; IAM is the wrong tool for enforcing location-based restrictions. This is the specific purpose of the Organization Policy Service.

D. This is incorrect for two reasons: IAM does not control resource locations, and policies are bound to resources (like projects or folders), not to roles.

References

1. Google Cloud Documentation, "Restricting resource locations": "To restrict the locations where your organization's resources can be created, you can use a resource locations organization policy. The resource locations organization policy constraint is gcp.resourceLocations." This document explicitly states that the gcp.resourceLocations constraint is the correct tool.

2. Google Cloud Documentation, "Organization Policy Constraints", gcp.resourceLocations: "Defines the set of locations where location-based Google Cloud resources can be created... This constraint will be checked at resource creation time." This confirms the specific constraint and its function.

3. Google Cloud Documentation, "Resource hierarchy for access control": "The Google Cloud resource hierarchy allows you to group projects under folders, and folders and projects under the organization... Policies set at higher levels in the hierarchy are inherited by the resources below them." This supports using a folder to apply the policy to multiple projects efficiently.

4. Google Cloud Documentation, "Overview of IAM": "IAM lets you grant granular access to specific Google Cloud resources and helps prevent access to other resources... IAM lets you adopt the security principle of least privilege". This documentation clarifies that IAM's focus is on permissions, not resource configuration constraints like location.

Question 4

You are configuring Cloud DNS. You want to create DNS records to point home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of your Google Cloud load balancer. What should you do?
Options
A: Create one CNAME record to point mydomain.com to the load balancer, and create two A records to point WWW and HOME to mydomain.com respectively.
B: Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA records to point WWW and HOME to mydomain.com respectively.
C: Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.
D: Create one A record to point mydomain.com to the load balancer, and create two NS records to point WWW and HOME to mydomain.com respectively.
Show Answer
Correct Answer:
Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.
Explanation
The most appropriate configuration is to create an A record for the apex domain (mydomain.com) to point directly to the load balancer's IPv4 address. An A record is required for the apex domain because a CNAME record cannot be used at the zone apex (the root of a domain), as it must coexist with other records like SOA and NS. For the subdomains (www.mydomain.com and home.mydomain.com), CNAME records should be created to point to the apex domain (mydomain.com). This approach is efficient because if the load balancer's IP address changes, only the single A record for mydomain.com needs to be updated, and the subdomains will automatically resolve to the new IP.
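
A minimal sketch of the record creation, assuming a managed zone named mydomain-zone and a placeholder load balancer IP address:

    # A record for the zone apex, pointing at the load balancer IP
    gcloud dns record-sets create mydomain.com. --zone=mydomain-zone \
        --type=A --ttl=300 --rrdatas=203.0.113.10

    # CNAME records for the subdomains, aliased to the apex
    gcloud dns record-sets create www.mydomain.com. --zone=mydomain-zone \
        --type=CNAME --ttl=300 --rrdatas=mydomain.com.
    gcloud dns record-sets create home.mydomain.com. --zone=mydomain-zone \
        --type=CNAME --ttl=300 --rrdatas=mydomain.com.
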
Why Incorrect Options are Wrong

A. A CNAME record cannot be used for the zone apex (mydomain.com). Additionally, A records map hostnames to IP addresses, not to other hostnames.

B. A CNAME record cannot be used for the zone apex. AAAA records are for mapping to IPv6 addresses, not for aliasing one hostname to another.

D. NS (Name Server) records are used to delegate a DNS zone to a set of authoritative name servers, not to point a hostname to an IP address or another hostname.

References

1. Google Cloud Documentation, Cloud DNS, "Add, modify, and delete records": Under the section for "CNAME record," the documentation states, "A CNAME record cannot exist at the zone apex." This directly invalidates options A and B. The documentation also defines an "A record" as mapping a domain name to an IPv4 address, which is the correct use case for the apex domain in this scenario.

2. Google Cloud Documentation, Cloud DNS, "Supported DNS record types": This page details the function of each record type. It confirms that A records are for IPv4 addresses, CNAMEs are for canonical names (aliases), and NS records are for name server delegation, supporting the reasoning for selecting C and rejecting D.

3. Internet Engineering Task Force (IETF), RFC 1034, "DOMAIN NAMES - CONCEPTS AND FACILITIES", Section 3.6.2: This foundational document for DNS specifies the CNAME rule: "If a CNAME RR is present at a node, no other data should be present". Since the zone apex must have SOA and NS records, a CNAME cannot be placed there. This provides the technical basis for why options A and B are incorrect.

4. Internet Engineering Task Force (IETF), RFC 1912, "Common DNS Operational and Configuration Errors", Section 2.4: This document clarifies common mistakes and states, "A CNAME record is not allowed to coexist with any other data. In other words, if suzy.podunk.xx is an alias for sue.podunk.xx, you can't also have an MX record for suzy.podunk.xx." This reinforces the rule against using a CNAME at the zone apex.

Question 5

You have two subnets (subnet-a and subnet-b) in the default VPC. Your database servers are running in subnet-a. Your application servers and web servers are running in subnet-b. You want to configure a firewall rule that only allows database traffic from the application servers to the database servers. What should you do?
Options
A: • Create service accounts sa-app and sa-db. • Associate the service account sa-app with the application servers and the service account sa-db with the database servers. • Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
B: • Create network tags app-server and db-server. • Add the app-server tag to the application servers and the db-server tag to the database servers. • Create an egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.
C: • Create a service account sa-app and a network tag db-server. • Associate the service account sa-app with the application servers and the network tag db-server with the database servers. • Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
D: • Create a network tag app-server and a service account sa-db. • Add the tag to the application servers and associate the service account with the database servers. • Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.
Show Answer
Correct Answer:
• Create service accounts sa-app and sa-db. • Associate the service account sa-app with the application servers and the service account sa-db with the database servers. • Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
Explanation
This solution correctly implements the principle of least privilege by using identity-based controls. An ingress firewall rule is created to control traffic entering the database servers. By specifying a target service account (sa-db) for the database servers and a source service account (sa-app) for the application servers, the rule precisely allows traffic only from instances with the sa-app identity to instances with the sa-db identity. This is a secure and scalable method that is not dependent on network location or IP addresses, which can change.
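
A minimal sketch of such a rule, assuming PostgreSQL on port 5432 and a placeholder project ID in the service account addresses:

    # Ingress allowed only from VMs running as sa-app to VMs running as sa-db
    gcloud compute firewall-rules create allow-app-to-db \
        --network=default --direction=INGRESS --action=ALLOW \
        --rules=tcp:5432 \
        --source-service-accounts=sa-app@my-project.iam.gserviceaccount.com \
        --target-service-accounts=sa-db@my-project.iam.gserviceaccount.com
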
Why Incorrect Options are Wrong

B: This option incorrectly describes an egress rule. The destination for an egress rule must be an IP CIDR range, not a target network tag. A target tag is used for ingress rules.

C: The firewall rule described is overly permissive. It allows traffic from all VPC IP addresses to the entire database subnet, which violates the requirement to only allow traffic from the application servers.

D: This option incorrectly describes an egress rule. The destination for an egress rule must be an IP CIDR range, not a target service account. A target service account is used for ingress rules.

References

1. Google Cloud Documentation - VPC firewall rules: "For ingress rules, you can use service accounts to define the source. For egress rules, you can use service accounts to define the destination... Using service accounts for the source of ingress rules and the destination of egress rules is more specific than using network tags." This supports using service accounts for source/target specification.

2. Google Cloud Documentation - Use firewall rules: Under the "Components of a firewall rule" section, the table for "Source for ingress rules" lists "Source service accounts". The table for "Destination for egress rules" lists only "Destination IPv4 or IPv6 ranges". This confirms that options B and D, which specify a target tag or service account for an egress rule's destination, are invalid configurations.

3. Google Cloud Documentation - Firewall rules overview: "You can configure firewall rules by using network tags or service accounts... If you need stricter control over rules, we recommend that you use service accounts instead of network tags." This highlights that the approach in option A is a recommended best practice for secure configurations.

Question 6

Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the solution. What should you do?
Options
A: Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the solution.
B: Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.
C: Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.
D: Use the installation guide of the CMS provider. Perform the installation through your configuration management system.
Show Answer
Correct Answer:
Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.
Explanation
Google Cloud Marketplace is specifically designed to offer pre-configured and optimized software solutions that can be deployed rapidly on Google Cloud. For a common application like a Content Management System (CMS), the Marketplace provides a streamlined, web-based interface that automates the creation of all necessary resources (e.g., Compute Engine instances, disks, firewall rules) and the installation of the software. This "few-click" deployment process is the most direct, quickest, and easiest method, perfectly aligning with the user's requirements.
Why Incorrect Options are Wrong

A. While the gcloud CLI can deploy Marketplace solutions, it is generally more complex and less intuitive than using the graphical user interface, contradicting the "quick and easy" requirement.

C. Using Terraform is an excellent practice for infrastructure-as-code and repeatable deployments, but it requires writing configuration files and is more involved than a direct deployment from the Marketplace UI.

D. A manual installation using the provider's guide is the most time-consuming and complex option. It requires manually provisioning infrastructure and handling all software dependencies and configurations.

References

1. Google Cloud Documentation, "Overview of Google Cloud Marketplace": The documentation states, "Google Cloud Marketplace lets you quickly deploy functional software packages that run on Google Cloud... Some solutions are free to use, and for others, you pay for the software, or for the Google Cloud resources that you use, or both." This confirms the Marketplace is the intended tool for quick deployments.

2. Google Cloud Documentation, "Deploying a VM-based solution": This guide details the process of deploying a solution directly from the Marketplace console. The steps involve selecting a product and filling out a simple web form, after which "Cloud Deployment Manager deploys the solution for you." This demonstrates the ease of use compared to CLI or manual methods.

3. Google Cloud Documentation, "Deploying a solution by using Terraform": This document outlines the multi-step process for using Terraform, which includes creating a .tf configuration file. This confirms that while possible, it is not the simplest or quickest method for a one-time deployment.

Question 7

You are working for a startup that was officially registered as a business 6 months ago. As your customer base grows, your use of Google Cloud increases. You want to allow all engineers to create new projects without asking them for their credit card information. What should you do?
Options
A: Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
B: Grant all engineers permission to create their own billing accounts for each new project.
C: Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
D: Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
Show Answer
Correct Answer:
Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
Explanation
The most effective and standard practice is to centralize billing management. This is achieved by creating a single Cloud Billing account for the organization and associating a corporate payment method. To allow engineers to create projects that are paid for by the company, they must be granted an IAM role on that billing account which includes the billing.projects.link permission. The Billing Account User role (roles/billing.user) is a predefined role that grants this permission. This setup enables engineers to create new projects and link them to the central billing account, solving the problem without requiring them to use personal credit cards and providing the company with centralized financial oversight.
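
For example, the role can be granted to a group of engineers on the central billing account; the billing account ID and group address below are placeholders:

    # Allow the engineering group to link projects to the central billing account
    gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
        --member=group:engineers@example.com \
        --role=roles/billing.user
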
Why Incorrect Options are Wrong

B. This approach decentralizes billing, creating significant administrative overhead and making cost tracking nearly impossible. It also contradicts the goal of not using engineers' personal payment information.

C. Invoiced billing is a payment option, not the mechanism that enables project creation. A startup may not meet the eligibility criteria, and this option omits the crucial step of granting permissions.

D. A Purchase Order (PO) is a financial instrument used with invoiced billing for tracking purposes. It does not solve the core problem of granting engineers permission to use a central billing account.

References

1. Google Cloud Documentation, "Overview of Cloud Billing concepts": This document states, "A Cloud Billing account is set up in Google Cloud and is used to pay for usage costs in your Google Cloud projects... To use Google Cloud resources in a project, billing must be enabled on the project. Billing is enabled when the project is linked to an active Cloud Billing account." This supports the fundamental need for a central billing account linked to projects.

2. Google Cloud Documentation, "Control access to Cloud Billing accounts with IAM," Section: "Billing account permissions": This page details the permissions required to manage billing. Specifically, the billing.projects.link permission "allows a user to link projects to the billing account." This is the exact permission needed by the engineers in the scenario.

3. Google Cloud Documentation, "Understand predefined Cloud Billing IAM roles," Section: "Billing Account User": The roles/billing.user role is described as granting permissions to link projects to a billing account. This is the standard role assigned to users who need to create projects under a corporate billing account.

4. Google Cloud Documentation, "Request invoiced billing," Section: "Eligibility requirements": This document outlines the criteria for invoiced billing, which includes being a registered business for at least one year and having a minimum spend, confirming that a 6-month-old startup might not qualify.

Question 8

You recently received a new Google Cloud project with an attached billing account where you will work. You need to create instances, set firewalls, and store data in Cloud Storage. You want to follow Google-recommended practices. What should you do?
Options
A: Use the gcloud CLI services enable cloudresourcemanager.googleapis.com command to enable all resources.
B: Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com command to enable the Cloud Storage APIs.
C: Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
D: Open the Google Cloud console and run gcloud init --project in a Cloud Shell.
Show Answer
Correct Answer:
Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com command to enable the Cloud Storage APIs.
Explanation
The principle of least privilege is a Google-recommended best practice, which dictates that you should only enable the specific APIs required for your tasks. The question requires creating instances and firewalls (managed by the Compute Engine API) and storing data (managed by the Cloud Storage API). Option B correctly identifies the specific gcloud commands to enable only these two necessary services: compute.googleapis.com for Compute Engine and storage-api.googleapis.com (or storage.googleapis.com) for Cloud Storage. This approach ensures that no unnecessary services are enabled, which minimizes the security attack surface and potential for unintended usage.
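
The two commands from option B, shown here as a sketch to be run in the context of the new project:

    # Enable only the APIs required for the tasks described
    gcloud services enable compute.googleapis.com
    gcloud services enable storage-api.googleapis.com
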
Why Incorrect Options are Wrong

A. The cloudresourcemanager.googleapis.com API is for programmatically managing projects, folders, and organizations. It does not enable other services like Compute Engine or Cloud Storage.

C. Enabling all APIs is a significant security risk as it violates the principle of least privilege. It exposes the project to services that are not needed, increasing the potential attack surface.

D. The gcloud init command is used to initialize or configure settings for the gcloud command-line tool, such as the default project, account, and region. It does not enable any APIs.

References

1. Official Google Cloud Documentation, Enabling and disabling services: "Before you can use a Google Cloud service, you must first enable the service's API for your Google Cloud project... We recommend that you enable APIs for only the services that your apps actually use." This supports the principle of enabling specific APIs. The page also provides the syntax gcloud services enable SERVICENAME, which matches option B.

Source: Google Cloud Documentation, "Enabling and disabling services".

2. Official Google Cloud Documentation, gcloud services enable command reference: This document confirms that gcloud services enable [SERVICE]... is the correct command to enable one or more APIs for a project.

Source: Google Cloud SDK Documentation, gcloud services enable.

3. Official Google Cloud Security Foundations Guide, Section 2.3, "Manage IAM permissions": This guide emphasizes the principle of least privilege. While discussing IAM, the principle extends to all resources, including enabling only necessary APIs. "Grant roles at the smallest scope... grant predefined roles instead of primitive roles... to enforce the principle of least privilege."

Source: Google Cloud Security Foundations Guide PDF, Page 13.

4. Official Google Cloud Documentation, gcloud init command reference: This document describes the function of gcloud init as: "Initializes or reinitializes gcloud CLI settings." It makes no mention of enabling APIs, confirming that option D is incorrect.

Source: Google Cloud SDK Documentation, gcloud init.

Question 9

Your company is using Google Workspace to manage employee accounts. Anticipated growth will increase the number of personnel from 100 employees to 1,000 employees within 2 years. Most employees will need access to your company's Google Cloud account. The systems and processes will need to support 10x growth without performance degradation, unnecessary complexity, or security issues. What should you do?
Options
A: Migrate the users to Active Directory. Connect the Human Resources system to Active Directory. Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from Cloud Identity to Active Directory.
B: Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud Identity.
C: Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor authentication for domain wide delegation.
D: Use a third-party identity provider service through federation. Synchronize the users from Google Workspace to the third-party provider in real time.
Show Answer
Correct Answer:
Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud Identity.
Explanation
The company already uses Google Workspace, which means their user identities are managed by Cloud Identity. The most effective, scalable, and secure solution is to leverage this existing infrastructure. Organizing users into Google Groups and assigning IAM roles to these groups is a Google-recommended best practice for managing permissions at scale. As new employees join, they can be added to the appropriate groups to inherit necessary permissions, simplifying administration. Enforcing multi-factor authentication (MFA), known as 2-Step Verification in Google, is a critical security measure that protects user accounts from unauthorized access, which is essential as the organization grows. This approach avoids unnecessary complexity and cost while enhancing security and scalability.
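
As an illustration of granting a role to a group rather than to individual users (the project ID, group address, and role are placeholder examples):

    # Members added to the group later inherit this access automatically
    gcloud projects add-iam-policy-binding my-project \
        --member=group:dev-team@example.com \
        --role=roles/compute.instanceAdmin.v1
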
Why Incorrect Options are Wrong

A. This introduces significant and unnecessary complexity by adding Active Directory. Migrating users and setting up synchronization and federation is a major project when a suitable identity provider is already in place.

C. This is technically incorrect. Google Workspace and Cloud Identity are part of an integrated identity platform; you do not federate between them. It also misapplies MFA to domain-wide delegation instead of user accounts.

D. Introducing a third-party identity provider adds complexity, cost, and another system to manage. Synchronizing from Google Workspace to a third-party provider is also an unconventional and illogical data flow.

References

1. Using Groups for Access Control: Google Cloud's official documentation on Identity and Access Management (IAM) explicitly recommends using groups to manage roles for multiple users. This simplifies administration and scales effectively.

Source: Google Cloud Documentation, "Best practices for using IAM", Section: "Use groups and roles to manage access".

2. Google Workspace and Cloud Identity Integration: Google Workspace accounts are inherently Cloud Identity accounts. This means there is no need for federation or a separate identity system.

Source: Google Cloud Documentation, "Overview of Cloud Identity and Access Management", Section: "Identities".

3. Enforcing Multi-Factor Authentication (MFA): The Google Workspace Admin help center details how to enforce 2-Step Verification (Google's term for MFA) for all users in an organization to enhance security.

Source: Google Workspace Admin Help, "Protect your business with 2-Step Verification", Section: "Deploy 2-Step Verification".

4. Complexity of Federation: Setting up federation with an external identity provider (as suggested in A and D) is a multi-step process intended for organizations that already have an established external IdP as their source of truth, not for those already using Google Workspace.

Source: Google Cloud Documentation, "Setting up identity federation", provides an overview of the required configuration, highlighting the added complexity.

Question 10

Your application development team has created Docker images for an application that will be deployed on Google Cloud. Your team does not want to manage the infrastructure associated with this application. You need to ensure that the application can scale automatically as it gains popularity. What should you do?
Options
A: Create an Instance template with the container image, and deploy a Managed Instance Group with Autoscaling.
B: Upload Docker images to Artifact Registry, and deploy the application on Google Kubernetes Engine using Standard mode.
C: Upload Docker images to Cloud Storage, and deploy the application on Google Kubernetes Engine using Standard mode.
D: Upload Docker images to Artifact Registry, and deploy the application on Cloud Run.
Show Answer
Correct Answer:
Upload Docker images to Artifact Registry, and deploy the application on Cloud Run.
Explanation
Cloud Run is a fully managed, serverless platform that allows you to run stateless containers without managing the underlying infrastructure. It automatically scales the number of container instances up or down based on traffic, including scaling down to zero when there are no requests, which directly meets the requirements. Artifact Registry is the recommended, fully managed service in Google Cloud for storing, managing, and securing container images. This combination provides a completely serverless solution that requires no infrastructure management from the development team while ensuring automatic scalability.
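
A minimal sketch of the workflow, with placeholder region, repository, project, and service names:

    # Push the built image to Artifact Registry
    docker push us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1

    # Deploy the image to Cloud Run; it scales automatically with traffic
    gcloud run deploy my-app \
        --image=us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1 \
        --region=us-central1
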
Why Incorrect Options are Wrong

A. A Managed Instance Group runs on Compute Engine virtual machines. This requires managing the underlying OS and instance configurations, which violates the "do not want to manage the infrastructure" requirement.

B. Google Kubernetes Engine (GKE) in Standard mode requires you to manage the worker node pools (the underlying VMs). This includes tasks like node upgrades and capacity planning, which constitutes infrastructure management.

C. Cloud Storage is designed for object storage, not as a primary repository for Docker images. Furthermore, GKE in Standard mode requires infrastructure management, making this option incorrect on two counts.

References

1. Google Cloud Documentation, "Cloud Run overview": "Cloud Run is a managed compute platform that lets you run containers directly on top of Google's scalable infrastructure. You can deploy code written in any programming language on Cloud Run if you can build a container image from it. ... With Cloud Run, you don't need to manage infrastructure..."

2. Google Cloud Documentation, "Choosing a compute option": This document compares various compute services. It categorizes Cloud Run as "Serverless" and highlights "No infrastructure management." In contrast, it places Compute Engine (used in option A) under "Infrastructure as a Service (IaaS)" and GKE (used in options B and C) under "Containers as a Service (CaaS)," both of which involve more infrastructure management than serverless options.

3. Google Cloud Documentation, "Artifact Registry overview": "Artifact Registry is a single place for your organization to manage container images and language packages (such as Maven and npm). It is fully integrated with Google Cloud's tooling and runtimes..." This confirms Artifact Registry as the correct repository for Docker images.

4. Google Cloud Documentation, "Comparing GKE cluster modes: Autopilot and Standard": "In Standard mode, you manage your cluster's underlying infrastructure, which gives you node configuration flexibility." This statement confirms that GKE Standard mode involves infrastructure management, which the question explicitly seeks to avoid.

Question 11

Your team is using Linux instances on Google Cloud. You need to ensure that your team logs in to these instances in the most secure and cost efficient way. What should you do?
Options
A: Attach a public IP to the instances and allow incoming connections from the internet on port 22 for SSH.
B: Use a third party tool to provide remote access to the instances.
C: Use the gcloud compute ssh command with the --tunnel-through-iap flag. Allow ingress traffic from the IP range 35.235.240.0/20 on port 22.
D: Create a bastion host with public internet access. Create the SSH tunnel to the instance through the bastion host.
Show Answer
Correct Answer:
Use the gcloud compute ssh command with the --tunnel-through-iap flag. Allow ingress traffic from the IP range 35.235.240.0/20 on port 22.
Explanation
Using Identity-Aware Proxy (IAP) for TCP forwarding is Google Cloud's recommended method for securing administrative access like SSH. This approach eliminates the need for bastion hosts or public IP addresses on your instances, significantly reducing the external attack surface. Access is controlled through granular IAM permissions (roles/iap.tunnelResourceAccessor) rather than network-level firewall rules open to the internet. The gcloud compute ssh --tunnel-through-iap command automates this secure connection. This is also the most cost-efficient solution as IAP for TCP forwarding is a zero-cost service, unlike a bastion host which requires a continuously running, billable VM.
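
A sketch of the two pieces, assuming the default network and placeholder instance and zone names:

    # Allow SSH only from the IAP TCP forwarding range
    gcloud compute firewall-rules create allow-ssh-from-iap \
        --network=default --direction=INGRESS --action=ALLOW \
        --rules=tcp:22 --source-ranges=35.235.240.0/20

    # Connect to an instance that has no external IP address
    gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap
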
Why Incorrect Options are Wrong

A. Attaching a public IP and opening port 22 to the internet is highly insecure. It exposes the instance to constant brute-force attacks and automated scans from malicious actors.

B. Using a third-party tool introduces additional costs, potential security vulnerabilities, and management overhead. It is less integrated and secure than a native Google Cloud solution like IAP.

D. A bastion host is a valid security pattern but is not the most cost-efficient, as it requires running and maintaining an additional VM. It also has more operational overhead than the fully managed IAP service.

References

1. Google Cloud Documentation - Use IAP for TCP forwarding: "Using IAP's TCP forwarding feature... you can control who can access administrative services like SSH and RDP on your backends from the public internet. This removes the need to run a bastion host..." This document also specifies the required firewall rule: "Create a firewall rule that allows ingress traffic from IAP's TCP forwarding IP range, 35.235.240.0/20, to the ports on your instances."

Source: Google Cloud Documentation, "Use IAP for TCP forwarding", Section: "When to use IAP TCP forwarding" and "Create a firewall rule".

2. Google Cloud Documentation - Securely connecting to VM instances: This guide compares connection methods and highlights the benefits of IAP. "IAP lets you establish a central authorization layer for applications that are accessed by HTTPS, so you can use an application-level access control model instead of relying on network-level firewalls." It positions IAP as a superior alternative to bastion hosts for securing access.

Source: Google Cloud Architecture Center, "Securely connecting to VM instances", Section: "Identity-Aware Proxy".

3. Google Cloud Documentation - Choose a connection option: This document explicitly contrasts IAP with bastion hosts. For IAP, it states you can connect "without requiring your VM instances to have external IP addresses." For bastion hosts, it notes the requirement to "provision and maintain an additional instance," which implies both cost and management overhead.

Source: Google Cloud Documentation, Compute Engine, "Choose a connection option".

Question 12

You are migrating a business critical application from your local data center into Google Cloud. As part of your high-availability strategy, you want to ensure that any data used by the application will be immediately available if a zonal failure occurs. What should you do?
Options
A: Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
B: Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
C: Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
D: Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
Show Answer
Correct Answer:
Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
Explanation
Regional persistent disks are designed for high availability by synchronously replicating data between two zones within the same region. If the primary zone containing the virtual machine (VM) fails, the application can be failed over to the other zone. You can force-attach the regional persistent disk to a new VM instance in the secondary zone. This process ensures that the data is immediately available with minimal downtime, which is critical for a business-critical application as specified in the scenario.
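
A sketch of the setup and failover steps, with placeholder disk, VM, region, zone, and size values:

    # Create a regional persistent disk replicated across two zones
    gcloud compute disks create app-data-disk \
        --region=us-central1 \
        --replica-zones=us-central1-a,us-central1-b --size=200GB

    # After a zonal failure, force-attach the disk to a VM in the surviving zone
    gcloud compute instances attach-disk failover-vm \
        --disk=app-data-disk --disk-scope=regional --force-attach \
        --zone=us-central1-b
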
Why Incorrect Options are Wrong

A. Zonal persistent disks are confined to a single zone. If that zone fails, the disk becomes inaccessible. Restoring from a snapshot introduces latency and potential data loss (RPO), failing the "immediately available" requirement.

B. A zonal persistent disk cannot be attached to a VM in a different zone. This action is technically impossible, making it an invalid solution for a zonal failure.

C. While using the correct disk type (regional), the recovery process is flawed. Restoring from a snapshot is unnecessary and slow; the primary benefit of a regional disk is its immediate availability in the failover zone.

References

1. Google Cloud Documentation, "High availability with regional persistent disks": "Regional persistent disks provide synchronous replication of data between two zones in a region. ... If your primary zone becomes unavailable, you can fail over to your secondary zone. In the secondary zone, you can force-attach the regional persistent disk to a new VM instance." (See section: "Failing over your regional persistent disk").

2. Google Cloud Documentation, "About persistent disk snapshots": "Snapshots are global resources... Use snapshots to protect your data from unexpected failures... Creating a new persistent disk from a snapshot takes time..." This highlights that snapshots are for disaster recovery/backups, not instantaneous high-availability failover.

3. Google Cloud Documentation, "Persistent disk types": "Zonal persistent disks are located in a single zone. If a zone becomes unavailable, all zonal persistent disks in that zone are unavailable until the zone is restored." This confirms that options A and B are unsuitable for zonal failure scenarios.

Question 13

The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google's recommendations for setting permissions for the DevOps group. What should you do?
Options
A: Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
B: Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
C: Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.
D: Grant the basic role roles/editor to the DevOps group.
Show Answer
Correct Answer:
Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
Explanation
This approach adheres to the principle of least privilege, a core Google recommendation. The predefined role roles/compute.admin grants the DevOps group full control over all Compute Engine resources, fulfilling their primary requirement. The basic role roles/viewer provides necessary read-only access to other resources in the project, which is often needed for context (e.g., viewing network configurations or storage buckets), without granting permissions to modify them. This combination precisely meets the scenario's constraints by isolating modification permissions strictly to Compute Engine while allowing project-wide visibility.
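
For illustration, both roles can be granted to the group at the project level (the project ID and group address are placeholders):

    # Full control of Compute Engine resources only
    gcloud projects add-iam-policy-binding my-dev-project \
        --member=group:devops@example.com --role=roles/compute.admin

    # Read-only visibility into the rest of the project
    gcloud projects add-iam-policy-binding my-dev-project \
        --member=group:devops@example.com --role=roles/viewer
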
Why Incorrect Options are Wrong

B. This option misrepresents how IAM works. You do not grant permissions to a policy; a policy binds members to roles. The permission string is also invalid.

C. A predefined role (roles/compute.admin) already exists for this purpose and is preferred over creating a custom role. Custom roles also cannot be created at the folder level, only at the organization or project level.

D. The roles/editor basic role grants broad permissions to create and update all resources in the project, directly violating the requirement to restrict modification permissions to Compute Engine.

References

1. Google Cloud Documentation - IAM Basic and predefined roles reference: This document explicitly states that the Editor role (roles/editor) grants "permissions to create, modify, and delete all resources." It also describes the Compute Admin role (roles/compute.admin) as providing "Full control of all Compute Engine resources." This supports why option D is wrong and option A is correct. (See "Basic roles" and "Compute Engine roles" sections).

2. Google Cloud Documentation - IAM Overview, Principle of least privilege: "When you grant roles, grant the least permissive role that's required. For example, if a user only needs to view resources in a project, grant them the Viewer role, not the Owner role." This principle supports choosing the specific roles/compute.admin over the broad roles/editor.

3. Google Cloud Documentation - Understanding roles: "Predefined roles are created and maintained by Google... Google automatically updates their permissions as new features and services are added to Google Cloud. When possible, we recommend that you use predefined roles instead of custom roles." This supports the choice of a predefined role (as in option A) over a custom one (as in option C).

Question 14

Your team is running an on-premises ecommerce application. The application contains a complex set of microservices written in Python, and each microservice is running on Docker containers. Configurations are injected by using environment variables. You need to deploy your current application to a serverless Google Cloud solution. What should you do?
Options
A: Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.
B: Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated Docker images and deploy them to Cloud Functions. Use the same configuration as on-premises.
C: Use the existing codebase and deploy each service as a separate Cloud Function. Update the configurations and the required endpoints.
D: Use your existing codebase and deploy each service as a separate Cloud Run service. Use the same configurations as on-premises.
Show Answer
Correct Answer:
Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.
Explanation
The most effective strategy is to leverage the existing containerized architecture. Cloud Run is Google Cloud's serverless platform designed specifically for running stateless containers. Since the application's microservices are already packaged as Docker images, they can be deployed directly to Cloud Run with minimal modification. This approach utilizes the existing CI/CD pipeline, simply retargeting the deployment step to Cloud Run. It correctly acknowledges that configurations, such as database connection strings and service endpoints managed via environment variables, will need to be updated for the new cloud environment, which is a standard and necessary part of any migration.
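
A sketch of deploying one microservice image with its updated configuration injected as environment variables (all names, image paths, and values below are placeholders):

    gcloud run deploy orders-service \
        --image=us-central1-docker.pkg.dev/my-project/my-repo/orders:v1 \
        --region=us-central1 \
        --set-env-vars=DB_HOST=10.0.0.5,INVENTORY_ENDPOINT=https://inventory-abc123-uc.a.run.app
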
Why Incorrect Options are Wrong

B: Cloud Functions is less suitable for running complex, containerized web services than Cloud Run. More importantly, it is unrealistic to assume the same on-premises configurations will work in the cloud without any changes.

C: This approach discards the significant advantage of having pre-built Docker containers. Refactoring the codebase for Cloud Functions would require unnecessary development effort compared to deploying the existing containers to Cloud Run.

D: While selecting Cloud Run is correct, this option is flawed because it incorrectly states that the same on-premises configurations can be used. Migrating from on-premises to the cloud always requires configuration updates.

References

1. Google Cloud Documentation, "What is Cloud Run?": "Cloud Run is a managed compute platform that lets you run containers directly on top of Google's scalable infrastructure. You can deploy code written in any programming language if you can build a container image from it. In fact, building container images is optional. If you're using Go, Node.js, Python, Java, .NET Core, or Ruby, you can use the source-based deployment option that builds the container for you." This confirms Cloud Run is the ideal platform for existing container images.

2. Google Cloud Documentation, "Choosing a serverless option": This document compares serverless options. It states, "Cloud Run is also a good choice if you want to migrate a containerized application from on-premises or from another cloud." This directly supports the scenario in the question. For Cloud Functions, it is positioned for "event-driven" applications, which is a less direct fit for a set of web microservices than Cloud Run.

3. Google Cloud Documentation, "Deploying container images": When deploying to Cloud Run, the documentation for the gcloud run deploy command shows flags like --set-env-vars for setting environment variables. This confirms that configurations are expected to be set or updated during deployment to the new cloud environment, invalidating the claims in options B and D that on-premises configurations can be used as-is.

4. Google Cloud Documentation, "Cloud Run common use cases": The documentation lists "APIs and microservices" as a primary use case, stating, "Quickly deploy microservices and scale them as needed without having to manage a Kubernetes cluster." This reinforces Cloud Run as the correct choice for a microservices-based application.

Question 15

You are running a web application on Cloud Run for a few hundred users. Some of your users complain that the initial web page of the application takes much longer to load than the following pages. You want to follow Google's recommendations to mitigate the issue. What should you do?
Options
A: Update your web application to use the protocol HTTP/2 instead of HTTP/1.1
B: Set the concurrency number to 1 for your Cloud Run service.
C: Set the maximum number of instances for your Cloud Run service to 100.
D: Set the minimum number of instances for your Cloud Run service to 3.
Show Answer
Correct Answer:
Set the minimum number of instances for your Cloud Run service to 3.
Explanation
The described issue, where an initial request is slow but subsequent ones are fast, is a classic symptom of a "cold start." In Cloud Run, services scale to zero instances by default during periods of no traffic. When a new request arrives, a new container instance must be provisioned, which introduces latency. To mitigate this, Google recommends setting a minimum number of instances. This ensures that a specified number of container instances are always running and "warm," ready to serve requests immediately. This directly eliminates the startup delay for incoming requests and resolves the slow initial page load problem.
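
For example, an existing service can be updated in place (the service name and region are placeholders):

    # Keep three instances warm so initial requests avoid cold starts
    gcloud run services update my-web-app --region=us-central1 --min-instances=3
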
Why Incorrect Options are Wrong

A. Using HTTP/2 optimizes network transport for loading multiple assets but does not solve the server-side latency caused by provisioning a new container instance.

B. Setting concurrency to 1 would likely increase the frequency of cold starts, as more instances would be needed to handle the same amount of traffic.

C. Setting a maximum number of instances is a cost-control and safety feature to limit scaling; it does not keep instances warm to prevent cold starts.

References

1. Google Cloud Documentation, "Configuring minimum instances": "To reduce latency and cold starts, you can configure Cloud Run to keep a minimum number of container instances running and ready to serve requests... When a request comes in that needs to be served by a new instance, Cloud Run starts a new instance. This is known as a cold start. Cold starts can result in high latency for the requests that require them."

2. Google Cloud Documentation, "General development tips for Cloud Run", Section: "Minimizing cold starts": "To permanently keep instances warm, use the min-instances setting. If you set the value of min-instances to 1 or greater, a specified number of container instances are kept running and ready to serve requests, which reduces cold starts."

3. Google Cloud Documentation, "Container instance concurrency": "By default, each Cloud Run container instance can receive up to 80 requests at the same time... You may want to limit concurrency for instances where... your code cannot handle parallel requests." (This shows concurrency is not primarily for reducing initial latency).

Question 16

You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?
Options
A: Use kubectl to delete the topic resource.
B: Use gcloud CLI to delete the topic.
C: Use kubectl to create the label deleted-by-cnrm and change its value to true for the topic resource.
D: Use gcloud CLI to update the topic label managed-by-cnrm to false.
Show Answer
Correct Answer:
Use kubectl to delete the topic resource.
Explanation
Config Connector manages Google Cloud resources declaratively using the Kubernetes Resource Model. The state of the Google Cloud resource is intended to mirror the state of the corresponding Kubernetes resource manifest. To permanently delete a Google Cloud resource managed by Config Connector, you must delete its corresponding Kubernetes resource object. The standard and correct method for this is using the kubectl delete command. By default, this action signals the Config Connector controller to delete the underlying Google Cloud resource, in this case, the Pub/Sub topic. This approach maintains the declarative management model and ensures the change is permanent.
Why Incorrect Options are Wrong

B. Using gcloud CLI to delete the topic is an imperative action that bypasses Config Connector. The controller will detect the drift and recreate the Pub/Sub topic to match the existing Kubernetes manifest.

C. The label deleted-by-cnrm is not a recognized mechanism within Config Connector for triggering resource deletion. Deletion is managed by removing the Kubernetes resource object itself, not by applying a specific label.

D. The managed-by-cnrm label is used by Config Connector to identify resources under its management. Changing this label via gcloud will be reverted by the controller and does not trigger resource deletion.

References

1. Google Cloud Documentation - Deleting resources with Config Connector: "To delete a resource, delete the resource from your cluster. Config Connector deletes the Google Cloud resource. You can delete a resource with kubectl delete."

Source: Google Cloud Documentation, "Managing your resources with Config Connector", Section: "Deleting resources".

URL: https://cloud.google.com/config-connector/docs/how-to/managing-resources#deletingresources

2. Google Cloud Documentation - Preventing resource deletion: "By default, when you delete a Config Connector resource, Config Connector deletes the corresponding Google Cloud resource. This behavior can be changed with an annotation." This confirms that kubectl delete is the trigger for deletion by default.

Source: Google Cloud Documentation, "Managing your resources with Config Connector", Section: "Preventing resource deletion".

URL: https://cloud.google.com/config-connector/docs/how-to/managing-resources#preventingresourcedeletion

3. Google Cloud Documentation - Config Connector Overview: "Config Connector is a Kubernetes addon that allows you to manage Google Cloud resources through Kubernetes configuration... Config Connector's controller reconciles your cluster's state with Google Cloud, creating, updating, or deleting Google Cloud resources as you apply configurations to your cluster." This explains the reconciliation behavior that makes option B incorrect.

Source: Google Cloud Documentation, "Config Connector overview".

URL: https://cloud.google.com/config-connector/docs/overview
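As a minimal sketch, assuming the topic was declared as a Config Connector PubSubTopic resource named example-topic in a namespace called cnrm-resources (both names are hypothetical), the deletion would look like this:

    # Deleting the Kubernetes resource triggers Config Connector to delete the Pub/Sub topic
    kubectl delete pubsubtopic example-topic -n cnrm-resources

    # Verify the topic is gone on the Google Cloud side
    gcloud pubsub topics list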

Question 17

You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and integrity are required for the cluster, and nodes cannot be accessed from the internet. You want to reduce the operational cost of managing your cluster, and you want to follow Google-recommended practices. What should you do?
Options
A: Deploy a private autopilot cluster
B: Deploy a public autopilot cluster.
C: Deploy a standard public cluster and enable shielded nodes.
D: Deploy a standard private cluster and enable shielded nodes.
Show Answer
Correct Answer:
Deploy a private autopilot cluster
Explanation
GKE Autopilot mode is designed to reduce operational overhead by managing the cluster's underlying infrastructure, including nodes and scaling. Autopilot clusters enable Shielded GKE Nodes by default, which provides the required verifiable node identity and integrity through features like Secure Boot and integrity monitoring. Choosing a private cluster ensures that nodes are provisioned with only internal IP addresses, isolating them from the internet. This configuration meets all the stated requirements for security, networking, and reduced operational cost, aligning with Google-recommended practices for a managed and secure environment.
Why Incorrect Options are Wrong

B. Deploy a public autopilot cluster.

A public cluster would assign external IP addresses to nodes, violating the requirement that nodes cannot be accessed from the internet.

C. Deploy a standard public cluster and enable shielded nodes.

This option is incorrect on two counts: a public cluster violates the network access requirement, and a standard cluster increases operational cost compared to Autopilot.

D. Deploy a standard private cluster and enable shielded nodes.

While this meets the security and networking requirements, a standard cluster requires manual node pool management, which does not reduce operational cost as effectively as Autopilot.

References

1. Google Cloud Documentation: GKE Autopilot overview: "Autopilot is a mode of operation in Google Kubernetes Engine (GKE) in which Google manages your cluster configuration... Autopilot clusters enable many GKE security features and best practices by default... Autopilot clusters always use Shielded GKE Nodes." This supports the reduced operational cost and verifiable integrity requirements.

2. Google Cloud Documentation: About private clusters: "In a private cluster, nodes only have internal IP addresses, which means that the nodes and Pods are isolated from the internet by default." This supports the requirement that nodes cannot be accessed from the internet.

3. Google Cloud Documentation: Shielded GKE Nodes: "Shielded GKE Nodes provide strong, verifiable node identity and integrity to increase the security of your Google Kubernetes Engine (GKE) nodes..." This directly addresses the "verifiable node identity and integrity" requirement.
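A minimal provisioning sketch with the gcloud CLI, assuming a hypothetical cluster name and region (Autopilot clusters enable Shielded GKE Nodes by default):

    # Create a private Autopilot cluster; nodes receive only internal IP addresses
    gcloud container clusters create-auto prod-cluster \
      --region=us-central1 \
      --enable-private-nodes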

Question 18

An external member of your team needs list access to compute images and disks in one of your projects. You want to follow Google-recommended practices when you grant the required permissions to this user. What should you do?
Options
A: Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions. Grant the custom role to the user at the project level.
B: Create a custom role based on the Compute Image User role. Add compute.disks.list to the includedPermissions field. Grant the custom role to the user at the project level.
C: Grant the Compute Storage Admin role at the project level.
D: Create a custom role based on the Compute Storage Admin role. Exclude unnecessary permissions from the custom role. Grant the custom role to the user at the project level.
Show Answer
Correct Answer:
Create a custom role based on the Compute Image User role. Add compute.disks.list to the includedPermissions field. Grant the custom role to the user at the project level.
Explanation
The goal is to grant an external user list access for compute images and disks while adhering to the principle of least privilege, a Google-recommended best practice. Option B achieves this effectively. It starts with the predefined Compute Image User role, which provides the necessary read-only permissions for images (compute.images.list, compute.images.get, etc.) and essential ancillary permissions like compute.projects.get required for tools like gcloud and the Cloud Console to function correctly. By creating a custom role based on this and adding the single missing permission, compute.disks.list, you create a functional role that is narrowly tailored to the user's needs without granting excessive privileges.
Why Incorrect Options are Wrong

A. This role is too minimal. While it contains the exact list permissions, it lacks ancillary permissions (e.g., resourcemanager.projects.get) that are often required to get project context, making the role non-functional in many tools.

C. The Compute Storage Admin role is excessively permissive. It grants full administrative control, including creation and deletion of disks and images, which strongly violates the principle of least privilege.

D. Starting with a highly privileged role like Compute Storage Admin and removing permissions is not a recommended practice. It is complex, error-prone, and increases the risk of unintentionally granting excessive access.

References

1. Google Cloud IAM Documentation, "Best practices for using IAM": This document emphasizes the principle of least privilege. It states, "When you grant roles, grant the least permissive roles that are needed." Option B follows this by starting with a limited role and adding only what is necessary.

Source: Google Cloud Documentation > IAM > Best practices for using IAM > "Use the principle of least privilege".

2. Google Cloud IAM Documentation, "Creating and managing custom roles": This guide explains the process of creating custom roles. It mentions, "To create a custom role, you can combine one or more of the available IAM permissions. ... Alternatively, you can copy an existing role and edit its permissions." This validates the method described in option B.

Source: Google Cloud Documentation > IAM > Roles and permissions > Creating and managing custom roles.

3. Google Cloud IAM Documentation, "IAM predefined roles": This reference details the permissions included in predefined roles. The Compute Image User role (roles/compute.imageUser) includes compute.images.list and compute.projects.get, confirming it's a suitable base. The Compute Storage Admin role (roles/compute.storageAdmin) includes permissions like compute.disks.create, confirming it is too permissive for the task.

Source: Google Cloud Documentation > IAM > Roles and permissions > Predefined roles > "Compute Engine roles".
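One way to sketch this with the gcloud CLI, where the custom role ID imageAndDiskLister, the project my-project, and the user address are all placeholder assumptions:

    # Start from the predefined Compute Image User role
    gcloud iam roles copy --source="roles/compute.imageUser" \
      --destination=imageAndDiskLister --dest-project=my-project

    # Add the missing disk listing permission
    gcloud iam roles update imageAndDiskLister --project=my-project \
      --add-permissions=compute.disks.list

    # Grant the custom role to the external user at the project level
    gcloud projects add-iam-policy-binding my-project \
      --member="user:external-user@example.com" \
      --role="projects/my-project/roles/imageAndDiskLister"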

Question 19

Your company wants to migrate their on-premises workloads to Google Cloud. The current on-premises workloads consist of: • A Flask web application • A backend API • A scheduled long-running background job for ETL and reporting. You need to keep operational costs low. You want to follow Google-recommended practices to migrate these workloads to serverless solutions on Google Cloud. What should you do?
Options
A: Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
B: Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
C: Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
D: Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
Show Answer
Correct Answer:
Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
Explanation
This option presents a fully serverless architecture that aligns with Google-recommended practices for cost-efficiency and operational simplicity. App Engine is a fully managed platform-as-a-service (PaaS) ideal for hosting web applications like Flask, as it handles scaling and infrastructure management automatically. Cloud Run is a serverless container platform perfectly suited for backend APIs, offering a pay-per-use model that scales to zero, minimizing costs. For the scheduled long-running job, using Cloud Run is also the correct serverless approach. Its request timeout can be configured up to 60 minutes, making it suitable for many ETL and reporting tasks, thus avoiding the management overhead and continuous cost of a virtual machine.
Why Incorrect Options are Wrong

A. Using Compute Engine for the background job violates the core requirement to use serverless solutions and introduces unnecessary operational overhead and potentially higher costs compared to a pay-per-use service.

C. Cloud Storage is designed to host static websites (HTML, CSS, JavaScript). A Flask application is dynamic and requires a server-side processing environment, which Cloud Storage does not provide.

D. This option is incorrect for two reasons: it improperly suggests Cloud Storage for a dynamic Flask application and uses the non-serverless Compute Engine for the background job.

References

1. Google Cloud Documentation, "Choosing a computing option": This guide compares compute services. It recommends App Engine for "web applications" and Cloud Run for "web services and APIs." This supports using App Engine for the Flask app and Cloud Run for the API. (See the comparison table under the "Serverless" section).

2. Google Cloud Documentation, "App Engine, Python 3 runtime environment": This page explicitly mentions that the Python runtime is designed to run web servers and supports common frameworks like Flask, confirming its suitability for the web application. (See the "Web frameworks" section).

3. Google Cloud Documentation, "Cloud Run, Request timeouts": "For Cloud Run services, the maximum request timeout is 60 minutes." This documentation confirms that Cloud Run can handle the "long-running background job" requirement within a serverless model, making it a superior choice to a persistent VM. (See the "Setting and updating request timeouts" section).

4. Google Cloud Documentation, "Hosting a static website": "For a dynamic website, consider a compute option to host your site's server-side logic, such as Cloud Run or App Engine." This source directly contrasts static hosting on Cloud Storage with dynamic hosting on services like App Engine, invalidating options C and D.

Question 20

You are building a data lake on Google Cloud for your Internet of Things (IoT) application. The IoT application has millions of sensors that are constantly streaming structured and unstructured data to your backend in the cloud. You want to build a highly available and resilient architecture based on Google-recommended practices. What should you do?
Options
A: Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage
B: Stream data to Pub/Sub, and use Storage Transfer Service to send data to BigQuery.
C: Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.
D: Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.
Show Answer
Correct Answer:
Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage
Explanation
The most appropriate architecture for a highly available and resilient IoT data lake on Google Cloud involves using Pub/Sub for data ingestion, Dataflow for processing, and Cloud Storage as the storage layer. Pub/Sub provides a scalable, durable, and asynchronous messaging service to ingest high-volume streams from millions of sensors, decoupling them from the processing backend. Dataflow is a fully managed service designed for large-scale stream and batch processing, capable of handling both structured and unstructured data. Finally, Cloud Storage is the ideal destination for a data lake, as it offers a cost-effective, highly durable, and scalable repository for storing raw data in its native format. This Pub/Sub -> Dataflow -> Cloud Storage pattern is a standard Google-recommended practice for streaming data pipelines.
Why Incorrect Options are Wrong

B. Storage Transfer Service is used for bulk data transfers from other sources into Cloud Storage, not for processing real-time data streams from Pub/Sub.

C. Dataflow is a processing engine, not an ingestion service. While it can receive data, Pub/Sub should be used as the ingestion buffer. Storage Transfer Service is also used incorrectly here.

D. Dataprep is an interactive tool for visual data preparation, not for automated, high-throughput stream processing. Bigtable is a NoSQL database, whereas Cloud Storage is the standard for a data lake.

References

1. Google Cloud Documentation, "IoT architecture overview": This document outlines common IoT architectures. The "Data processing and analytics" section describes the pattern of using Pub/Sub for ingestion, followed by Dataflow for processing, and then loading data into storage services like Cloud Storage or BigQuery. This directly supports the Pub/Sub -> Dataflow -> Storage flow.

2. Google Cloud Documentation, "Build a data lake on Google Cloud": This guide explicitly states, "Cloud Storage is a scalable, fully-managed, and cost-effective object store for all of your data, including unstructured data. For this reason, Cloud Storage is the ideal service to use as the central storage repository for your data lake." This confirms Cloud Storage as the correct destination.

3. Google Cloud Documentation, "Dataflow overview": The documentation highlights Dataflow's capability to "process data from a variety of sources, such as Pub/Sub and Cloud Storage." It positions Dataflow as the processing layer between ingestion (Pub/Sub) and storage.

4. Google Cloud Documentation, "Storage Transfer Service overview": This page clarifies the service's purpose: "Storage Transfer Service lets you move large amounts of data...to a Cloud Storage bucket." This confirms it is not a tool for processing streaming data from Pub/Sub.

Question 21

You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?
Options
A: Configure the username and password by using the gcloud config set proxy/username and gcloud config set proxy/password commands.
B: Encode the username and password in sha256 encoding, and save it to a text file. Use the filename as a value in the gcloud config set core/custom_ca_certs_file command.
C: Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configure file.
D: Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.
Show Answer
Correct Answer:
Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.
Explanation
The Google Cloud CLI documentation explicitly recommends using environment variables for sensitive values, such as proxy credentials. Setting CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD as environment variables prevents these credentials from being written to configuration files in plain text or being captured in shell history or gcloud logs. This method is the prescribed secure practice for handling proxy authentication with the gcloud CLI, directly addressing the user's concern about credentials being logged.
Why Incorrect Options are Wrong

A. Using gcloud config set stores the password in plain text in the user's configuration file, which is insecure and precisely what the user wants to avoid.

B. This is incorrect because sha256 is a one-way hashing algorithm, not an encoding for authentication, and the core/custom_ca_certs_file property is for specifying custom SSL certificates, not proxy credentials.

C. The environment variable names are incorrect (CLOUDSDK_USERNAME vs. CLOUDSDK_PROXY_USERNAME), and placing credentials directly in a configuration file is an insecure practice.

References

1. Google Cloud SDK Documentation, gcloud topic configurations: Under the "Available Properties" section for proxy/password, the documentation states: "For security reasons, it is recommended to use the CLOUDSDK_PROXY_PASSWORD environment variable to set the proxy password instead of this property." This directly supports using environment variables for security.

2. Google Cloud SDK Documentation, gcloud config set: The documentation for this command includes a note: "The values of properties that you set with gcloud config set are stored in your user config file... For this reason, you should not use gcloud config set to set properties that contain secrets, such as proxy passwords." This explicitly advises against the method described in option A.

3. Google Cloud SDK Documentation, Installing the Google Cloud CLI, "Configuring the gcloud CLI" section: This section details the use of environment variables for configuration, mentioning that they override values set in configuration files. The CLOUDSDK_PROXY_PASSWORD variable is listed as the method for providing a proxy password.
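A minimal sketch of a secure proxy setup, with placeholder host and credential values:

    # Non-sensitive proxy settings can live in the gcloud configuration
    gcloud config set proxy/type http
    gcloud config set proxy/address proxy.example.com
    gcloud config set proxy/port 3128

    # Credentials are supplied only as environment variables, so they are never written to config files
    export CLOUDSDK_PROXY_USERNAME="proxy-user"
    export CLOUDSDK_PROXY_PASSWORD="proxy-pass"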

Question 22

Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost. What should you do?
Options
A: Create a cluster with a single node-pool by using standard VMs. Label the fault-tolerant Deployments as spot: true.
B: Create a cluster with a single node-pool by using Spot VMs. Label the critical Deployments as spot: false.
C: Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.
D: Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.
Show Answer
Correct Answer:
Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.
Explanation
This strategy correctly aligns the infrastructure with the application's requirements while optimizing for cost. Critical, stateful, or long-running workloads that cannot tolerate interruption should be placed on a node pool with standard VMs, which provide high availability. Fault-tolerant, stateless, or batch workloads that can handle preemption are ideal candidates for Spot VMs. Spot VMs offer significant cost savings but can be reclaimed at any time. By creating two distinct node pools and scheduling the appropriate workloads to each, the solution achieves both high availability for critical services and cost-efficiency for non-critical ones.
Why Incorrect Options are Wrong

A. This approach fails to optimize costs because it exclusively uses more expensive standard VMs. A Kubernetes label alone does not enable the use of Spot VMs.

B. This configuration jeopardizes the availability of critical applications by running them on a node pool composed entirely of Spot VMs, which can be preempted at any time.

C. This option incorrectly reverses the deployment logic. It places critical deployments on unstable Spot VMs and fault-tolerant deployments on expensive standard VMs, failing both availability and cost-optimization goals.

References

1. Google Cloud Documentation, GKE Concepts, "Spot VMs": "Spot VMs are a good fit for running stateless, batch, or fault-tolerant workloads. They are not recommended for workloads that are not fault-tolerant, such as stateful applications." This directly supports placing fault-tolerant workloads on Spot VMs and implies critical workloads should be elsewhere.

2. Google Cloud Documentation, GKE Concepts, "About node pools": "Node pools are a subset of nodes within a cluster that all have the same configuration... For example, you might create a node pool in your cluster with Spot VMs for running fault-tolerant workloads and another node pool with standard VMs for workloads that require higher availability." This describes the exact architecture proposed in the correct answer.

3. Google Cloud Documentation, GKE How-to guides, "Separate workloads in GKE": This guide explains the use of node taints and tolerations to control scheduling. To implement the solution, you would add a taint to the Spot VM node pool (e.g., cloud.google.com/gke-spot=true:NoSchedule) and add a corresponding toleration only to the fault-tolerant deployments, ensuring they are scheduled there while critical workloads are placed on the standard node pool.
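A brief sketch of the node pool setup, with hypothetical cluster and pool names; fault-tolerant workloads are then steered to the cheaper pool with a nodeSelector on the cloud.google.com/gke-spot label:

    # Standard cluster with a default node pool of standard VMs for critical deployments
    gcloud container clusters create my-cluster --region=us-central1

    # Additional Spot VM node pool for fault-tolerant deployments
    gcloud container node-pools create spot-pool \
      --cluster=my-cluster --region=us-central1 --spot

    # Fault-tolerant Deployments then set nodeSelector: cloud.google.com/gke-spot: "true"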

Question 23

You need to deploy an application in Google Cloud using serverless technology. You want to test a new version of the application with a small percentage of production traffic. What should you do?
Options
A: Deploy the application to Cloud Run. Use gradual rollouts for traffic splitting.
B: Deploy the application to Google Kubernetes Engine. Use Anthos Service Mesh for traffic splitting.
C: Deploy the application to Cloud Functions. Specify the version number in the function's name.
D: Deploy the application to App Engine. For each new version, create a new service.
Show Answer
Correct Answer:
Deploy the application to Cloud Run. Use gradual rollouts for traffic splitting.
Explanation
Cloud Run is a fully managed serverless platform that enables you to run stateless containers. It is designed for deploying and scaling applications quickly. A key feature of Cloud Run is its native support for revisions and traffic management. You can deploy a new version of your application as a new revision and configure traffic splitting to direct a specific percentage of incoming requests to it. This allows for gradual rollouts (canary deployments), where the new version is tested with a small amount of live production traffic before being promoted to serve all traffic, directly addressing the scenario's requirements.
Why Incorrect Options are Wrong

B. Google Kubernetes Engine (GKE) is a container orchestration service, not a serverless platform in its standard mode. It requires management of the underlying cluster infrastructure.

C. While Cloud Functions is serverless, managing traffic by changing the function's name is not a supported or effective method for splitting production traffic between versions.

D. In App Engine, traffic is split between different versions within a single service. Creating a new service for each new version is an incorrect approach for canary testing.

References

1. Google Cloud Documentation - Cloud Run, "Rollbacks, gradual rollouts, and traffic migration": "Gradual rollouts (also sometimes called canary deployments) are a feature that allow you to slowly migrate traffic to a new revision... Cloud Run allows you to split traffic between multiple revisions." This document explicitly details the feature requested in the question.

2. Google Cloud Documentation - "Choosing a serverless option": This page categorizes Cloud Run, Cloud Functions, and App Engine as Google's core serverless compute platforms, which confirms that GKE (Option B) is not the intended serverless solution for this question.

3. Google Cloud Documentation - App Engine, "Splitting Traffic": "After you deploy two or more versions to a service, you can split traffic between them... For example, you can route 5% of traffic to a new version to test it in a production environment." This confirms that traffic splitting in App Engine happens between versions, not by creating new services (Option D).

Question 24

Your company's security vulnerability management policy requires that a member of the security team have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?
Options
A: • Ensure that the Ops Agent is installed on the Compute Engine instance. • Create a custom metric in the Cloud Monitoring dashboard. • Provide the security team member with access to this dashboard.
B: • Ensure that the Ops Agent is installed on the Compute Engine instance. • Provide the security team member the roles/osconfig.inventoryViewer permission.
C: • Ensure that the OS Config agent is installed on the Compute Engine instance. • Provide the security team member the roles/osconfig.vulnerabilityViewer permission.
D: • Ensure that the OS Config agent is installed on the Compute Engine instance. • Create a log sink to a BigQuery dataset. • Provide the security team member with access to this dataset.
Show Answer
Correct Answer:
• Ensure that the OS Config agent is installed on the Compute Engine instance. • Provide the security team member the roles/osconfig.vulnerabilityViewer permission.
Explanation
The Google Cloud VM Manager suite is the service designed to manage operating systems for large VM fleets. A key component is the OS Config agent, which must be installed on the Compute Engine instance to collect OS inventory data. This data is then used by VM Manager to generate vulnerability reports. To grant the security team visibility into these specific reports without granting excessive permissions, you should assign them the predefined Identity and Access Management (IAM) role roles/osconfig.vulnerabilityViewer. This role provides the necessary read-only access to vulnerability report data, directly fulfilling the policy requirement with the principle of least privilege.
Why Incorrect Options are Wrong

A. The Ops Agent is used for collecting telemetry (metrics and logs) for Cloud Monitoring and Cloud Logging, not for generating OS vulnerability reports.

B. The Ops Agent is the incorrect agent for vulnerability management. The OS Config agent is required for the VM Manager service to function.

D. While the OS Config agent is correct, routing this data via a log sink to BigQuery is an unnecessarily complex and non-standard method for viewing vulnerability reports.

References

1. Google Cloud Documentation, VM Manager Overview: "VM Manager is a suite of tools that can be used to manage operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine. VM Manager helps drive efficiency by automating and reducing the manual effort of otherwise time-consuming tasks... To use VM Manager, you need to set up VM Manager, which includes installing the OS Config agent". This confirms the OS Config agent is the correct component.

2. Google Cloud Documentation, Viewing vulnerability reports: "To view vulnerability reports for a project, you must have the osconfig.vulnerabilityReports.get or osconfig.vulnerabilityReports.list IAM permission." This section details the process which relies on the OS Config agent and appropriate permissions.

3. Google Cloud Documentation, OS Config, Control access: "Vulnerability Report Viewer (roles/osconfig.vulnerabilityViewer): Provides read-only access to vulnerability report data." This document explicitly defines the role and its purpose, matching the requirements of the question.
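Granting the viewer role could look roughly like this, where the project ID and user address are placeholders:

    # Give the security team member read-only access to vulnerability reports
    gcloud projects add-iam-policy-binding my-project \
      --member="user:security-analyst@example.com" \
      --role="roles/osconfig.vulnerabilityViewer"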

Question 25

You are planning to migrate your on-premises data to Google Cloud. The data includes: • 200 TB of video files in SAN storage • Data warehouse data stored on Amazon Redshift • 20 GB of PNG files stored on an S3 bucket. You need to load the video files into a Cloud Storage bucket, transfer the data warehouse data into BigQuery, and load the PNG files into a second Cloud Storage bucket. You want to follow Google-recommended practices and avoid writing any code for the migration. What should you do?
Options
A: Use gcloud storage for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
B: Use Transfer Appliance for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
C: Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
D: Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
Show Answer
Correct Answer:
Use Transfer Appliance for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
Explanation
This solution selects the most appropriate, Google-recommended, no-code service for each specific migration task based on the data type and scale. 200 TB of video files: for large offline transfers from an on-premises location (typically more than 20 TB, or whenever an online transfer would take more than a week), Transfer Appliance is the recommended practice. It is a physical appliance for secure, offline data shipping. Amazon Redshift data: BigQuery Data Transfer Service is the purpose-built, fully managed, no-code service designed specifically to automate data migration from data warehouses like Amazon Redshift directly into BigQuery. 20 GB of PNG files: Storage Transfer Service is the ideal managed service for online transfers from other cloud providers, such as Amazon S3, to Cloud Storage.
Why Incorrect Options are Wrong

A. Dataflow requires writing code using the Apache Beam SDK, which violates a core requirement. Additionally, gcloud storage is not the recommended tool for a 200 TB migration due to potential issues with network reliability and performance over long periods.

C. While Storage Transfer Service can be used for on-premises transfers, Transfer Appliance (Option B) is the more suitable and commonly recommended solution for a 200 TB dataset, as an online transfer of this magnitude is often impractical without a dedicated high-speed connection.

D. Cloud Data Fusion is an ETL/ELT data integration service, not a primary tool for bulk file migration. Dataflow violates the no-code requirement.

References

1. Transfer Appliance for 200 TB: Google Cloud's official documentation on data transfer options explicitly recommends Transfer Appliance for transfers from on-premises locations involving more than 20 TB of data, especially when the transfer would take more than one week online.

Source: Google Cloud Documentation, "Choosing a transfer option", section "Data from an on-premises location".

2. BigQuery Data Transfer Service for Redshift: The official documentation identifies this service as the primary method for migrating from Amazon Redshift.

Source: Google Cloud Documentation, "BigQuery Data Transfer Service", section "Amazon Redshift transfers".

3. Storage Transfer Service for S3: The documentation recommends Storage Transfer Service for large-scale data transfers from other cloud providers like Amazon S3.

Source: Google Cloud Documentation, "Storage Transfer Service", section "Overview".

4. Dataflow vs. No-Code: The Dataflow documentation clearly states that development involves using the Apache Beam SDKs, which is a coding activity.

Source: Google Cloud Documentation, "Dataflow", section "Develop".

Question 26

Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not responsive. You want to replace this VM in the MIG quickly. What should you do?
Options
A: Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.
B: Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.
C: Use the gcloud compute instances update command with a REFRESH action for the VM.
D: Update and apply the instance template of the MIG.
Show Answer
Correct Answer:
Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.
Explanation
The most direct and precise method to replace a specific, unresponsive virtual machine within a Managed Instance Group (MIG) is by using the gcloud compute instance-groups managed recreate-instances command. This command allows you to target one or more specific instances by name for deletion and subsequent recreation based on the MIG's current instance template. This action is quick, targeted, and resolves the issue without affecting the other healthy instances in the group.
Why Incorrect Options are Wrong

A. While you can recreate an instance from the Google Cloud console, the terminology "Replace VMs" is imprecise. The correct action in the console for a specific instance is labeled "Recreate," making this option less accurate than option B.

C. The gcloud compute instances update command is used for standalone VM instances, not for instances that are part of a managed instance group. Attempting to modify a MIG-managed instance directly is incorrect.

D. Updating and applying the instance template of the MIG initiates a rolling update across the entire group. This is a much larger operation intended for deploying new configurations, not for quickly fixing a single failed VM.

References

1. Official Google Cloud Documentation - Manually recreating instances in a MIG: "You can selectively recreate one or more instances in a managed instance group (MIG). Recreating an instance deletes the existing instance and creates a new one with the same name... Use the gcloud compute instance-groups managed recreate-instances command." This document explicitly states that recreate-instances is the command for this task.

2. Official Google Cloud SDK Documentation - gcloud compute instance-groups managed recreate-instances: The command reference details its usage: "schedules a recreation of one or more virtual machine instances in a managed instance group." This confirms it is the specific tool for the described scenario.

3. Official Google Cloud Documentation - Applying new configurations to VMs in a MIG: "To apply a new configuration to existing VMs in a MIG, you can... set up a rolling update." This shows that updating the template (Option D) is for configuration changes, not for fixing a single failed instance.

4. Official Google Cloud SDK Documentation - gcloud compute instances update: The documentation for this command focuses on updating metadata, machine type, and other properties of a standalone instance, confirming it is not the correct tool for managing instances within a MIG.
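A short sketch of the command, assuming the MIG is called web-mig in zone us-central1-a and the unresponsive VM is named web-mig-abcd:

    # Delete and recreate only the unresponsive instance, based on the MIG's current template
    gcloud compute instance-groups managed recreate-instances web-mig \
      --instances=web-mig-abcd --zone=us-central1-a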

Question 27

You are working in a team that has developed a new application that needs to be deployed on Kubernetes. The production application is business critical and should be optimized for reliability. You need to provision a Kubernetes cluster and want to follow Google-recommended practices. What should you do?
Options
A: Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.
B: Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.
C: Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.
D: Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.
Show Answer
Correct Answer:
Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.
Explanation
For a business-critical application requiring high reliability, GKE Autopilot is the recommended approach. Autopilot clusters are regional by default, providing high availability by distributing the control plane and nodes across multiple zones. This architecture protects against zonal failures. Enrolling the cluster in the stable release channel ensures it runs on the most mature and thoroughly tested GKE version, which is crucial for production environments where stability is prioritized over early access to new features. This combination aligns perfectly with Google's recommended practices for deploying reliable, production-grade applications.
Why Incorrect Options are Wrong

A. The rapid release channel is not recommended for business-critical production workloads as it may contain bugs and receives less testing, prioritizing new features over stability.

C. A zonal cluster is a single point of failure. If its zone experiences an outage, the entire cluster becomes unavailable, which is unacceptable for a business-critical application.

D. While a regional cluster is correct for reliability, the rapid release channel is inappropriate for a production environment that must be optimized for stability.

References

1. GKE Autopilot Overview: "Autopilot clusters are regional, which means the control plane and nodes are spread across multiple zones in a region. This provides higher availability than zonal clusters."

Source: Google Cloud Documentation, "Autopilot overview".

2. Release Channels: "The Stable channel...is recommended for production clusters that require the highest level of stability...Updates on this channel have passed all internal Google Cloud testing and have been qualified for production."

Source: Google Cloud Documentation, "Release channels".

3. Cluster Availability (Regional vs. Zonal): "Regional clusters increase the availability of your applications by replicating the control plane and nodes across multiple zones in a region...For production workloads, we recommend regional clusters."

Source: Google Cloud Documentation, "Regional clusters".

4. GKE Best Practices: "Use release channels to balance stability and features...For production clusters, we recommend the Stable or Regular channel." and "For production workloads, use regional clusters for higher availability."

Source: Google Cloud Documentation, "Best practices for running cost-optimized Kubernetes applications on GKE", sections "Use release channels to balance stability and features" and "Use regional clusters for higher availability".

Question 28

Your company requires all developers to have the same permissions, regardless of the Google Cloud project they are working on. Your company's security policy also restricts developer permissions to Compute Engine, Cloud Functions, and Cloud SQL. You want to implement the security policy with minimal effort. What should you do?
Options
A: • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization. • Copy the role across all projects created within the organization with the gcloud iam roles copy command. • Assign the role to developers in those projects.
B: • Add all developers to a Google group in Google Groups for Workspace. • Assign the predefined role of Compute Admin to the Google group at the Google Cloud organization level.
C: • Add all developers to a Google group in Cloud Identity. • Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the Google group for each project in the Google Cloud organization.
D: • Add all developers to a Google group in Cloud Identity. • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the Google Cloud organization level. • Assign the custom role to the Google group.
Show Answer
Correct Answer:
• Add all developers to a Google group in Cloud Identity. • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the Google Cloud organization level. • Assign the custom role to the Google group.
Explanation
This solution is the most efficient and scalable. Using a Google Group (managed via Cloud Identity or Google Workspace) centralizes user management. Creating a single custom role at the organization level allows you to define the specific set of required permissions (Compute Engine, Cloud Functions, Cloud SQL) once. This organization-level role is then available for use in all projects within the organization. By assigning this custom role to the Google Group at the organization level, all developers in the group inherit the exact same permissions across all projects automatically. This "define once, apply everywhere" model perfectly aligns with the requirements for consistency and minimal effort.
Why Incorrect Options are Wrong

A. Copying a custom role to every project is a manual, repetitive task that does not scale and violates the "minimal effort" principle. It creates management overhead for each new project.

B. The predefined Compute Admin role only grants permissions for Compute Engine. It does not include the required permissions for Cloud Functions or Cloud SQL, thus failing to meet the policy.

C. Assigning multiple roles to the group for each project is inefficient. This requires repetitive configuration for every project and is prone to inconsistencies, contradicting the "minimal effort" requirement.

References

1. Organization-level Custom Roles: Google Cloud documentation states, "If you want to create a custom role that can be used to grant access to resources in any project in your organization, you can create the role at the organization level." This supports creating the role once for all projects.

Source: Google Cloud Documentation, "IAM custom roles," section "Custom role availability."

2. Using Groups for Role Management: Google Cloud's best practices recommend using groups to manage principals. "We recommend that you grant roles to groups instead of individual users... you can adjust group membership, and the policy bindings update automatically. This practice makes policy management simpler and less error-prone..."

Source: Google Cloud Documentation, "Policy Troubleshooter overview," section "Best practices for using Identity and Access Management."

3. IAM Inheritance: Policies are inherited down the resource hierarchy. "When you set a policy at a high level in the resource hierarchy, such as the organization or folder level, the access grant is inherited by all resources under it." Assigning the role to the group at the organization level ensures it applies to all projects.

Source: Google Cloud Documentation, "Understanding the resource hierarchy," section "IAM policy inheritance."

4. Predefined Role Limitations: The official documentation for the Compute Admin role (roles/compute.admin) lists permissions related only to Compute Engine resources, confirming it does not grant access to Cloud Functions or Cloud SQL.

Source: Google Cloud Documentation, "IAM basic and predefined roles reference," section "Compute Engine roles."

Question 29

You used the gcloud container clusters command to create two Google Kubernetes Engine (GKE) clusters, prod-cluster and dev-cluster. • prod-cluster is a standard cluster. • dev-cluster is an Autopilot cluster. When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?

Options
A-D: The answer choices for this question were provided as images in the source material and could not be recovered as text. Based on the explanation below, the correct choice fetches credentials for dev-cluster with gcloud container clusters get-credentials (including a location flag) before running kubectl get nodes, while the other choices rely on gcloud config set or omit the required location flag.
Show Answer
Correct Answer:
C
Explanation
The kubectl command-line tool operates on the cluster defined by the current-context in its configuration file (kubeconfig). The scenario indicates that the current context is set to prod-cluster. To interact with dev-cluster, the context must be switched. The command gcloud container clusters get-credentials [CLUSTER_NAME] is specifically designed for this purpose. It fetches the endpoint and authentication data for the specified GKE cluster and automatically configures kubectl by updating the kubeconfig file to use that cluster as the current context. The command requires a location flag (--region or --zone). Therefore, running gcloud container clusters get-credentials dev-cluster with the appropriate --region flag correctly updates the context, allowing the subsequent kubectl get nodes command to target dev-cluster.
Why Incorrect Options are Wrong

A: The gcloud config set commands only modify the default properties for the gcloud tool itself; they do not alter the active kubectl context.

B: This command sets the default cluster for subsequent gcloud container commands but does not update the kubeconfig file used by kubectl.

D: This command is syntactically incomplete. gcloud container clusters get-credentials requires a location flag (--zone or --region) unless a default is already set in the gcloud configuration.

References

1. Google Cloud Documentation, "Configuring cluster access for kubectl": This guide explicitly states, "To configure kubectl to point to a GKE cluster, use the gcloud container clusters get-credentials command." It provides the command syntax, which includes the cluster name and a location flag.

Source: Google Cloud, GKE Documentation, How-to guides, "Configuring cluster access for kubectl". Section: "Generate a kubeconfig entry".

2. Google Cloud SDK Documentation, gcloud container clusters get-credentials: The official reference for the command confirms its function: "gcloud container clusters get-credentials updates a kubeconfig file with credentials and endpoint information for a cluster in GKE." The documentation also lists CLUSTER_NAME and a location flag (--region or --zone) as required arguments.

Source: Google Cloud SDK Documentation, gcloud container clusters get-credentials.

3. Google Cloud SDK Documentation, gcloud config set: This documentation clarifies that gcloud config set is used to "Set a property in your active configuration" for the gcloud command-line tool, such as compute/zone or core/project. It does not mention any interaction with or modification of the kubectl configuration.

Source: Google Cloud SDK Documentation, gcloud config set.
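A short sketch of switching kubectl to the Autopilot cluster, assuming dev-cluster lives in the us-central1 region:

    # Fetch credentials and update the kubeconfig current-context
    gcloud container clusters get-credentials dev-cluster --region=us-central1

    # kubectl now targets dev-cluster
    kubectl get nodes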

Question 30

You have a Bigtable instance that consists of three nodes that store personally identifiable information (PII) data. You need to log all read or write operations, including any metadata or configuration reads of this database table, in your company's Security Information and Event Management (SIEM) system. What should you do?
Options
A: • Navigate to Cloud Monitoring in the Google Cloud console, and create a custom monitoring job for the Bigtable instance to track all changes. • Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.
B: • Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write, and Admin Read logs for the Bigtable instance. • Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.
C: • Install the Ops Agent on the Bigtable instance during configuration. • Create a service account with read permissions for the Bigtable instance. • Create a custom Dataflow job with this service account to export logs to the company's SIEM system.
D: • Navigate to the Audit Logs page in the Google Cloud console, and enable Admin Write logs for the Bigtable instance. • Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.
Show Answer
Correct Answer:
• Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write, and Admin Read logs for the Bigtable instance. • Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.
Explanation
The requirement is to log all data access (read/write) and administrative read operations for a Bigtable instance and send them to a SIEM. Google Cloud Audit Logs are the designated service for this purpose. Specifically, Data Access audit logs (which include Data Read and Data Write) and Admin Activity audit logs (which include Admin Read) must be enabled for the Bigtable service. Data Access logs are disabled by default and must be explicitly enabled. The most robust and scalable method to export these logs to an external system like a SIEM is by configuring a Cloud Logging sink. A sink with a Pub/Sub topic as its destination allows for a decoupled, real-time streaming architecture. The SIEM system can then be configured as a subscriber to this Pub/Sub topic to ingest the log entries.
Why Incorrect Options are Wrong

A: Cloud Monitoring is for performance metrics and infrastructure health, not for detailed audit logging of user and data activities. It cannot capture the specifics of read/write operations as required.

C: Bigtable is a fully managed Google Cloud service. Users do not have access to the underlying infrastructure, making it impossible to install an Ops Agent or any other software on the nodes.

D: This option is incomplete. Enabling only Admin Write logs fails to capture the required Data Read, Data Write, and Admin Read operations, which are critical for auditing access to PII data.

References

1. Cloud Audit Logs Overview: The official documentation specifies the different types of audit logs. "Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data." This directly maps to the question's requirement.

Source: Google Cloud Documentation, "Cloud Audit Logs overview", Section: "Types of audit logs".

2. Configuring Data Access Audit Logs: This guide details the process of enabling Data Access audit logs, which are typically disabled by default. The steps involve using the Google Cloud console's IAM & Admin > Audit Logs page to enable the required log types (Data Read, Data Write) for the specific service (Bigtable).

Source: Google Cloud Documentation, "Configure Data Access audit logs", Section: "Configuring with the Google Cloud console".

3. Exporting Logs with Sinks: The documentation outlines sinks as the primary method for routing log entries to supported destinations, including Pub/Sub. "To export log entries from Logging, you create a sink... The sink includes a destination and a filter that selects the log entries to export." This supports using a sink to send logs to a Pub/Sub topic for SIEM integration.

Source: Google Cloud Documentation, "Overview of log exports", Section: "Sinks".

4. Bigtable as a Managed Service: The Bigtable product overview describes it as a "fully managed, scalable NoSQL database service." This classification as "fully managed" implies that Google handles the underlying infrastructure, and users cannot install agents or access the host machines.

Source: Google Cloud Documentation, "Cloud Bigtable overview".

Question 31

You have an on-premises data analytics set of binaries that processes data files in memory for about 45 minutes every midnight. The sizes of those data files range from 1 gigabyte to 16 gigabytes. You want to migrate this application to Google Cloud with minimal effort and cost. What should you do?
Options
A: Upload the code to Cloud Functions. Use Cloud Scheduler to start the application.
B: Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the container.
C: Create a container for the set of binaries. Deploy the container to Google Kubernetes Engine (GKE) and use the Kubernetes scheduler to start the application.
D: Lift and shift to a VM on Compute Engine. Use an instance schedule to start and stop the instance.
Show Answer
Correct Answer:
Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the container.
Explanation
Cloud Run jobs are specifically designed for executing containerized applications that run a task to completion and then exit. This model perfectly fits the described workload of a batch analytics process. By containerizing the existing binaries, the application can be migrated with minimal code changes ("minimal effort"). The pricing model for Cloud Run jobs is pay-per-use, meaning you are only billed for the CPU and memory consumed during the 45-minute execution. This, combined with the serverless nature (no infrastructure to manage), makes it the most cost-effective and lowest-effort solution among the choices.
Why Incorrect Options are Wrong

A. Cloud Functions would require refactoring the binaries into a function, which contradicts the "minimal effort" requirement. Functions are better suited for event-driven, lighter-weight code, not long-running binary processes.

C. Google Kubernetes Engine (GKE) introduces significant operational overhead and cost. You would need to manage a cluster and pay for the underlying nodes, which is not cost-effective for a job running only 45 minutes daily.

D. A Compute Engine VM is a viable "lift and shift" option, but it is less cost-effective. Even with an instance schedule, you incur costs for the persistent disk 24/7, making it more expensive than Cloud Run's per-second billing model.

References

1. Google Cloud Documentation - Cloud Run Jobs: "Jobs are used to run code that performs a set of tasks and then quits when the work is done. This is in contrast to services, which run continuously to respond to web requests. Jobs are ideal for tasks like database migrations, or other batch processing jobs." This directly aligns with the use case in the question.

Source: Google Cloud Documentation, "What is Cloud Run", Section: "Services and jobs".

2. Google Cloud Documentation - Choosing a compute option: This guide helps select the right service. For containerized applications that are batch jobs, Cloud Run is recommended for its simplicity and cost-effectiveness over GKE, which is suited for complex microservices orchestration.

Source: Google Cloud Architecture Center, "Choosing the right compute option: a guide to Google Cloud products".

3. Google Cloud Documentation - Cloud Run Pricing: "Cloud Run jobs pricing is based on the resources your job uses... You are billed for the CPU and memory allocated to your job's tasks, with per-second granularity." This supports the "minimal cost" argument.

Source: Google Cloud Documentation, "Cloud Run pricing", Section: "Jobs pricing".

4. Google Cloud Documentation - Compute Engine Pricing: "Persistent disk storage is charged for the amount of provisioned space for each VM... even if they are stopped." This confirms that option D would incur persistent costs, making it more expensive than option B.

Source: Google Cloud Documentation, "Compute Engine pricing", Section: "Persistent Disk".

Question 32

You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has its own Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's organization as in your own organization. What should you do?
Options
A: In the Google Cloud console for your organization, select Create role from selection, and choose destination as the startup company's organization
B: In the Google Cloud console for the startup company, select Create role from selection and choose source as the startup company's Google Cloud organization.
C: Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud Organization as the destination.
D: Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company's organization as the destination.
Show Answer
Correct Answer:
Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud Organization as the destination.
Explanation
The most efficient and scalable method to replicate a custom IAM role from one Google Cloud organization to another is the gcloud iam roles copy command. By specifying the source role from your current organization and setting the --dest-organization flag to the startup's organization ID, you create an identical, centrally managed role in the new organization. This organization-level custom role can then be granted to the SREs on any project within the startup's organization, ensuring consistent permissions and simplifying future management. This approach avoids the manual, error-prone, and unscalable process of copying the role to each project individually.
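A minimal sketch of the copy, using hypothetical role IDs and organization IDs:

# Copy the custom SRE role from the source organization to the startup's organization.
gcloud iam roles copy \
    --source="sreCustomRole" \
    --source-organization=111111111111 \
    --destination="sreCustomRole" \
    --dest-organization=222222222222

Once the role exists at the destination organization level, it can be granted to the SREs on any project in that organization.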
Why Incorrect Options are Wrong

A. The Google Cloud console's "Create role from selection" feature is used for creating a new custom role within the same project or organization; it does not support copying roles to a different organization.

B. The source of the custom role is your original organization, not the startup's. Furthermore, the console UI is not the correct tool for copying roles between organizations.

D. Copying the role to every individual project is highly inefficient and creates a significant management burden. This approach is not scalable, as the role would need to be copied again for any new projects.

References

1. Google Cloud SDK Documentation, gcloud iam roles copy: The official documentation for the command provides flags for specifying a source and destination organization. The command gcloud iam roles copy --source-organization=SOURCE_ORG_ID --dest-organization=DEST_ORG_ID is the direct method for this task.

Source: Google Cloud SDK Documentation, gcloud iam roles copy, Command Flags section. https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy

2. Google Cloud IAM Documentation, "Creating and managing custom roles": This document explains the scope of custom roles. It states, "If you want the custom role to be available for any project in an organization, create the role at the organization level." Copying the role to the destination organization (Option C) achieves this, whereas copying to individual projects (Option D) does not follow this best practice.

Source: Google Cloud IAM Documentation, "Creating and managing custom roles", section "Custom role availability". https://cloud.google.com/iam/docs/creating-custom-roles#custom-role-availability

3. Google Cloud IAM Documentation, "Copying an existing role": This section details the process for duplicating roles and explicitly recommends using the gcloud iam roles copy command for copying a role to another project or organization.

Source: Google Cloud IAM Documentation, "Creating and managing custom roles", section "Copying an existing role". https://cloud.google.com/iam/docs/creating-custom-roles#copying-role

Question 33

After a recent security incident, your startup company wants better insight into what is happening in the Google Cloud environment. You need to monitor unexpected firewall changes and instance creation. Your company prefers simple solutions. What should you do?
Options
A: Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
B: Install Kibana on a Compute Engine instance. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub. Configure the Pub/Sub topic to push messages to the Kibana instance. Analyze the logs in Kibana in real time.
C: Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
D: Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.
Show Answer
Correct Answer:
Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
Explanation
The most direct and simplest solution is to use the integrated features of Google Cloud's operations suite. Cloud Audit Logs automatically capture administrative activities like firewall rule changes and VM instance creation. By creating filters in Cloud Logging for these specific events, you can define log-based metrics in Cloud Monitoring. These metrics track the occurrence of the specified events. You can then easily create alerting policies in Cloud Monitoring that trigger notifications whenever these metrics cross a defined threshold (e.g., more than zero occurrences), providing immediate insight into unexpected activities. This approach is fully managed, requires no additional infrastructure, and directly meets the company's preference for simplicity.
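For illustration, the log-based metrics could be created with gcloud roughly as follows. The metric names and filters are examples only; the alerting policies would then be attached to these metrics in Cloud Monitoring.

# Count Admin Activity audit log entries for firewall rule changes (example filter).
gcloud logging metrics create firewall-change-count \
    --description="Firewall rule inserts, updates, and deletes" \
    --log-filter='resource.type="gce_firewall_rule" AND logName:"cloudaudit.googleapis.com%2Factivity"'

# Count Compute Engine instance creation events (example filter).
gcloud logging metrics create instance-create-count \
    --description="Compute Engine instance creation" \
    --log-filter='resource.type="gce_instance" AND protoPayload.methodName:"compute.instances.insert"'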
Why Incorrect Options are Wrong

B: This solution is overly complex. It requires provisioning and managing a Compute Engine instance, installing and configuring a third-party stack (Kibana), and setting up a multi-service pipeline. This contradicts the requirement for a simple solution.

C: Firewall Rules Logging records network traffic connections that are allowed or denied by firewall rules. It does not log the creation, modification, or deletion of the firewall rules themselves, nor does it address instance creation.

D: This approach is designed for long-term storage and periodic, batch analysis, not for real-time monitoring and alerting. While powerful for forensics, it does not provide the immediate insight needed to respond to ongoing security events.

References

1. Google Cloud Documentation, Cloud Logging, "Overview of log-based metrics": "Log-based metrics are Cloud Monitoring metrics that are derived from the content of log entries. ... You can use log-based metrics to create charts and alerting policies in Cloud Monitoring." This document confirms the core mechanism of option A.

2. Google Cloud Documentation, Cloud Monitoring, "Alerting on log-based metrics": "You can create an alerting policy that notifies you when a log-based metric meets a specified condition." This directly supports the alerting aspect of option A as the simplest method.

3. Google Cloud Documentation, Cloud Audit Logs, "Compute Engine audit logging information": This page lists the audited methods for the Compute Engine API. It shows that actions like v1.compute.firewalls.insert and v1.compute.instances.insert are captured as Admin Activity audit logs, which are the source data for the solution in option A.

4. Google Cloud Documentation, VPC, "Firewall Rules Logging overview": "Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. For example, you can determine if a firewall rule designed to deny traffic is functioning as intended." This clarifies that this feature logs traffic connections, not administrative changes to the rules themselves, making option C incorrect.

5. Google Cloud Documentation, Cloud Logging, "Overview of routing and storage": This document describes sink destinations. It positions BigQuery for "big data analysis" and Cloud Storage for "long-term, cost-effective storage," highlighting that these are not the primary tools for simple, real-time alerting, which makes option D less suitable than A.

Question 34

Your continuous integration and delivery (CI/CD) server can't execute Google Cloud actions in a specific project because of permission issues. You need to validate whether the used service account has the appropriate roles in the specific project. What should you do?
Options
A: Open the Google Cloud console, and run a query to determine which resources this service account can access.
B: Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.
C: Open the Google Cloud console, and check the organization policies.
D: Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from the folder or organization levels.
Show Answer
Correct Answer:
Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from the folder or organization levels.
Explanation
The most direct and fundamental method to validate a service account's permissions is to inspect its Identity and Access Management (IAM) role bindings. The Google Cloud console's IAM page for the specific project displays all principals, including the service account in question, and the roles assigned to them. Crucially, this view also shows roles that are inherited from parent resources in the hierarchy, such as folders and the organization. This comprehensive view allows you to confirm if the service account has the necessary roles to perform its tasks, directly addressing the core of the permission issue.
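As a concrete example, the same check can be performed from the command line. The project ID and service account email below are placeholders.

# List the roles granted to the service account on the project (hypothetical IDs).
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"

# Repeat at the folder and organization levels to account for inherited roles, for example:
gcloud organizations get-iam-policy 111111111111 \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"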
Why Incorrect Options are Wrong

A. This describes using the Policy Analyzer. While it can determine effective permissions, the primary and most direct step to validate assigned roles is to view the IAM policy itself.

B. Audit logs confirm that permission errors have occurred, but they don't show the current role configuration. They are for post-event analysis, not for validating the current state of permissions.

C. Organization policies apply constraints on resource configurations (e.g., restricting locations), but they do not grant permissions. The issue described is a lack of permissions, which is managed by IAM roles.


References

1. Google Cloud Documentation, IAM - View access to a project, folder, or organization: "You can get a list of all IAM principals... who have IAM roles for a project... The list includes principals who have been granted roles on the project directly, and principals who have inherited roles from a folder or organization." This directly supports option D as the standard procedure for checking roles and inheritance.

2. Google Cloud Documentation, IAM - Policy inheritance: "When you grant a role to a user at a level in the resource hierarchy, they inherit the permissions from that role for all resources under that level... For example, if you grant a user the Project Editor role at the organization level, they can edit any project in the organization." This explains the inheritance concept mentioned in option D.

3. Google Cloud Documentation, Policy Intelligence - Policy Analyzer overview: "Policy Analyzer lets you find out what principals have access to which Google Cloud resources." This describes the functionality in option A, which is a powerful but secondary tool to directly viewing the IAM policy for role validation.

4. Google Cloud Documentation, Cloud Audit Logs - Overview: "Cloud Audit Logs maintains the following audit logs for each project, folder, and organization: Admin Activity audit logs, Data Access audit logs, System Event audit logs, and Policy Denied audit logs." This confirms that logs are for recording events, which is the focus of option B.

Question 35

Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute Engine instances. The pipeline will manage the entire cloud infrastructure through code. How can you ensure that the pipeline has appropriate permissions while your system is following security best practices?
Options
A: • Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure provisioning. • Use the human approver's IAM account for the provisioning.
B: • Attach a single service account to the compute instances. • Add minimal rights to the service account. • Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.
C: • Attach a single service account to the compute instances. • Add all required Identity and Access Management (IAM) permissions to this service account to create, update, or delete resources.
D: • Create multiple service accounts, one for each pipeline, with the appropriate minimal Identity and Access Management (IAM) permissions. • Use a secret manager service to store the key files of the service accounts. • Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.
Show Answer
Correct Answer:
• Attach a single service account to the compute instances. • Add minimal rights to the service account. • Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.
Explanation
This approach follows the principle of least privilege and secure credential management. By attaching a low-privilege service account to the Compute Engine instance, the instance's default operational state is secure. When the CI/CD pipeline needs to perform high-privilege actions like managing infrastructure, it uses its iam.serviceAccountTokenCreator permission to impersonate a separate, more powerful identity (a dedicated service account is the standard pattern). This provides temporary, just-in-time, audited access to elevated permissions, which is a significant security best practice. It avoids exposing long-lived, high-privilege credentials directly on the CI/CD worker instance.
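A minimal sketch of this pattern with hypothetical service account emails: the low-privilege runtime account attached to the CI/CD instance is allowed to impersonate a dedicated, higher-privilege deployer account (the usual target of impersonation, as noted above), and individual gcloud calls then use the --impersonate-service-account flag.

# Allow the runtime SA on the CI/CD instance to mint tokens for the deployer SA.
gcloud iam service-accounts add-iam-policy-binding \
    deployer@my-project.iam.gserviceaccount.com \
    --member="serviceAccount:ci-runtime@my-project.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"

# Privileged infrastructure actions are then executed through impersonation.
gcloud compute networks create pipeline-test-network \
    --impersonate-service-account=deployer@my-project.iam.gserviceaccount.com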
Why Incorrect Options are Wrong

A. Using a human's account for an automated process is an anti-pattern. It breaks automation and creates security risks associated with user credentials being used programmatically.

C. Attaching a service account with all required permissions directly to the instance violates the principle of least privilege, making the instance a high-value target for attackers.

D. Creating and managing service account key files is strongly discouraged. It introduces the risk of key leakage and adds the overhead of key rotation and management.

References

1. Google Cloud Documentation, "Service account impersonation": "Impersonation lets a service account or user act as another service account... When you use impersonation, you give a principal (the one doing the impersonating) a limited and temporary permission to act as a service account with more privilege." This directly supports the mechanism described in option B.

2. Google Cloud Documentation, "Best practices for using service accounts", Section: "Use service account impersonation for temporary, elevated access": "By using impersonation, you can avoid elevating the principal's permissions permanently. This helps you enforce the principle of least privilege." This explicitly recommends the pattern in option B.

3. Google Cloud Documentation, "Best practices for working with service accounts", Section: "Avoid using service account keys": "Service account keys are a security risk if they are not managed correctly... We recommend that you use other, more secure ways to authenticate." This directly contradicts the approach in option D.

4. Google Cloud Security Foundations Guide, Page 33, Section: "4.2.2 Principle of least privilege": "Grant roles at the smallest scope possible... Grant roles that provide only the permissions required to perform a task." Option C violates this by granting all permissions upfront, while option B adheres to it by providing elevated permissions only when needed.
