Study Smarter for the Associate Cloud Engineer Exam with Our Free and Reliable Cloud Engineer Exam Questions – Updated for 2025.
At Cert Empire, we are focused on delivering the most accurate and up-to-date exam questions for students preparing for the Google Associate Cloud Engineer exam. To make preparation easier, we've made parts of our Associate Cloud Engineer exam resources free for everyone. You can practice as much as you like with our free Google Associate Cloud Engineer practice test.
Question 1
Show Answer
A. Enable Private Service Access on the Cloud Storage Bucket.
Private Service Access is used to connect your VPC to Google-managed services (like Cloud SQL) that reside in a separate, Google-owned VPC. It is not applicable for accessing global APIs like Cloud Storage.
B. Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
VPC Service Controls is a security feature to prevent data exfiltration by creating a service perimeter. It does not provide the network connectivity needed for an internal-only VM to reach the service in the first place.
D. Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.
Cloud NAT is primarily used to provide instances without external IPs with outbound access to the public internet. This violates the stated security policy. Private Google Access is the specific feature for accessing Google APIs privately.
1. Google Cloud Documentation, "Private Google Access overview": "With Private Google Access, VMs that only have internal IP addresses (no external IP addresses) can reach the external IP addresses of Google APIs and services. [...] You can use Private Google Access to access the external IP addresses of most Google APIs and services, including Cloud Storage..." This directly supports option C.
2. Google Cloud Documentation, "Choose a Cloud NAT product": "Cloud NAT enables Google Cloud virtual machine (VM) instances without external IP addresses and GKE clusters to connect to the internet." This confirms that Cloud NAT is for internet access, making option D incorrect as it violates the policy.
3. Google Cloud Documentation, "Private Service Access": "Private service access is a private connection between your VPC network and a network in a Google or third-party service. [...] For example, you can use private service access to connect to Cloud SQL..." This shows that option A is for a different type of service connection.
4. Google Cloud Documentation, "VPC Service Controls overview": "VPC Service Controls helps you mitigate the risk of data exfiltration from your Google-managed services..." This confirms that VPC Service Controls (option B) is a security measure, not a connectivity solution for this scenario.
Question 2
Show Answer
B: This option is incomplete. While having the correct permissions (like Organization Administrator) is necessary for the move, it omits the explicit and critical step of changing the project's billing account.
C: This is an overly complex and incorrect approach. Private Catalog is for managing and deploying approved solutions, not for migrating existing production projects. This method would require recreating all resources, leading to significant effort and downtime.
D: This method involves recreating the entire project's infrastructure from code, which is the opposite of "minimal effort." It doesn't move the existing project but rather creates a new one, requiring a separate, complex data migration strategy and causing service disruption.
---
1. Official Google Cloud Documentation - Moving a project:
"When you move a project, its original billing account will continue to be used... To change the billing account, you must have the billing.projectManager role on the destination billing account and the resourcemanager.projectBillingManager role on the project." This confirms that moving the project and changing the billing account are two separate, required steps.
Source: Google Cloud Documentation, Resource Manager, "Moving a project", Section: "Effect on billing".
2. Official Google Cloud Documentation - gcloud projects move:
The command gcloud projects move is the command-line interface for the projects.move API method. The documentation outlines the process for moving a project to a new organization or folder.
Source: Google Cloud SDK Documentation, gcloud projects move.
3. Official Google Cloud Documentation - Modifying a project's billing account:
"You can change the billing account that is used to pay for a project." This page details the permissions and steps required to link a project to a different billing account, confirming it is a distinct action from moving the project's resource hierarchy.
Source: Google Cloud Billing Documentation, "Enable, disable, or change billing for a project".
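As a rough sketch with placeholder IDs (not from the question), the move and the billing change are two separate commands, which is why option B is incomplete; on older SDK releases the billing command group may sit under gcloud beta:

# Move the project into the new organization.
gcloud projects move my-project --organization=123456789012
# Separately, link the project to the new billing account.
gcloud billing projects link my-project --billing-account=0X0X0X-0X0X0X-0X0X0X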
Question 3
Show Answer
B. Identity and Access Management (IAM) policies grant permissions to principals (who can do what), but they do not enforce constraints on resource attributes like location (where).
C. This is incorrect for the same reason as B; IAM is the wrong tool for enforcing location-based restrictions. This is the specific purpose of the Organization Policy Service.
D. This is incorrect for two reasons: IAM does not control resource locations, and policies are bound to resources (like projects or folders), not to roles.
1. Google Cloud Documentation, "Restricting resource locations": "To restrict the locations where your organization's resources can be created, you can use a resource locations organization policy. The resource locations organization policy constraint is gcp.resourceLocations." This document explicitly states that the gcp.resourceLocations constraint is the correct tool.
2. Google Cloud Documentation, "Organization Policy Constraints", gcp.resourceLocations: "Defines the set of locations where location-based Google Cloud resources can be created... This constraint will be checked at resource creation time." This confirms the specific constraint and its function.
3. Google Cloud Documentation, "Resource hierarchy for access control": "The Google Cloud resource hierarchy allows you to group projects under folders, and folders and projects under the organization... Policies set at higher levels in the hierarchy are inherited by the resources below them." This supports using a folder to apply the policy to multiple projects efficiently.
4. Google Cloud Documentation, "Overview of IAM": "IAM lets you grant granular access to specific Google Cloud resources and helps prevent access to other resources... IAM lets you adopt the security principle of least privilege". This documentation clarifies that IAM's focus is on permissions, not resource configuration constraints like location.
Question 4
Show Answer
A. A CNAME record cannot be used for the zone apex (mydomain.com). Additionally, A records map hostnames to IP addresses, not to other hostnames.
B. A CNAME record cannot be used for the zone apex. AAAA records are for mapping to IPv6 addresses, not for aliasing one hostname to another.
D. NS (Name Server) records are used to delegate a DNS zone to a set of authoritative name servers, not to point a hostname to an IP address or another hostname.
1. Google Cloud Documentation, Cloud DNS, "Add, modify, and delete records": Under the section for "CNAME record," the documentation states, "A CNAME record cannot exist at the zone apex." This directly invalidates options A and B. The documentation also defines an "A record" as mapping a domain name to an IPv4 address, which is the correct use case for the apex domain in this scenario.
2. Google Cloud Documentation, Cloud DNS, "Supported DNS record types": This page details the function of each record type. It confirms that A records are for IPv4 addresses, CNAMEs are for canonical names (aliases), and NS records are for name server delegation, supporting the reasoning for selecting C and rejecting D.
3. Internet Engineering Task Force (IETF), RFC 1034, "DOMAIN NAMES - CONCEPTS AND FACILITIES", Section 3.6.2: This foundational document for DNS specifies the CNAME rule: "If a CNAME RR is present at a node, no other data should be present". Since the zone apex must have SOA and NS records, a CNAME cannot be placed there. This provides the technical basis for why options A and B are incorrect.
4. Internet Engineering Task Force (IETF), RFC 1912, "Common DNS Operational and Configuration Errors", Section 2.4: This document clarifies common mistakes and states, "A CNAME record is not allowed to coexist with any other data. In other words, if suzy.podunk.xx is an alias for sue.podunk.xx, you can't also have an MX record for suzy.podunk.xx." This reinforces the rule against using a CNAME at the zone apex.
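For reference, a minimal Cloud DNS sketch (the zone name and IP address are placeholders) that creates the A record at the zone apex, which is the pattern the correct option relies on:

# Create an A record for the apex domain pointing at the service's IPv4 address.
gcloud dns record-sets create mydomain.com. \
    --zone=my-zone --type=A --ttl=300 --rrdatas=203.0.113.10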
Question 5
Show Answer
B: This option incorrectly describes an egress rule. The destination for an egress rule must be an IP CIDR range, not a target network tag. A target tag is used for ingress rules.
C: The firewall rule described is overly permissive. It allows traffic from all VPC IP addresses to the entire database subnet, which violates the requirement to only allow traffic from the application servers.
D: This option incorrectly describes an egress rule. The destination for an egress rule must be an IP CIDR range, not a target service account. A target service account is used for ingress rules.
1. Google Cloud Documentation - VPC firewall rules: "For ingress rules, you can use service accounts to define the source. For egress rules, you can use service accounts to define the destination... Using service accounts for the source of ingress rules and the destination of egress rules is more specific than using network tags." This supports using service accounts for source/target specification.
2. Google Cloud Documentation - Use firewall rules: Under the "Components of a firewall rule" section, the table for "Source for ingress rules" lists "Source service accounts". The table for "Destination for egress rules" lists only "Destination IPv4 or IPv6 ranges". This confirms that options B and D, which specify a target tag or service account for an egress rule's destination, are invalid configurations.
3. Google Cloud Documentation - Firewall rules overview: "You can configure firewall rules by using network tags or service accounts... If you need stricter control over rules, we recommend that you use service accounts instead of network tags." This highlights that the approach in option A is a recommended best practice for secure configurations.
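A minimal sketch of the recommended ingress rule, assuming hypothetical service accounts and a PostgreSQL port (none of these values come from the question):

# Allow only the application servers' service account to reach the database
# servers' service account on the database port.
gcloud compute firewall-rules create allow-app-to-db \
    --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:5432 \
    --source-service-accounts=app-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=db-sa@my-project.iam.gserviceaccount.com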
Question 6
Show Answer
A. While the gcloud CLI can deploy Marketplace solutions, it is generally more complex and less intuitive than using the graphical user interface, contradicting the "quick and easy" requirement.
C. Using Terraform is an excellent practice for infrastructure-as-code and repeatable deployments, but it requires writing configuration files and is more involved than a direct deployment from the Marketplace UI.
D. A manual installation using the provider's guide is the most time-consuming and complex option. It requires manually provisioning infrastructure and handling all software dependencies and configurations.
1. Google Cloud Documentation, "Overview of Google Cloud Marketplace": The documentation states, "Google Cloud Marketplace lets you quickly deploy functional software packages that run on Google Cloud... Some solutions are free to use, and for others, you pay for the software, or for the Google Cloud resources that you use, or both." This confirms the Marketplace is the intended tool for quick deployments.
2. Google Cloud Documentation, "Deploying a VM-based solution": This guide details the process of deploying a solution directly from the Marketplace console. The steps involve selecting a product and filling out a simple web form, after which "Cloud Deployment Manager deploys the solution for you." This demonstrates the ease of use compared to CLI or manual methods.
3. Google Cloud Documentation, "Deploying a solution by using Terraform": This document outlines the multi-step process for using Terraform, which includes creating a .tf configuration file. This confirms that while possible, it is not the simplest or quickest method for a one-time deployment.
Question 7
Show Answer
B. This approach decentralizes billing, creating significant administrative overhead and making cost tracking nearly impossible. It also contradicts the goal of not using engineers' personal payment information.
C. Invoiced billing is a payment option, not the mechanism that enables project creation. A startup may not meet the eligibility criteria, and this option omits the crucial step of granting permissions.
D. A Purchase Order (PO) is a financial instrument used with invoiced billing for tracking purposes. It does not solve the core problem of granting engineers permission to use a central billing account.
1. Google Cloud Documentation, "Overview of Cloud Billing concepts": This document states, "A Cloud Billing account is set up in Google Cloud and is used to pay for usage costs in your Google Cloud projects... To use Google Cloud resources in a project, billing must be enabled on the project. Billing is enabled when the project is linked to an active Cloud Billing account." This supports the fundamental need for a central billing account linked to projects.
2. Google Cloud Documentation, "Control access to Cloud Billing accounts with IAM," Section: "Billing account permissions": This page details the permissions required to manage billing. Specifically, the billing.projects.link permission "allows a user to link projects to the billing account." This is the exact permission needed by the engineers in the scenario.
3. Google Cloud Documentation, "Understand predefined Cloud Billing IAM roles," Section: "Billing Account User": The roles/billing.user role is described as granting permissions to link projects to a billing account. This is the standard role assigned to users who need to create projects under a corporate billing account.
4. Google Cloud Documentation, "Request invoiced billing," Section: "Eligibility requirements": This document outlines the criteria for invoiced billing, which includes being a registered business for at least one year and having a minimum spend, confirming that a 6-month-old startup might not qualify.
Question 8
Show Answer
A. The cloudresourcemanager.googleapis.com API is for programmatically managing projects, folders, and organizations. It does not enable other services like Compute Engine or Cloud Storage.
C. Enabling all APIs is a significant security risk as it violates the principle of least privilege. It exposes the project to services that are not needed, increasing the potential attack surface.
D. The gcloud init command is used to initialize or configure settings for the gcloud command-line tool, such as the default project, account, and region. It does not enable any APIs.
1. Official Google Cloud Documentation, Enabling and disabling services: "Before you can use a Google Cloud service, you must first enable the service's API for your Google Cloud project... We recommend that you enable APIs for only the services that your apps actually use." This supports the principle of enabling specific APIs. The page also provides the syntax gcloud services enable SERVICE_NAME, which matches option B.
Source: Google Cloud Documentation, "Enabling and disabling services".
2. Official Google Cloud Documentation, gcloud services enable command reference: This document confirms that gcloud services enable [SERVICE]... is the correct command to enable one or more APIs for a project.
Source: Google Cloud SDK Documentation, gcloud services enable.
3. Official Google Cloud Security Foundations Guide, Section 2.3, "Manage IAM permissions": This guide emphasizes the principle of least privilege. While discussing IAM, the principle extends to all resources, including enabling only necessary APIs. "Grant roles at the smallest scope... grant predefined roles instead of primitive roles... to enforce the principle of least privilege."
Source: Google Cloud Security Foundations Guide PDF, Page 13.
4. Official Google Cloud Documentation, gcloud init command reference: This document describes the function of gcloud init as: "Initializes or reinitializes gcloud CLI settings." It makes no mention of enabling APIs, confirming that option D is incorrect.
Source: Google Cloud SDK Documentation, gcloud init.
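For example, enabling only the APIs a project actually needs (the service names here are illustrative):

# Enable specific service APIs instead of everything.
gcloud services enable compute.googleapis.com storage.googleapis.com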
Question 9
Show Answer
A. This introduces significant and unnecessary complexity by adding Active Directory. Migrating users and setting up synchronization and federation is a major project when a suitable identity provider is already in place.
C. This is technically incorrect. Google Workspace and Cloud Identity are part of an integrated identity platform; you do not federate between them. It also misapplies MFA to domain-wide delegation instead of user accounts.
D. Introducing a third-party identity provider adds complexity, cost, and another system to manage. Synchronizing from Google Workspace to a third-party provider is also an unconventional and illogical data flow.
1. Using Groups for Access Control: Google Cloud's official documentation on Identity and Access Management (IAM) explicitly recommends using groups to manage roles for multiple users. This simplifies administration and scales effectively.
Source: Google Cloud Documentation, "Best practices for using IAM", Section: "Use groups and roles to manage access".
2. Google Workspace and Cloud Identity Integration: Google Workspace accounts are inherently Cloud Identity accounts. This means there is no need for federation or a separate identity system.
Source: Google Cloud Documentation, "Overview of Cloud Identity and Access Management", Section: "Identities".
3. Enforcing Multi-Factor Authentication (MFA): The Google Workspace Admin help center details how to enforce 2-Step Verification (Google's term for MFA) for all users in an organization to enhance security.
Source: Google Workspace Admin Help, "Protect your business with 2-Step Verification", Section: "Deploy 2-Step Verification".
4. Complexity of Federation: Setting up federation with an external identity provider (as suggested in A and D) is a multi-step process intended for organizations that already have an established external IdP as their source of truth, not for those already using Google Workspace.
Source: Google Cloud Documentation, "Setting up identity federation", provides an overview of the required configuration, highlighting the added complexity.
Question 10
Show Answer
A. A Managed Instance Group runs on Compute Engine virtual machines. This requires managing the underlying OS and instance configurations, which violates the "do not want to manage the infrastructure" requirement.
B. Google Kubernetes Engine (GKE) in Standard mode requires you to manage the worker node pools (the underlying VMs). This includes tasks like node upgrades and capacity planning, which constitutes infrastructure management.
C. Cloud Storage is designed for object storage, not as a primary repository for Docker images. Furthermore, GKE in Standard mode requires infrastructure management, making this option incorrect on two counts.
1. Google Cloud Documentation, "Cloud Run overview": "Cloud Run is a managed compute platform that lets you run containers directly on top of Google's scalable infrastructure. You can deploy code written in any programming language on Cloud Run if you can build a container image from it. ... With Cloud Run, you don't need to manage infrastructure..."
2. Google Cloud Documentation, "Choosing a compute option": This document compares various compute services. It categorizes Cloud Run as "Serverless" and highlights "No infrastructure management." In contrast, it places Compute Engine (used in option A) under "Infrastructure as a Service (IaaS)" and GKE (used in options B and C) under "Containers as a Service (CaaS)," both of which involve more infrastructure management than serverless options.
3. Google Cloud Documentation, "Artifact Registry overview": "Artifact Registry is a single place for your organization to manage container images and language packages (such as Maven and npm). It is fully integrated with Google Cloud's tooling and runtimes..." This confirms Artifact Registry as the correct repository for Docker images.
4. Google Cloud Documentation, "Comparing GKE cluster modes: Autopilot and Standard": "In Standard mode, you manage your cluster's underlying infrastructure, which gives you node configuration flexibility." This statement confirms that GKE Standard mode involves infrastructure management, which the question explicitly seeks to avoid.
Question 11
Show Answer
A. Attaching a public IP and opening port 22 to the internet is highly insecure. It exposes the instance to constant brute-force attacks and automated scans from malicious actors.
B. Using a third-party tool introduces additional costs, potential security vulnerabilities, and management overhead. It is less integrated and secure than a native Google Cloud solution like IAP.
D. A bastion host is a valid security pattern but is not the most cost-efficient, as it requires running and maintaining an additional VM. It also has more operational overhead than the fully managed IAP service.
1. Google Cloud Documentation - Use IAP for TCP forwarding: "Using IAP's TCP forwarding feature... you can control who can access administrative services like SSH and RDP on your backends from the public internet. This removes the need to run a bastion host..." This document also specifies the required firewall rule: "Create a firewall rule that allows ingress traffic from IAP's TCP forwarding IP range, 35.235.240.0/20, to the ports on your instances."
Source: Google Cloud Documentation, "Use IAP for TCP forwarding", Section: "When to use IAP TCP forwarding" and "Create a firewall rule".
2. Google Cloud Documentation - Securely connecting to VM instances: This guide compares connection methods and highlights the benefits of IAP. "IAP lets you establish a central authorization layer for applications that are accessed by HTTPS, so you can use an application-level access control model instead of relying on network-level firewalls." It positions IAP as a superior alternative to bastion hosts for securing access.
Source: Google Cloud Architecture Center, "Securely connecting to VM instances", Section: "Identity-Aware Proxy".
3. Google Cloud Documentation - Choose a connection option: This document explicitly contrasts IAP with bastion hosts. For IAP, it states you can connect "without requiring your VM instances to have external IP addresses." For bastion hosts, it notes the requirement to "provision and maintain an additional instance," which implies both cost and management overhead.
Source: Google Cloud Documentation, Compute Engine, "Choose a connection option".
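A hedged sketch of the IAP setup (network, VM name, and zone are placeholders; the 35.235.240.0/20 range comes from the documentation quoted above):

# Allow SSH ingress only from IAP's TCP forwarding range.
gcloud compute firewall-rules create allow-iap-ssh \
    --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:22 \
    --source-ranges=35.235.240.0/20
# Connect to the instance, which has no external IP, through the IAP tunnel.
gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap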
Question 12
Show Answer
A. Zonal persistent disks are confined to a single zone. If that zone fails, the disk becomes inaccessible. Restoring from a snapshot introduces latency and potential data loss (RPO), failing the "immediately available" requirement.
B. A zonal persistent disk cannot be attached to a VM in a different zone. This action is technically impossible, making it an invalid solution for a zonal failure.
C. While using the correct disk type (regional), the recovery process is flawed. Restoring from a snapshot is unnecessary and slow; the primary benefit of a regional disk is its immediate availability in the failover zone.
1. Google Cloud Documentation, "High availability with regional persistent disks": "Regional persistent disks provide synchronous replication of data between two zones in a region. ... If your primary zone becomes unavailable, you can fail over to your secondary zone. In the secondary zone, you can force-attach the regional persistent disk to a new VM instance." (See section: "Failing over your regional persistent disk").
2. Google Cloud Documentation, "About persistent disk snapshots": "Snapshots are global resources... Use snapshots to protect your data from unexpected failures... Creating a new persistent disk from a snapshot takes time..." This highlights that snapshots are for disaster recovery/backups, not instantaneous high-availability failover.
3. Google Cloud Documentation, "Persistent disk types": "Zonal persistent disks are located in a single zone. If a zone becomes unavailable, all zonal persistent disks in that zone are unavailable until the zone is restored." This confirms that options A and B are unsuitable for zonal failure scenarios.
Question 13
Show Answer
B. This option misrepresents how IAM works. You do not grant permissions to a policy; a policy binds members to roles. The permission string is also invalid.
C. A predefined role (roles/compute.admin) already exists for this purpose, which is preferred over creating a custom role. Applying it at the folder level is also the wrong scope for this requirement.
D. The roles/editor basic role grants broad permissions to create and update all resources in the project, directly violating the requirement to restrict modification permissions to Compute Engine.
1. Google Cloud Documentation - IAM Basic and predefined roles reference: This document explicitly states that the Editor role (roles/editor) grants "permissions to create, modify, and delete all resources." It also describes the Compute Admin role (roles/compute.admin) as providing "Full control of all Compute Engine resources." This supports why option D is wrong and option A is correct. (See "Basic roles" and "Compute Engine roles" sections).
2. Google Cloud Documentation - IAM Overview, Principle of least privilege: "When you grant roles, grant the least permissive role that's required. For example, if a user only needs to view resources in a project, grant them the Viewer role, not the Owner role." This principle supports choosing the specific roles/compute.admin over the broad roles/editor.
3. Google Cloud Documentation - Understanding roles: "Predefined roles are created and maintained by Google... Google automatically updates their permissions as new features and services are added to Google Cloud. When possible, we recommend that you use predefined roles instead of custom roles." This supports the choice of a predefined role (as in option A) over a custom one (as in option C).
Question 14
Show Answer
B: Cloud Functions is less suitable for running complex, containerized web services than Cloud Run. More importantly, it is unrealistic to assume the same on-premises configurations will work in the cloud without any changes.
C: This approach discards the significant advantage of having pre-built Docker containers. Refactoring the codebase for Cloud Functions would require unnecessary development effort compared to deploying the existing containers to Cloud Run.
D: While selecting Cloud Run is correct, this option is flawed because it incorrectly states that the same on-premises configurations can be used. Migrating from on-premises to the cloud always requires configuration updates.
1. Google Cloud Documentation, "What is Cloud Run?": "Cloud Run is a managed compute platform that lets you run containers directly on top of Google's scalable infrastructure. You can deploy code written in any programming language if you can build a container image from it. In fact, building container images is optional. If you're using Go, Node.js, Python, Java, .NET Core, or Ruby, you can use the source-based deployment option that builds the container for you." This confirms Cloud Run is the ideal platform for existing container images.
2. Google Cloud Documentation, "Choosing a serverless option": This document compares serverless options. It states, "Cloud Run is also a good choice if you want to migrate a containerized application from on-premises or from another cloud." This directly supports the scenario in the question. For Cloud Functions, it is positioned for "event-driven" applications, which is a less direct fit for a set of web microservices than Cloud Run.
3. Google Cloud Documentation, "Deploying container images": When deploying to Cloud Run, the documentation for the gcloud run deploy command shows flags like --set-env-vars for setting environment variables. This confirms that configurations are expected to be set or updated during deployment to the new cloud environment, invalidating the claims in options B and D that on-premises configurations can be used as-is.
4. Google Cloud Documentation, "Cloud Run common use cases": The documentation lists "APIs and microservices" as a primary use case, stating, "Quickly deploy microservices and scale them as needed without having to manage a Kubernetes cluster." This reinforces Cloud Run as the correct choice for a microservices-based application.
Question 15
Show Answer
A. Using HTTP/2 optimizes network transport for loading multiple assets but does not solve the server-side latency caused by provisioning a new container instance.
B. Setting concurrency to 1 would likely increase the frequency of cold starts, as more instances would be needed to handle the same amount of traffic.
C. Setting a maximum number of instances is a cost-control and safety feature to limit scaling; it does not keep instances warm to prevent cold starts.
1. Google Cloud Documentation, "Configuring minimum instances": "To reduce latency and cold starts, you can configure Cloud Run to keep a minimum number of container instances running and ready to serve requests... When a request comes in that needs to be served by a new instance, Cloud Run starts a new instance. This is known as a cold start. Cold starts can result in high latency for the requests that require them."
2. Google Cloud Documentation, "General development tips for Cloud Run", Section: "Minimizing cold starts": "To permanently keep instances warm, use the min-instances setting. If you set the value of min-instances to 1 or greater, a specified number of container instances are kept running and ready to serve requests, which reduces cold starts."
3. Google Cloud Documentation, "Container instance concurrency": "By default, each Cloud Run container instance can receive up to 80 requests at the same time... You may want to limit concurrency for instances where... your code cannot handle parallel requests." (This shows concurrency is not primarily for reducing initial latency).
Question 16
Show Answer
B. Using gcloud CLI to delete the topic is an imperative action that bypasses Config Connector. The controller will detect the drift and recreate the Pub/Sub topic to match the existing Kubernetes manifest.
C. The label deleted-by-cnrm is not a recognized mechanism within Config Connector for triggering resource deletion. Deletion is managed by removing the Kubernetes resource object itself, not by applying a specific label.
D. The managed-by-cnrm label is used by Config Connector to identify resources under its management. Changing this label via gcloud will be reverted by the controller and does not trigger resource deletion.
1. Google Cloud Documentation - Deleting resources with Config Connector: "To delete a resource, delete the resource from your cluster. Config Connector deletes the Google Cloud resource. You can delete a resource with kubectl delete."
Source: Google Cloud Documentation, "Managing your resources with Config Connector", Section: "Deleting resources".
URL: https://cloud.google.com/config-connector/docs/how-to/managing-resources#deletingresources
2. Google Cloud Documentation - Preventing resource deletion: "By default, when you delete a Config Connector resource, Config Connector deletes the corresponding Google Cloud resource. This behavior can be changed with an annotation." This confirms that kubectl delete is the trigger for deletion by default.
Source: Google Cloud Documentation, "Managing your resources with Config Connector", Section: "Preventing resource deletion".
URL: https://cloud.google.com/config-connector/docs/how-to/managing-resources#preventingresourcedeletion
3. Google Cloud Documentation - Config Connector Overview: "Config Connector is a Kubernetes addon that allows you to manage Google Cloud resources through Kubernetes configuration... Config Connector's controller reconciles your cluster's state with Google Cloud, creating, updating, or deleting Google Cloud resources as you apply configurations to your cluster." This explains the reconciliation behavior that makes option B incorrect.
Source: Google Cloud Documentation, "Config Connector overview".
URL: https://cloud.google.com/config-connector/docs/overview
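A minimal sketch, assuming a hypothetical topic resource and namespace, of the deletion path Config Connector expects:

# Deleting the Kubernetes object is what makes Config Connector delete the
# underlying Pub/Sub topic.
kubectl delete pubsubtopic my-topic -n my-namespace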
Question 17
Show Answer
B. Deploy a public autopilot cluster.
A public cluster would assign external IP addresses to nodes, violating the requirement that nodes cannot be accessed from the internet.
C. Deploy a standard public cluster and enable shielded nodes.
This option is incorrect on two counts: a public cluster violates the network access requirement, and a standard cluster increases operational cost compared to Autopilot.
D. Deploy a standard private cluster and enable shielded nodes.
While this meets the security and networking requirements, a standard cluster requires manual node pool management, which does not reduce operational cost as effectively as Autopilot.
1. Google Cloud Documentation: GKE Autopilot overview: "Autopilot is a mode of operation in Google Kubernetes Engine (GKE) in which Google manages your cluster configuration... Autopilot clusters enable many GKE security features and best practices by default... Autopilot clusters always use Shielded GKE Nodes." This supports the reduced operational cost and verifiable integrity requirements.
2. Google Cloud Documentation: About private clusters: "In a private cluster, nodes only have internal IP addresses, which means that the nodes and Pods are isolated from the internet by default." This supports the requirement that nodes cannot be accessed from the internet.
3. Google Cloud Documentation: Shielded GKE Nodes: "Shielded GKE Nodes provide strong, verifiable node identity and integrity to increase the security of your Google Kubernetes Engine (GKE) nodes..." This directly addresses the "verifiable node identity and integrity" requirement.
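A hedged sketch (cluster name and region are placeholders) of creating the private Autopilot cluster; per the documentation above, Shielded GKE Nodes are enabled by default in Autopilot:

# Create a private Autopilot cluster whose nodes have internal IPs only.
gcloud container clusters create-auto my-cluster \
    --region=us-central1 --enable-private-nodes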
Question 18
Show Answer
A. This role is too minimal. While it contains the exact list permissions, it lacks ancillary permissions (e.g., resourcemanager.projects.get) that are often required to get project context, making the role non-functional in many tools.
C. The Compute Storage Admin role is excessively permissive. It grants full administrative control, including creation and deletion of disks and images, which strongly violates the principle of least privilege.
D. Starting with a highly privileged role like Compute Storage Admin and removing permissions is not a recommended practice. It is complex, error-prone, and increases the risk of unintentionally granting excessive access.
1. Google Cloud IAM Documentation, "Best practices for using IAM": This document emphasizes the principle of least privilege. It states, "When you grant roles, grant the least permissive roles that are needed." Option B follows this by starting with a limited role and adding only what is necessary.
Source: Google Cloud Documentation > IAM > Best practices for using IAM > "Use the principle of least privilege".
2. Google Cloud IAM Documentation, "Creating and managing custom roles": This guide explains the process of creating custom roles. It mentions, "To create a custom role, you can combine one or more of the available IAM permissions. ... Alternatively, you can copy an existing role and edit its permissions." This validates the method described in option B.
Source: Google Cloud Documentation > IAM > Roles and permissions > Creating and managing custom roles.
3. Google Cloud IAM Documentation, "IAM predefined roles": This reference details the permissions included in predefined roles. The Compute Image User role (roles/compute.imageUser) includes compute.images.list and compute.projects.get, confirming it's a suitable base. The Compute Storage Admin role (roles/compute.storageAdmin) includes permissions like compute.disks.create, confirming it is too permissive for the task.
Source: Google Cloud Documentation > IAM > Roles and permissions > Predefined roles > "Compute Engine roles".
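A sketch of the copy-then-extend approach described in option B, with a placeholder custom role ID, project, and an example permission to add:

# Copy the predefined Compute Image User role into a custom role.
gcloud iam roles copy --source="roles/compute.imageUser" \
    --destination=imageLister --dest-project=my-project
# Add any extra permission the team needs (illustrative example).
gcloud iam roles update imageLister --project=my-project \
    --add-permissions=compute.images.getFromFamily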
Question 19
Show Answer
A. Using Compute Engine for the background job violates the core requirement to use serverless solutions and introduces unnecessary operational overhead and potentially higher costs compared to a pay-per-use service.
C. Cloud Storage is designed to host static websites (HTML, CSS, JavaScript). A Flask application is dynamic and requires a server-side processing environment, which Cloud Storage does not provide.
D. This option is incorrect for two reasons: it improperly suggests Cloud Storage for a dynamic Flask application and uses the non-serverless Compute Engine for the background job.
1. Google Cloud Documentation, "Choosing a computing option": This guide compares compute services. It recommends App Engine for "web applications" and Cloud Run for "web services and APIs." This supports using App Engine for the Flask app and Cloud Run for the API. (See the comparison table under the "Serverless" section).
2. Google Cloud Documentation, "App Engine, Python 3 runtime environment": This page explicitly mentions that the Python runtime is designed to run web servers and supports common frameworks like Flask, confirming its suitability for the web application. (See the "Web frameworks" section).
3. Google Cloud Documentation, "Cloud Run, Request timeouts": "For Cloud Run services, the maximum request timeout is 60 minutes." This documentation confirms that Cloud Run can handle the "long-running background job" requirement within a serverless model, making it a superior choice to a persistent VM. (See the "Setting and updating request timeouts" section).
4. Google Cloud Documentation, "Hosting a static website": "For a dynamic website, consider a compute option to host your site's server-side logic, such as Cloud Run or App Engine." This source directly contrasts static hosting on Cloud Storage with dynamic hosting on services like App Engine, invalidating options C and D.
Question 20
Show Answer
B. Storage Transfer Service is used for bulk data transfers from other sources into Cloud Storage, not for processing real-time data streams from Pub/Sub.
C. Dataflow is a processing engine, not an ingestion service. While it can receive data, Pub/Sub should be used as the ingestion buffer. Storage Transfer Service is also used incorrectly here.
D. Dataprep is an interactive tool for visual data preparation, not for automated, high-throughput stream processing. Bigtable is a NoSQL database, whereas Cloud Storage is the standard for a data lake.
1. Google Cloud Documentation, "IoT architecture overview": This document outlines common IoT architectures. The "Data processing and analytics" section describes the pattern of using Pub/Sub for ingestion, followed by Dataflow for processing, and then loading data into storage services like Cloud Storage or BigQuery. This directly supports the Pub/Sub -> Dataflow -> Storage flow.
2. Google Cloud Documentation, "Build a data lake on Google Cloud": This guide explicitly states, "Cloud Storage is a scalable, fully-managed, and cost-effective object store for all of your data, including unstructured data. For this reason, Cloud Storage is the ideal service to use as the central storage repository for your data lake." This confirms Cloud Storage as the correct destination.
3. Google Cloud Documentation, "Dataflow overview": The documentation highlights Dataflow's capability to "process data from a variety of sources, such as Pub/Sub and Cloud Storage." It positions Dataflow as the processing layer between ingestion (Pub/Sub) and storage.
4. Google Cloud Documentation, "Storage Transfer Service overview": This page clarifies the service's purpose: "Storage Transfer Service lets you move large amounts of data...to a Cloud Storage bucket." This confirms it is not a tool for processing streaming data from Pub/Sub.
Question 21
Show Answer
A. Using gcloud config set stores the password in plain text in the user's configuration file, which is insecure and precisely what the user wants to avoid.
B. This is incorrect because sha256 is a one-way hashing algorithm, not an encoding for authentication, and the core/custom_ca_certs_file property is for specifying custom SSL certificates, not proxy credentials.
C. The environment variable names are incorrect (CLOUDSDK_USERNAME vs. CLOUDSDK_PROXY_USERNAME), and placing credentials directly in a configuration file is an insecure practice.
1. Google Cloud SDK Documentation, gcloud topic configurations: Under the "Available Properties" section for proxy/password, the documentation states: "For security reasons, it is recommended to use the CLOUDSDK_PROXY_PASSWORD environment variable to set the proxy password instead of this property." This directly supports using environment variables for security.
2. Google Cloud SDK Documentation, gcloud config set: The documentation for this command includes a note: "The values of properties that you set with gcloud config set are stored in your user config file... For this reason, you should not use gcloud config set to set properties that contain secrets, such as proxy passwords." This explicitly advises against the method described in option A.
3. Google Cloud SDK Documentation, Installing the Google Cloud CLI, "Configuring the gcloud CLI" section: This section details the use of environment variables for configuration, mentioning that they override values set in configuration files. The CLOUDSDK_PROXY_PASSWORD variable is listed as the method for providing a proxy password.
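A sketch of the split between non-secret properties and secret environment variables (proxy host, port, and credentials are placeholders):

# Non-secret proxy settings can live in the gcloud configuration file.
gcloud config set proxy/type http
gcloud config set proxy/address proxy.example.com
gcloud config set proxy/port 3128
# Credentials are supplied via environment variables so they are not stored
# in the configuration file.
export CLOUDSDK_PROXY_USERNAME=my-user
export CLOUDSDK_PROXY_PASSWORD=my-password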
Question 22
Show Answer
A. This approach fails to optimize costs because it exclusively uses more expensive standard VMs. A Kubernetes label alone does not enable the use of Spot VMs.
B. This configuration jeopardizes the availability of critical applications by running them on a node pool composed entirely of Spot VMs, which can be preempted at any time.
C. This option incorrectly reverses the deployment logic. It places critical deployments on unstable Spot VMs and fault-tolerant deployments on expensive standard VMs, failing both availability and cost-optimization goals.
1. Google Cloud Documentation, GKE Concepts, "Spot VMs": "Spot VMs are a good fit for running stateless, batch, or fault-tolerant workloads. They are not recommended for workloads that are not fault-tolerant, such as stateful applications." This directly supports placing fault-tolerant workloads on Spot VMs and implies critical workloads should be elsewhere.
2. Google Cloud Documentation, GKE Concepts, "About node pools": "Node pools are a subset of nodes within a cluster that all have the same configuration... For example, you might create a node pool in your cluster with Spot VMs for running fault-tolerant workloads and another node pool with standard VMs for workloads that require higher availability." This describes the exact architecture proposed in the correct answer.
3. Google Cloud Documentation, GKE How-to guides, "Separate workloads in GKE": This guide explains the use of node taints and tolerations to control scheduling. To implement the solution, you would add a taint to the Spot VM node pool (e.g., cloud.google.com/gke-spot=true:NoSchedule) and add a corresponding toleration only to the fault-tolerant deployments, ensuring they are scheduled there while critical workloads are placed on the standard node pool.
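A sketch of the Spot node pool creation described above, using placeholder cluster and pool names; the fault-tolerant deployments would then carry the matching toleration while critical deployments land on the standard pool:

# Create a Spot VM node pool and taint it so only workloads that tolerate the
# taint are scheduled onto it.
gcloud container node-pools create spot-pool --cluster=my-cluster \
    --region=us-central1 --spot \
    --node-taints=cloud.google.com/gke-spot=true:NoSchedule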
Question 23
Show Answer
B. Google Kubernetes Engine (GKE) is a container orchestration service, not a serverless platform in its standard mode. It requires management of the underlying cluster infrastructure.
C. While Cloud Functions is serverless, managing traffic by changing the function's name is not a supported or effective method for splitting production traffic between versions.
D. In App Engine, traffic is split between different versions within a single service. Creating a new service for each new version is an incorrect approach for canary testing.
1. Google Cloud Documentation - Cloud Run, "Rollbacks, gradual rollouts, and traffic migration": "Gradual rollouts (also sometimes called canary deployments) are a feature that allow you to slowly migrate traffic to a new revision... Cloud Run allows you to split traffic between multiple revisions." This document explicitly details the feature requested in the question.
2. Google Cloud Documentation - "Choosing a serverless option": This page categorizes Cloud Run, Cloud Functions, and App Engine as Google's core serverless compute platforms, which confirms that GKE (Option B) is not the intended serverless solution for this question.
3. Google Cloud Documentation - App Engine, "Splitting Traffic": "After you deploy two or more versions to a service, you can split traffic between them... For example, you can route 5% of traffic to a new version to test it in a production environment." This confirms that traffic splitting in App Engine happens between versions, not by creating new services (Option D).
Question 24
Show Answer
A. The Ops Agent is used for collecting telemetry (metrics and logs) for Cloud Monitoring and Cloud Logging, not for generating OS vulnerability reports.
B. The Ops Agent is the incorrect agent for vulnerability management. The OS Config agent is required for the VM Manager service to function.
D. While the OS Config agent is correct, routing this data via a log sink to BigQuery is an unnecessarily complex and non-standard method for viewing vulnerability reports.
1. Google Cloud Documentation, VM Manager Overview: "VM Manager is a suite of tools that can be used to manage operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine. VM Manager helps drive efficiency by automating and reducing the manual effort of otherwise time-consuming tasks... To use VM Manager, you need to set up VM Manager, which includes installing the OS Config agent". This confirms the OS Config agent is the correct component.
2. Google Cloud Documentation, Viewing vulnerability reports: "To view vulnerability reports for a project, you must have the osconfig.vulnerabilityReports.get or osconfig.vulnerabilityReports.list IAM permission." This section details the process which relies on the OS Config agent and appropriate permissions.
3. Google Cloud Documentation, OS Config, Control access: "Vulnerability Report Viewer (roles/osconfig.vulnerabilityViewer): Provides read-only access to vulnerability report data." This document explicitly defines the role and its purpose, matching the requirements of the question.
Question 25
Show Answer
A. Dataflow requires writing code using the Apache Beam SDK, which violates a core requirement. Additionally, gcloud storage is not the recommended tool for a 200 TB migration due to potential issues with network reliability and performance over long periods.
C. While Storage Transfer Service can be used for on-premises transfers, Transfer Appliance (Option B) is the more suitable and commonly recommended solution for a 200 TB dataset, as an online transfer of this magnitude is often impractical without a dedicated high-speed connection.
D. Cloud Data Fusion is an ETL/ELT data integration service, not a primary tool for bulk file migration. Dataflow violates the no-code requirement.
1. Transfer Appliance for 200 TB: Google Cloud's official documentation on data transfer options explicitly recommends Transfer Appliance for transfers from on-premises locations involving more than 20 TB of data, especially when the transfer would take more than one week online.
Source: Google Cloud Documentation, "Choosing a transfer option", section "Data from an on-premises location".
2. BigQuery Data Transfer Service for Redshift: The official documentation identifies this service as the primary method for migrating from Amazon Redshift.
Source: Google Cloud Documentation, "BigQuery Data Transfer Service", section "Amazon Redshift transfers".
3. Storage Transfer Service for S3: The documentation recommends Storage Transfer Service for large-scale data transfers from other cloud providers like Amazon S3.
Source: Google Cloud Documentation, "Storage Transfer Service", section "Overview".
4. Dataflow vs. No-Code: The Dataflow documentation clearly states that development involves using the Apache Beam SDKs, which is a coding activity.
Source: Google Cloud Documentation, "Dataflow", section "Develop".
Question 26
Show Answer
A. While you can recreate an instance from the Google Cloud console, the terminology "Replace VMs" is imprecise. The correct action in the console for a specific instance is labeled "Recreate," making this option less accurate than option B.
C. The gcloud compute instances update command is used for standalone VM instances, not for instances that are part of a managed instance group. Attempting to modify a MIG-managed instance directly is incorrect.
D. Updating and applying the instance template of the MIG initiates a rolling update across the entire group. This is a much larger operation intended for deploying new configurations, not for quickly fixing a single failed VM.
1. Official Google Cloud Documentation - Manually recreating instances in a MIG: "You can selectively recreate one or more instances in a managed instance group (MIG). Recreating an instance deletes the existing instance and creates a new one with the same name... Use the gcloud compute instance-groups managed recreate-instances command." This document explicitly states that recreate-instances is the command for this task.
2. Official Google Cloud SDK Documentation - gcloud compute instance-groups managed recreate-instances: The command reference details its usage: "schedules a recreation of one or more virtual machine instances in a managed instance group." This confirms it is the specific tool for the described scenario.
3. Official Google Cloud Documentation - Applying new configurations to VMs in a MIG: "To apply a new configuration to existing VMs in a MIG, you can... set up a rolling update." This shows that updating the template (Option D) is for configuration changes, not for fixing a single failed instance.
4. Official Google Cloud SDK Documentation - gcloud compute instances update: The documentation for this command focuses on updating metadata, machine type, and other properties of a standalone instance, confirming it is not the correct tool for managing instances within a MIG.
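A sketch of the single-instance fix (MIG, instance, and zone names are placeholders):

# Recreate only the failed VM; the MIG rebuilds it from the current template.
gcloud compute instance-groups managed recreate-instances my-mig \
    --instances=my-mig-abcd --zone=us-central1-a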
Question 27
Show Answer
A. The rapid release channel is not recommended for business-critical production workloads as it may contain bugs and receives less testing, prioritizing new features over stability.
C. A zonal cluster is a single point of failure. If its zone experiences an outage, the entire cluster becomes unavailable, which is unacceptable for a business-critical application.
D. While a regional cluster is correct for reliability, the rapid release channel is inappropriate for a production environment that must be optimized for stability.
1. GKE Autopilot Overview: "Autopilot clusters are regional, which means the control plane and nodes are spread across multiple zones in a region. This provides higher availability than zonal clusters."
Source: Google Cloud Documentation, "Autopilot overview".
2. Release Channels: "The Stable channel...is recommended for production clusters that require the highest level of stability...Updates on this channel have passed all internal Google Cloud testing and have been qualified for production."
Source: Google Cloud Documentation, "Release channels".
3. Cluster Availability (Regional vs. Zonal): "Regional clusters increase the availability of your applications by replicating the control plane and nodes across multiple zones in a region...For production workloads, we recommend regional clusters."
Source: Google Cloud Documentation, "Regional clusters".
4. GKE Best Practices: "Use release channels to balance stability and features...For production clusters, we recommend the Stable or Regular channel." and "For production workloads, use regional clusters for higher availability."
Source: Google Cloud Documentation, "Best practices for running cost-optimized Kubernetes applications on GKE", sections "Use release channels to balance stability and features" and "Use regional clusters for higher availability".
Question 28
Show Answer
A. Copying a custom role to every project is a manual, repetitive task that does not scale and violates the "minimal effort" principle. It creates management overhead for each new project.
B. The predefined Compute Admin role only grants permissions for Compute Engine. It does not include the required permissions for Cloud Functions or Cloud SQL, thus failing to meet the policy.
C. Assigning multiple roles to the group for each project is inefficient. This requires repetitive configuration for every project and is prone to inconsistencies, contradicting the "minimal effort" requirement.
1. Organization-level Custom Roles: Google Cloud documentation states, "If you want to create a custom role that can be used to grant access to resources in any project in your organization, you can create the role at the organization level." This supports creating the role once for all projects.
Source: Google Cloud Documentation, "IAM custom roles," section "Custom role availability."
2. Using Groups for Role Management: Google Cloud's best practices recommend using groups to manage principals. "We recommend that you grant roles to groups instead of individual users... you can adjust group membership, and the policy bindings update automatically. This practice makes policy management simpler and less error-prone..."
Source: Google Cloud Documentation, "Policy Troubleshooter overview," section "Best practices for using Identity and Access Management."
3. IAM Inheritance: Policies are inherited down the resource hierarchy. "When you set a policy at a high level in the resource hierarchy, such as the organization or folder level, the access grant is inherited by all resources under it." Assigning the role to the group at the organization level ensures it applies to all projects.
Source: Google Cloud Documentation, "Understanding the resource hierarchy," section "IAM policy inheritance."
4. Predefined Role Limitations: The official documentation for the Compute Admin role (roles/compute.admin) lists permissions related only to Compute Engine resources, confirming it does not grant access to Cloud Functions or Cloud SQL.
Source: Google Cloud Documentation, "IAM basic and predefined roles reference," section "Compute Engine roles."
Question 29
You used the gcloud container clusters command to create two Google Kubernetes Engine (GKE) clusters: prod-cluster and dev-cluster. prod-cluster is a Standard cluster, and dev-cluster is an Autopilot cluster. When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?
Show Answer
A: The gcloud config set commands only modify the default properties for the gcloud tool itself; they do not alter the active kubectl context.
B: This command sets the default cluster for subsequent gcloud container commands but does not update the kubeconfig file used by kubectl.
D: This command is syntactically incomplete. gcloud container clusters get-credentials requires a location flag (--zone or --region) unless a default is already set in the gcloud configuration.
1. Google Cloud Documentation, "Configuring cluster access for kubectl": This guide explicitly states, "To configure kubectl to point to a GKE cluster, use the gcloud container clusters get-credentials command." It provides the command syntax, which includes the cluster name and a location flag.
Source: Google Cloud, GKE Documentation, How-to guides, "Configuring cluster access for kubectl". Section: "Generate a kubeconfig entry".
2. Google Cloud SDK Documentation, gcloud container clusters get-credentials: The official reference for the command confirms its function: "gcloud container clusters get-credentials updates a kubeconfig file with credentials and endpoint information for a cluster in GKE." The documentation also lists CLUSTER_NAME and a location flag (--region or --zone) as required arguments.
Source: Google Cloud SDK Documentation, gcloud container clusters get-credentials.
3. Google Cloud SDK Documentation, gcloud config set: This documentation clarifies that gcloud config set is used to "Set a property in your active configuration" for the gcloud command-line tool, such as compute/zone or core/project. It does not mention any interaction with or modification of the kubectl configuration.
Source: Google Cloud SDK Documentation, gcloud config set.
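As a sketch of the correct approach, assuming dev-cluster is in the us-central1 region (the region is a placeholder; Autopilot clusters are regional):

# Update the kubeconfig file so kubectl points at dev-cluster
gcloud container clusters get-credentials dev-cluster --region=us-central1

# kubectl now uses the dev-cluster context, so its nodes are listed
kubectl get nodes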
Question 30
Show Answer
A: Cloud Monitoring is for performance metrics and infrastructure health, not for detailed audit logging of user and data activities. It cannot capture the specifics of read/write operations as required.
C: Bigtable is a fully managed Google Cloud service. Users do not have access to the underlying infrastructure, making it impossible to install an Ops Agent or any other software on the nodes.
D: This option is incomplete. Enabling only Admin Write logs fails to capture the required Data Read, Data Write, and Admin Read operations, which are critical for auditing access to PII data.
1. Cloud Audit Logs Overview: The official documentation specifies the different types of audit logs. "Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data." This directly maps to the question's requirement.
Source: Google Cloud Documentation, "Cloud Audit Logs overview", Section: "Types of audit logs".
2. Configuring Data Access Audit Logs: This guide details the process of enabling Data Access audit logs, which are typically disabled by default. The steps involve using the Google Cloud console's IAM & Admin > Audit Logs page to enable the required log types (Data Read, Data Write) for the specific service (Bigtable).
Source: Google Cloud Documentation, "Configure Data Access audit logs", Section: "Configuring with the Google Cloud console".
3. Exporting Logs with Sinks: The documentation outlines sinks as the primary method for routing log entries to supported destinations, including Pub/Sub. "To export log entries from Logging, you create a sink... The sink includes a destination and a filter that selects the log entries to export." This supports using a sink to send logs to a Pub/Sub topic for SIEM integration.
Source: Google Cloud Documentation, "Overview of log exports", Section: "Sinks".
4. Bigtable as a Managed Service: The Bigtable product overview describes it as a "fully managed, scalable NoSQL database service." This classification as "fully managed" implies that Google handles the underlying infrastructure, and users cannot install agents or access the host machines.
Source: Google Cloud Documentation, "Cloud Bigtable overview".
Question 31
Show Answer
A. Cloud Functions would require refactoring the binaries into a function, which contradicts the "minimal effort" requirement. Functions are better suited for event-driven, lighter-weight code, not long-running binary processes.
C. Google Kubernetes Engine (GKE) introduces significant operational overhead and cost. You would need to manage a cluster and pay for the underlying nodes, which is not cost-effective for a job running only 45 minutes daily.
D. A Compute Engine VM is a viable "lift and shift" option, but it is less cost-effective. Even with an instance schedule, you incur costs for the persistent disk 24/7, making it more expensive than Cloud Run's per-second billing model.
1. Google Cloud Documentation - Cloud Run Jobs: "Jobs are used to run code that performs a set of tasks and then quits when the work is done. This is in contrast to services, which run continuously to respond to web requests. Jobs are ideal for tasks like database migrations, or other batch processing jobs." This directly aligns with the use case in the question.
Source: Google Cloud Documentation, "What is Cloud Run", Section: "Services and jobs".
2. Google Cloud Documentation - Choosing a compute option: This guide helps select the right service. For containerized applications that are batch jobs, Cloud Run is recommended for its simplicity and cost-effectiveness over GKE, which is suited for complex microservices orchestration.
Source: Google Cloud Architecture Center, "Choosing the right compute option: a guide to Google Cloud products".
3. Google Cloud Documentation - Cloud Run Pricing: "Cloud Run jobs pricing is based on the resources your job uses... You are billed for the CPU and memory allocated to your job's tasks, with per-second granularity." This supports the "minimal cost" argument.
Source: Google Cloud Documentation, "Cloud Run pricing", Section: "Jobs pricing".
4. Google Cloud Documentation - Compute Engine Pricing: "Persistent disk storage is charged for the amount of provisioned space for each VM... even if they are stopped." This confirms that option D would incur persistent costs, making it more expensive than option B.
Source: Google Cloud Documentation, "Compute Engine pricing", Section: "Persistent Disk".
Question 32
Show Answer
A. The Google Cloud console's "Create role from selection" feature is used for creating a new custom role within the same project or organization; it does not support copying roles to a different organization.
B. The source of the custom role is your original organization, not the startup's. Furthermore, the console UI is not the correct tool for copying roles between organizations.
D. Copying the role to every individual project is highly inefficient and creates a significant management burden. This approach is not scalable, as the role would need to be copied again for any new projects.
1. Google Cloud SDK Documentation, gcloud iam roles copy: The official documentation for the command explicitly provides flags for specifying a source and destination organization. The command gcloud iam roles copy --source-organization=SOURCE_ORG_ID --dest-organization=DEST_ORG_ID is the direct method for this task.
Source: Google Cloud SDK Documentation, gcloud iam roles copy, Command Flags section. https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
2. Google Cloud IAM Documentation, "Creating and managing custom roles": This document explains the scope of custom roles. It states, "If you want the custom role to be available for any project in an organization, create the role at the organization level." Copying the role to the destination organization (Option C) achieves this, whereas copying to individual projects (Option D) does not follow this best practice.
Source: Google Cloud IAM Documentation, "Creating and managing custom roles", section "Custom role availability". https://cloud.google.com/iam/docs/creating-custom-roles#custom-role-availability
3. Google Cloud IAM Documentation, "Copying an existing role": This section details the process for duplicating roles and explicitly recommends using the gcloud iam roles copy command for copying a role to another project or organization.
Source: Google Cloud IAM Documentation, "Creating and managing custom roles", section "Copying an existing role". https://cloud.google.com/iam/docs/creating-custom-roles#copying-role
Question 33
Show Answer
B: This solution is overly complex. It requires provisioning and managing a Compute Engine instance, installing and configuring a third-party stack (Kibana), and setting up a multi-service pipeline. This contradicts the requirement for a simple solution.
C: Firewall Rules Logging records network traffic connections that are allowed or denied by firewall rules. It does not log the creation, modification, or deletion of the firewall rules themselves, nor does it address instance creation.
D: This approach is designed for long-term storage and periodic, batch analysis, not for real-time monitoring and alerting. While powerful for forensics, it does not provide the immediate insight needed to respond to ongoing security events.
1. Google Cloud Documentation, Cloud Logging, "Overview of log-based metrics": "Log-based metrics are Cloud Monitoring metrics that are derived from the content of log entries. ... You can use log-based metrics to create charts and alerting policies in Cloud Monitoring." This document confirms the core mechanism of option A.
2. Google Cloud Documentation, Cloud Monitoring, "Alerting on log-based metrics": "You can create an alerting policy that notifies you when a log-based metric meets a specified condition." This directly supports the alerting aspect of option A as the simplest method.
3. Google Cloud Documentation, Cloud Audit Logs, "Compute Engine audit logging information": This page lists the audited methods for the Compute Engine API. It shows that actions like v1.compute.firewalls.insert and v1.compute.instances.insert are captured as Admin Activity audit logs, which are the source data for the solution in option A.
4. Google Cloud Documentation, VPC, "Firewall Rules Logging overview": "Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. For example, you can determine if a firewall rule designed to deny traffic is functioning as intended." This clarifies that this feature logs traffic connections, not administrative changes to the rules themselves, making option C incorrect.
5. Google Cloud Documentation, Cloud Logging, "Overview of routing and storage": This document describes sink destinations. It positions BigQuery for "big data analysis" and Cloud Storage for "long-term, cost-effective storage," highlighting that these are not the primary tools for simple, real-time alerting, which makes option D less suitable than A.
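As an illustration of option A, a log-based metric can count the relevant Admin Activity entries; the metric name and filter below are placeholders that assume the compute.firewalls.insert and compute.instances.insert methods cited in reference 3. An alerting policy on this metric is then configured in Cloud Monitoring (not shown).

# Log-based metric counting firewall rule and VM instance creation events
gcloud logging metrics create admin-activity-changes \
  --description="Firewall rule and VM instance creation events" \
  --log-filter='log_id("cloudaudit.googleapis.com/activity") AND (protoPayload.methodName:"compute.firewalls.insert" OR protoPayload.methodName:"compute.instances.insert")'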
Question 34
Show Answer
A. This describes using the Policy Analyzer. While it can determine effective permissions, the primary and most direct step to validate assigned roles is to view the IAM policy itself.
B. Audit logs confirm that permission errors have occurred, but they don't show the current role configuration. They are for post-event analysis, not for validating the current state of permissions.
C. Organization policies apply constraints on resource configurations (e.g., restricting locations), but they do not grant permissions. The issue described is a lack of permissions, which is managed by IAM roles.
1. Google Cloud Documentation, IAM - View access to a project, folder, or organization: "You can get a list of all IAM principals... who have IAM roles for a project... The list includes principals who have been granted roles on the project directly, and principals who have inherited roles from a folder or organization." This directly supports option D as the standard procedure for checking roles and inheritance.
2. Google Cloud Documentation, IAM - Policy inheritance: "When you grant a role to a user at a level in the resource hierarchy, they inherit the permissions from that role for all resources under that level... For example, if you grant a user the Project Editor role at the organization level, they can edit any project in the organization." This explains the inheritance concept mentioned in option D.
3. Google Cloud Documentation, Policy Intelligence - Policy Analyzer overview: "Policy Analyzer lets you find out what principals have access to which Google Cloud resources." This describes the functionality in option A, which is a powerful but secondary tool to directly viewing the IAM policy for role validation.
4. Google Cloud Documentation, Cloud Audit Logs - Overview: "Cloud Audit Logs maintains the following audit logs for each project, folder, and organization: Admin Activity audit logs, Data Access audit logs, System Event audit logs, and Policy Denied audit logs." This confirms that logs are for recording events, which is the focus of option B.
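For reference, viewing the IAM policy at both the project and organization levels can be done from the command line as well as the console; PROJECT_ID and ORG_ID below are placeholders.

# List the roles and members bound directly on the project
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role, bindings.members)"

# Repeat at the organization level to see roles the project inherits
gcloud organizations get-iam-policy ORG_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role, bindings.members)"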
Question 35
Show Answer
A. Using a human's account for an automated process is an anti-pattern. It breaks automation and creates security risks associated with user credentials being used programmatically.
C. Attaching a service account with all required permissions directly to the instance violates the principle of least privilege, making the instance a high-value target for attackers.
D. Creating and managing service account key files is strongly discouraged. It introduces the risk of key leakage and adds the overhead of key rotation and management.
1. Google Cloud Documentation, "Service account impersonation": "Impersonation lets a service account or user act as another service account... When you use impersonation, you give a principal (the one doing the impersonating) a limited and temporary permission to act as a service account with more privilege." This directly supports the mechanism described in option B.
2. Google Cloud Documentation, "Best practices for using service accounts", Section: "Use service account impersonation for temporary, elevated access": "By using impersonation, you can avoid elevating the principal's permissions permanently. This helps you enforce the principle of least privilege." This explicitly recommends the pattern in option B.
3. Google Cloud Documentation, "Best practices for working with service accounts", Section: "Avoid using service account keys": "Service account keys are a security risk if they are not managed correctly... We recommend that you use other, more secure ways to authenticate." This directly contradicts the approach in option D.
4. Google Cloud Security Foundations Guide, Page 33, Section: "4.2.2 Principle of least privilege": "Grant roles at the smallest scope possible... Grant roles that provide only the permissions required to perform a task." Option C violates this by granting all permissions upfront, while option B adheres to it by providing elevated permissions only when needed.
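A minimal sketch of the impersonation pattern, using hypothetical service account and user names; the binding grants temporary token-creation rights, and the second command runs with the privileged identity without any key file.

# Allow a principal to impersonate the privileged service account
gcloud iam service-accounts add-iam-policy-binding admin-sa@PROJECT_ID.iam.gserviceaccount.com \
  --member="user:developer@example.com" \
  --role="roles/iam.serviceAccountTokenCreator"

# Run a command as that service account, with no downloaded key
gcloud compute instances list \
  --impersonate-service-account=admin-sa@PROJECT_ID.iam.gserviceaccount.com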