Q: 11
A BigQuery table is used for real-time dashboards and requires a high volume of
small updates and deletes to individual rows. Which BigQuery feature or capability
should be leveraged to ensure efficient and performant data manipulation?
Options
Discussion
B imo
B here. Clustered and partitioned tables make those row-level updates way faster in BigQuery, especially with high DML volume.
Probably B, since if you aren't clustering/partitioning your table, DML on BigQuery gets super slow for row updates.
B - had something like this in a mock exam and clustering plus partitioning made DML on BigQuery way more performant for lots of row-level changes. Others worked, but nowhere near as efficient in practice.
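To make the clustering/partitioning answer above concrete, here's a sketch (dataset, table, and column names are made up) of a table laid out so row-level DML can prune instead of scanning:

```shell
# Hedged sketch: partition by day, cluster on the column that appears in
# UPDATE/DELETE predicates. Names are illustrative.
bq query --use_legacy_sql=false '
CREATE TABLE mydataset.dashboard_events (
  event_id STRING,
  value FLOAT64,
  ts TIMESTAMP)
PARTITION BY DATE(ts)
CLUSTER BY event_id'

# A targeted DELETE now touches one partition (and, within it, only the
# clustered blocks for that event_id) rather than the whole table:
bq query --use_legacy_sql=false '
DELETE FROM mydataset.dashboard_events
WHERE DATE(ts) = "2024-01-15" AND event_id = "abc"'
```

Clustering sorts the storage blocks by `event_id`, so even inside the pruned partition the DELETE reads far fewer blocks.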
Be respectful. No spam.
Q: 12
A script is migrating a Cloud Storage bucket and needs to copy all objects from
`gs://source-bucket/data/` to `gs://destination-bucket/backup/` efficiently.
Which command is best for this task?
Options
Discussion
I don’t think B is right since it doesn’t do efficient syncing. D is better for this.
Saw this style in a recent practice exam, it was D.
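Assuming D is the rsync option people are pointing at, the two approaches compare like this (bucket paths are the ones from the question):

```shell
# -m parallelizes transfers; rsync -r copies only objects that are missing
# or changed at the destination, so reruns are cheap.
gsutil -m rsync -r gs://source-bucket/data/ gs://destination-bucket/backup/

# A plain recursive copy also works, but re-copies everything on each run:
gsutil -m cp -r "gs://source-bucket/data/*" gs://destination-bucket/backup/
```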
Q: 13
A company is building a new application on GKE. The application needs to
dynamically provision and attach high-performance, block-level storage (persistent
disk) to its containers. The storage must be highly available within the region and
automatically managed by GKE. Which component enables this functionality?
Options
Discussion
A. PVs and PVCs with the Compute Engine Persistent Disk CSI driver are the standard way to handle dynamic, block-level, high-performance storage in GKE. The other options either aren’t block storage or need too much manual work. Chime in if you see something I missed.
D tbh. If the question is about just attaching high-perf persistent disk, manual setup with kubectl and mounting PDs should work for flexibility. I know it isn't as automatic as PV/PVC but you still get block storage and can control how it's attached. Maybe I'm missing something about automation here though, open to corrections.
A
B not A. I thought Cloud Storage Fuse would work since you can mount storage in containers, but it's not actually block-level and won't give the high-performance disk access that's needed for this kind of workload. Pretty sure B is OK if performance isn't critical, but not for fast persistent disk use. Jump in if you disagree.
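For anyone who wants the PV/PVC answer spelled out, a sketch of dynamic provisioning with the Persistent Disk CSI driver (the StorageClass and claim names are illustrative; GKE also ships built-in storage classes, and the regional-PD replication type is what keeps the disk available across zones in the region):

```shell
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-pd-ssd        # illustrative name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: regional-pd-ssd
  resources:
    requests:
      storage: 100Gi
EOF
```

Pods that reference the claim get a regional SSD persistent disk provisioned and attached automatically; no manual `gcloud compute disks create` step.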
Q: 14
A manufacturing company needs to collect time-series operational data from
50,000 factory floor sensors. The data is small, high-volume, and constantly
streaming. The data needs to be aggregated and analyzed in real-time. Which two
services should form the core of the ingestion and analysis pipeline?
Options
Discussion
Makes sense to go with D here. Pub/Sub is built for high-throughput streaming ingestion, and Dataflow does real-time aggregation and analytics, which matches exactly what the question asks for. Saw similar logic in some practice tests. Pretty sure that's the combo they'd expect on the exam.
C vs D but I'd probably pick C here, since Bigtable is meant for handling high-volume time-series data and Compute Engine can run custom analysis jobs. Pub/Sub and Dataflow sound good, but not sure they're the best for complex analytics. Open to pushback if someone has used Pub/Sub/Dataflow in a similar factory IoT setup.
Probably B, since Stackdriver handles real-time metrics with low latency. If the question said "periodic" instead of "real-time," then A or C might make sense. Does the customer actually need second-by-second updates or just near real-time?
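If D is Pub/Sub + Dataflow, a minimal sketch of wiring them together with one of the Google-provided streaming templates (project, topic, region, and output table are placeholders):

```shell
# Topic the 50k sensors publish to:
gcloud pubsub topics create sensor-readings

# Streaming Dataflow job from the Google-provided Pub/Sub-to-BigQuery
# template; a custom pipeline would replace this for real aggregation logic.
gcloud dataflow jobs run sensor-aggregation \
  --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
  --region=us-central1 \
  --parameters=inputTopic=projects/my-project/topics/sensor-readings,outputTableSpec=my-project:iot.readings
```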
Q: 15
GlobalTech's data science team needs to read data from a specific BigQuery
dataset (project_ds.customer_analytics) but must not be able to modify or delete
any data. What is the most restrictive and appropriate IAM role to assign to the data
science group?
Options
Discussion
Pretty sure it's C. Had something like this in a mock, and Data Viewer is the most restrictive for read-only access. It doesn't let them edit or delete anything, which fits exactly. Agree?
It's C; official docs and practice tests mention Data Viewer for this exact use case.
Probably C, since Data Viewer gives just read access, no edit or delete rights. That lines up with least privilege and keeps the team from making changes. Makes sense for this scenario unless I missed something.
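For the Data Viewer answer, the grant should be scoped to the single dataset, not the project. A sketch using the dataset's access list (the group address is a placeholder; `READER` at the dataset level corresponds to the Data Viewer role):

```shell
# Dump the dataset definition, add the group, and re-apply it:
bq show --format=prettyjson project_ds:customer_analytics > dataset.json
# Edit dataset.json and append to the "access" array:
#   {"role": "READER", "groupByEmail": "data-science@globaltech.example"}
bq update --source dataset.json project_ds:customer_analytics
```

Granting at the dataset instead of the project keeps the team out of every other dataset, which is the least-privilege part of the question.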
Q: 16
A company is building a new application on Cloud Run. The application must
process data from a Cloud Storage bucket, and the security policy dictates that the
Cloud Run service should *only* be able to access *that single* bucket. The goal is
to enforce the principle of least privilege. What is the most precise configuration?
Options
Discussion
Option A. You need the allow rule first with higher priority (smaller number), then a deny-all at lower priority to catch everything else. That's standard GCP firewall ordering. Unless I've missed something here, A is right.
A tbh. Allow the AD-specific egress traffic first with higher priority (lower number), then deny everything else at a lower priority. Google Cloud firewall rules process lowest numbers first, so the allow has to come before the deny. Pretty sure this is the right approach for least privilege.
Had something like this in a mock. A
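The ordering described above, sketched with gcloud (network name, CIDR, and ports are assumptions; tcp 389/636 stand in for the AD-specific traffic):

```shell
# Lower priority number = evaluated first, so the allow wins for AD traffic.
gcloud compute firewall-rules create allow-ad-egress \
  --network=prod-vpc --direction=EGRESS --priority=100 \
  --destination-ranges=10.10.0.0/24 --allow=tcp:389,tcp:636

# Catch-all deny at a larger (lower-priority) number:
gcloud compute firewall-rules create deny-all-egress \
  --network=prod-vpc --direction=EGRESS --priority=65000 \
  --destination-ranges=0.0.0.0/0 --action=DENY --rules=all
```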
Q: 17
A company is deploying a new web service to GKE and must ensure that all network
traffic between the microservices (Service A calling Service B) remains encrypted
and authenticated, regardless of the underlying network configuration. What is the
Google-recommended approach to achieve this zero-trust networking model?
Options
Discussion
Nah, I think B is better because building internal skills is cheaper long term. Option D looks tempting but consultants don't really help you with future cost optimization the way upskilling does. Anyone disagree?
B imo. Upskill existing staff with structured certs plan is way more cost-effective than just hiring consultants.
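Back on the actual question: the Google-recommended zero-trust pattern for service-to-service traffic in GKE is a service mesh (Anthos Service Mesh / Istio) enforcing mutual TLS. A minimal Istio-style sketch, assuming the mesh is already installed:

```shell
# Mesh-wide strict mTLS: every call between services must be encrypted and
# mutually authenticated, regardless of the underlying network config.
kubectl apply -f - <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
```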
Q: 18
GlobalTech is planning a major application upgrade and requires a deployment
strategy that minimizes downtime and provides an instant rollback capability in
case of critical failure. What is the most appropriate deployment pattern for their
GKE-hosted application?
Options
Discussion
A. Blue/Green is designed for quick rollback with minimal downtime, which matches their requirements.
This looks like one I saw in a practice test last year. B is the way to go.
B is great for gradual rollout but for instant rollback it's definitely A here.
A, not B. Blue/Green gives instant rollback which is exactly what they're asking for.
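For the Blue/Green answer, the "instant rollback" is literally one selector flip. A sketch assuming two Deployments labeled `track=blue` (current) and `track=green` (new) behind one Service:

```shell
# Cut all traffic over to the green (new) deployment:
kubectl patch service my-app \
  -p '{"spec":{"selector":{"app":"my-app","track":"green"}}}'

# Instant rollback is the same patch pointing back at blue:
kubectl patch service my-app \
  -p '{"spec":{"selector":{"app":"my-app","track":"blue"}}}'
```

Because both environments stay running, rollback doesn't wait for pods to start; it's just a routing change.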
Q: 19
GlobalTech is implementing a new service for processing image uploads from
customers. This service is event-driven (triggered by a Cloud Storage upload) and
must scale from zero to handle unpredictable, sporadic spikes in usage, minimizing
operational overhead and cost for idle time. What is the ideal serverless compute
choice?
Options
Discussion
D (encountered exactly similar question in my exam). Organization Policy with constraints/compute.vmExternalIpAccess is the scalable way to do this.
It's D; org policy with vmExternalIpAccess lets you control external IPs across all VPCs. No brainer here.
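For the scenario as written (event-driven, triggered by a Cloud Storage upload, scale to zero for idle time), the usual serverless wiring is a Storage-triggered function. A sketch with placeholder names:

```shell
# 2nd-gen Cloud Function fired on object finalize in the upload bucket;
# scales from zero on demand and costs nothing while idle.
gcloud functions deploy process-upload \
  --gen2 --runtime=python312 --region=us-central1 \
  --source=. --entry-point=process_upload \
  --trigger-bucket=customer-uploads
```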
Q: 20
The GKE cluster needs to access Google Cloud services (e.g., Cloud Storage,
BigQuery) without using external IP addresses or traversing the public internet.
What is the recommended, secure networking configuration for the cluster's
subnet?
Options
Discussion
Option C. That's the standard multi-region DR setup on GCP using managed instance groups and global load balancing; pretty sure that's what Google recommends. If someone thinks D is better, let me know why!
B tbh, since Cloud VPN keeps all traffic private. Might be a trick with option C though.
C for sure
Call it D, since separate projects might give extra isolation for disaster recovery. But maybe that's a trap option?
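The configuration the question itself is after is Private Google Access on the cluster's subnet, so nodes with only internal IPs can still reach Cloud Storage and BigQuery over Google's network. Sketch (subnet and region are placeholders):

```shell
gcloud compute networks subnets update gke-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access
```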