Prepare Smarter for the SAP-C02 Exam with Our Free, Accurate SAP-C02 Exam Questions – Updated for 2025.
At Cert Empire, we are dedicated to providing the latest and most reliable exam questions for students preparing for the Amazon SAP-C02 Exam. To make studying easier, we’ve made sections of our SAP-C02 exam resources free for everyone. You can practice as much as you like with our free SAP-C02 practice tests.
Question 1
Show Answer
A. A standard CloudTrail trail created in the management account will only log API calls for that single account, not for all member accounts in the organization.
B. Creating and managing a separate CloudTrail trail and S3 bucket in each member account maximizes operational overhead, directly contradicting the requirement to minimize it.
D. This option introduces unnecessary complexity with Amazon SNS and an external system. S3 Versioning is a simpler, built-in mechanism to track changes, resulting in lower operational overhead.
1. AWS CloudTrail User Guide, "Creating a trail for an organization": This document states, "You can create a trail in the management account that logs events for all AWS accounts in that organization. This is sometimes called an organization trail." This supports creating a single trail in the management account for minimal overhead.
2. AWS Organizations User Guide, "Enabling AWS CloudTrail in your organization": "When you create an organization trail, a trail with the name that you choose is created in every AWS account that belongs to your organization. This trail logs the activity from each account and delivers the log files to the Amazon S3 bucket that you specify." This confirms the centralized management and logging for all accounts.
3. Amazon S3 User Guide, "Using versioning in S3 buckets": "Versioning is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets." This directly addresses the requirement to track changes.
4. Amazon S3 User Guide, "Configuring MFA delete": "To provide an additional layer of security, you can configure a bucket to require multi-factor authentication (MFA) for any request to permanently delete an object version or change the versioning state of the bucket." This supports the security requirement.
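For illustration, the correct approach can be sketched as boto3-style parameter dictionaries: one organization trail created in the management account, plus versioning with MFA delete on the log bucket. All names below are hypothetical.

```python
import json

LOG_BUCKET = "org-cloudtrail-logs-example"  # hypothetical bucket name

# Parameters for cloudtrail.create_trail: an organization trail created in
# the management account logs events for every member account.
trail_params = {
    "Name": "org-trail",
    "S3BucketName": LOG_BUCKET,
    "IsOrganizationTrail": True,   # one trail covers the whole organization
    "IsMultiRegionTrail": True,
}

# Parameters for s3.put_bucket_versioning: versioning preserves every object
# version so changes to log files can be tracked; MFADelete requires an
# additional factor for permanent deletes (the actual call must also be
# signed with the root user's MFA device).
versioning_params = {
    "Bucket": LOG_BUCKET,
    "VersioningConfiguration": {"Status": "Enabled", "MFADelete": "Enabled"},
}

print(json.dumps(trail_params, indent=2))
```

A single trail plus one versioned bucket is the entire footprint to maintain, which is why this option carries the least operational overhead.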
Question 2
Show Answer
A. This option introduces unnecessary complexity and operational overhead by requiring a multi-step process (NFS -> S3 -> EFS) and custom logic (Lambda function) instead of a single, managed service.
B. AWS Storage Gateway (File Gateway) primarily provides on-premises applications with file-based access to Amazon S3. It does not directly replicate data to EFS, making it an indirect and inefficient solution for this use case.
C. While this option uses DataSync, it directs the data to S3 first, requiring a second step with a Lambda function to move it to EFS. A direct DataSync transfer to EFS is far more operationally efficient.
1. AWS DataSync User Guide, "What is AWS DataSync?": This document explicitly states that DataSync is an online data transfer service that automates moving data between on-premises storage systems (like NFS) and AWS Storage services (like Amazon EFS). It highlights features like end-to-end security and data integrity, which contribute to operational efficiency.
Source: AWS Documentation, https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html, Section: "What is AWS DataSync?".
2. AWS DataSync User Guide, "Creating a location for Amazon EFS": This guide provides instructions for configuring an Amazon EFS file system as a destination location for a DataSync task, confirming the direct transfer capability from a source like NFS.
Source: AWS Documentation, https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html, Introduction section.
3. AWS DataSync User Guide, "Using AWS DataSync with AWS Direct Connect": This section details how to use DataSync over a Direct Connect connection. It recommends using a private virtual interface (VIF) and VPC endpoints for private, secure data transfer, which aligns with the most efficient and secure architecture.
Source: AWS Documentation, https://docs.aws.amazon.com/datasync/latest/userguide/datasync-direct-connect.html, Introduction section.
4. AWS Storage Blog, "Migrating storage with AWS DataSync": This official blog post describes common migration patterns and explicitly mentions the capability of DataSync to copy data between NFS shares and Amazon EFS file systems as a primary use case, reinforcing its suitability and efficiency for this scenario.
Source: AWS Blogs, https://aws.amazon.com/blogs/storage/migrating-storage-with-aws-datasync/, Paragraph 2.
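As a minimal sketch of the direct NFS-to-EFS transfer, the DataSync setup reduces to three boto3-style calls: create an NFS source location, create an EFS destination location, then create a scheduled task between them. All ARNs and hostnames below are hypothetical.

```python
# Parameters for datasync.create_location_nfs (source; hypothetical values).
nfs_location = {
    "ServerHostname": "nfs.on-prem.example.com",
    "Subdirectory": "/exports/data",
    "OnPremConfig": {
        "AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"],
    },
}

# Parameters for datasync.create_location_efs (destination; hypothetical values).
efs_location = {
    "EfsFilesystemArn": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-EXAMPLE",
    "Ec2Config": {
        "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-EXAMPLE",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE"],
    },
}

# Parameters for datasync.create_task: one task moves data NFS -> EFS
# directly on a schedule, with no intermediate S3 bucket or Lambda function.
task = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-nfs",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-efs",
    "Schedule": {"ScheduleExpression": "rate(1 hour)"},
}
```

The scheduled task keeps subsequent transfers incremental, which is what makes the single-service path more operationally efficient than the S3-plus-Lambda alternatives.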
Question 3
Show Answer
B: Lambda reserved concurrency is a feature for guaranteeing execution environments and preventing throttling; it is not a cost-saving mechanism and does not provide a discount on usage.
C: Compute Savings Plans do not apply to Amazon MemoryDB for Redis. MemoryDB has its own pricing model using reserved nodes for discounts, separate from the Savings Plans for compute services.
D: This option is incorrect for two reasons: Compute Savings Plans do not cover MemoryDB cache nodes, and Lambda reserved concurrency is not a cost-saving feature.
1. AWS Documentation - Savings Plans User Guide: "Savings Plans offer a flexible pricing model that provides savings on AWS usage. You can save up to 72 percent on your AWS compute workloads... EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72 percent in exchange for commitment to a specific instance family in a specific Region... Compute Savings Plans provide flexibility and help to reduce your costs by up to 66 percent... This automatically applies to EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy, and also applies to Fargate or Lambda usage." This confirms EC2 Instance SP for highest EC2 savings and Compute SP for Lambda.
2. AWS Documentation - Amazon MemoryDB for Redis Pricing: The official pricing page states, "With MemoryDB reserved nodes, you can save up to 55 percent over On-Demand node prices in exchange for a commitment to a one- or three-year term." This identifies reserved nodes as the correct cost-saving model for MemoryDB.
3. AWS Documentation - Lambda Developer Guide, "Configuring reserved concurrency": "Reserved concurrency creates a pool of requests that only a specific function can use... Reserving concurrency has the following effects... It is not a cost-saving feature." This explicitly states that reserved concurrency is for performance and availability, not for reducing costs.
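To make the pricing distinction concrete, the best-case discounts quoted above can be combined in a short worked example. The monthly On-Demand spend figures are invented purely for illustration; the discount ceilings come from the cited documentation.

```python
# Illustrative monthly On-Demand spend (made-up numbers).
ec2_od, lambda_od, memorydb_od = 10_000.0, 2_000.0, 3_000.0

# Maximum advertised discounts from the cited AWS documentation.
ec2_instance_sp = 0.72  # EC2 Instance Savings Plans: up to 72%
compute_sp = 0.66       # Compute Savings Plans (covers Lambda/Fargate): up to 66%
memorydb_ri = 0.55      # MemoryDB reserved nodes: up to 55%

# Each workload uses the mechanism that actually applies to it.
best_case = (ec2_od * (1 - ec2_instance_sp)
             + lambda_od * (1 - compute_sp)
             + memorydb_od * (1 - memorydb_ri))
print(f"Best-case combined monthly cost: ${best_case:,.2f}")  # -> $4,830.00
```

Note that no single commitment covers all three: EC2 Instance Savings Plans do not apply to Lambda, Compute Savings Plans do not apply to MemoryDB, and reserved concurrency contributes nothing here.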
Question 4
Show Answer
A. Creating new dashboards per region and aggregating in QuickSight is redundant and adds unnecessary operational overhead, as the default S3 Storage Lens dashboard already aggregates data across all regions.
B. This custom solution using Lambda, S3, and Athena requires significant development, deployment, and maintenance effort, which is the opposite of "least operational overhead" compared to a managed service.
D. An event-driven approach with CloudTrail, EventBridge, and Lambda is complex to set up and maintain. It primarily tracks new events, making it less suitable for a comprehensive, periodic overview of all existing objects.
1. AWS Documentation: Amazon S3 User Guide - Amazon S3 Storage Lens.
Section: "What is Amazon S3 Storage Lens?" states, "S3 Storage Lens aggregates your metrics and displays the information in the Dashboards section of the Amazon S3 console."
Section: "S3 Storage Lens dashboards" explains, "S3 Storage Lens provides a default dashboard that is named default-account-dashboard. This dashboard is preconfigured by S3 to help you visualize summarized storage usage and activity trends across your entire account." This confirms it is multi-region by default.
2. AWS Documentation: Amazon S3 User Guide - S3 Storage Lens metrics glossary.
Section: "Data protection metrics" lists UnencryptedObjectCount and TotalObjectCount, which are used to calculate the percentage of unencrypted objects displayed on the dashboard.
Section: "Storage summary metrics" lists BucketCount, confirming this metric is available.
3. AWS Documentation: Amazon S3 User Guide - Using the S3 Storage Lens default dashboard.
This section details that the default dashboard is available at no additional cost and is updated daily, reinforcing the low operational overhead. It states, "The default dashboard is automatically created for you when you first visit the S3 Storage Lens dashboards page in the Amazon S3 console."
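The dashboard's unencrypted-object percentage is derived from the two metrics named in the glossary above. A sketch of that calculation, with invented counts, looks like this:

```python
# Metric names from the cited Storage Lens glossary; the counts are made up.
metrics = {
    "UnencryptedObjectCount": 1_250,
    "TotalObjectCount": 50_000,
    "BucketCount": 42,
}

# The percentage the default dashboard surfaces for data protection.
pct_unencrypted = 100 * metrics["UnencryptedObjectCount"] / metrics["TotalObjectCount"]
print(f"{pct_unencrypted:.1f}% of objects are unencrypted "
      f"across {metrics['BucketCount']} buckets")
```

Because S3 Storage Lens computes this daily across all Regions with no infrastructure to run, it beats every custom pipeline on operational overhead.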
Question 5
Show Answer
B: Amazon FSx for Lustre is a high-performance file system designed for workloads like HPC and machine learning, not for general-purpose NFS applications. EFS is the more appropriate service.
C: This option uses the Amazon EC2 launch type for ECS, which violates the requirement to not provision or manage underlying infrastructure. The user is responsible for the EC2 container instances.
D: This uses the EC2 launch type, which is not serverless. Additionally, EBS Multi-Attach provides shared block storage, not a file system like NFS, and requires a cluster-aware file system to manage access.
1. AWS Fargate Documentation, "What is AWS Fargate?": "AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)." This supports the serverless compute requirement.
2. Amazon ECS Developer Guide, "Amazon EFS volumes": "With Amazon EFS, the storage capacity is elastic... Your Amazon ECS tasks running on both Fargate and Amazon EC2 instances can use EFS. ... To use Amazon EFS volumes with your containers, you must define the volume and mount point in your task definition." This confirms the integration method described in option A.
3. Amazon Elastic File System User Guide, "What is Amazon Elastic File System?": "Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget, elastic file system... It is built to scale on demand to petabytes without disrupting applications... It supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol." This confirms EFS as the correct NFS-compatible storage solution.
4. Amazon FSx for Lustre User Guide, "What is Amazon FSx for Lustre?": "Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. ... The high-performance file system is optimized for workloads such as machine learning, high performance computing (HPC)..." This distinguishes its use case from the general-purpose need in the question.
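The Fargate-plus-EFS integration described in reference 2 comes down to declaring the volume and mount point in the task definition. A hedged sketch, with hypothetical IDs and image names:

```python
# ECS task definition (ecs.register_task_definition shape); IDs hypothetical.
task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],  # serverless: no EC2 instances to manage
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "volumes": [{
        "name": "shared-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-EXAMPLE",       # the shared NFS-compatible file system
            "transitEncryption": "ENABLED",
        },
    }],
    "containerDefinitions": [{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "mountPoints": [{
            "sourceVolume": "shared-data",      # must match the volume name above
            "containerPath": "/mnt/data",
        }],
    }],
}
```

Every task in every Availability Zone mounts the same EFS file system, satisfying the shared-storage requirement without any provisioned infrastructure.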
Question 6
Show Answer
A. AWS DataSync is a data transfer service, not the native S3 replication feature already in use. Introducing it would be an unnecessary architectural change and is not the intended tool for this specific use case.
B. Replication rules are configured on the source bucket. Creating a new destination bucket does not simplify or solve the problem of monitoring a subset of objects from the single source bucket.
C. Amazon S3 Transfer Acceleration speeds up object uploads to an S3 bucket from clients over the public internet, not the replication process between S3 buckets within the AWS network.
1. Amazon S3 Developer Guide - Replicating objects using S3 Replication Time Control (S3 RTC): "S3 Replication Time Control (S3 RTC) helps you meet compliance or business requirements for data replication by providing a predictable replication time. S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes of upload." This document also details how to enable S3 RTC in a replication rule.
2. Amazon S3 Developer Guide - Replication configuration: This section explains how to create replication rules and specifies that a rule can apply to all objects or a subset. "To select a subset of objects, you can specify a key name prefix, one or more object tags, or both in the rule." This supports the use of a prefix-based filter.
3. Amazon S3 Developer Guide - Monitoring replication with Amazon S3 event notifications: "You can use Amazon S3 event notifications to receive notifications for S3 Replication Time Control (S3 RTC) events... For example, you can set up an event notification for the s3:Replication:OperationMissedThreshold event to be notified when an object eligible for S3 RTC replication doesn't replicate in 15 minutes." This confirms the monitoring and alerting capability via EventBridge.
4. Amazon S3 Developer Guide - Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration: "Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket." This clarifies that its purpose is for client-to-bucket transfers, not inter-bucket replication.
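Putting the cited pieces together, the replication rule with a prefix filter and S3 RTC can be sketched as the configuration passed to s3.put_bucket_replication. The role ARN, bucket name, and prefix are hypothetical.

```python
# S3 replication configuration sketch (put_bucket_replication shape).
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # hypothetical
    "Rules": [{
        "ID": "rtc-critical-prefix",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": "critical/"},  # replicate only this object subset
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws:s3:::destination-bucket-example",
            # S3 RTC: 99.99% of new objects replicated within 15 minutes,
            # with replication metrics enabled alongside it.
            "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
            "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
        },
    }],
}
```

With RTC enabled, the s3:Replication:OperationMissedThreshold event can then drive an EventBridge rule that alerts when an eligible object misses the 15-minute window.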
Question 7
Show Answer
B. VM Import/Export is an offline process. It requires exporting the entire 500 TB VM image and then uploading it, which would cause extensive downtime, violating the core requirement.
C. An AWS Snowball device is for offline data transfer. While suitable for large data volumes, it is not the optimal choice for minimizing downtime when a high-bandwidth (10 Gbps) network connection is available for online, incremental replication.
E. AWS Database Migration Service (DMS) migrates the database data, not the entire server VM. This would involve re-platforming to a service like Amazon RDS, which adds complexity and risk compared to a direct lift-and-shift of the existing server using SMS.
1. AWS Server Migration Service (SMS) User Guide: "AWS Server Migration Service (AWS SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations." (Source: AWS Server Migration Service User Guide, "What Is AWS Server Migration Service?")
2. AWS Server Migration Service (SMS) User Guide, "How AWS Server Migration Service Works": "AWS SMS incrementally replicates your server VMs as Amazon Machine Images (AMIs)... The incremental replication transfers only the delta changes to AWS, which results in faster replication times and minimum network bandwidth consumption." This directly supports the minimal downtime requirement.
3. AWS Documentation, "VM Import/Export, What Is VM Import/Export?": "VM Import/Export enables you to easily import virtual machine (VM) images from your existing virtualization environment to Amazon EC2..." The process described is a one-time import of a static image, not a continuous replication of a live server, making it unsuitable for minimal downtime scenarios.
4. AWS Database Migration Service (DMS) User Guide, "What is AWS Database Migration Service?": "AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime..." While DMS minimizes downtime for the database data, it does not migrate the server OS or configuration, making SMS a better fit for a complete server lift-and-shift.
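A quick back-of-the-envelope calculation shows why an online, incremental migration is feasible here: the initial full copy runs while the server stays up, and only the final delta sync needs a cutover window. The 80% link-utilization figure is an assumption for illustration.

```python
data_tb = 500        # total server volume size, decimal terabytes
link_gbps = 10       # Direct-Connect-class bandwidth from the question
utilization = 0.8    # assumed effective throughput (illustrative)

bits = data_tb * 1e12 * 8                       # TB -> bits
seconds = bits / (link_gbps * 1e9 * utilization)
days = seconds / 86_400
print(f"Initial full copy: ~{days:.1f} days at "
      f"{utilization:.0%} of {link_gbps} Gbps")  # roughly 5.8 days
```

Roughly six days of background replication with near-zero downtime compares favorably to shipping a Snowball (days of transit plus import) or an offline VM export of the same 500 TB.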
Question 8
Show Answer
D. Moving backups to a cold tier is a cost and lifecycle management strategy; it does not provide protection against deletion commands from a privileged user.
E. AWS Backup natively uses backup vaults for storage. While these vaults use Amazon S3, you don't configure Backup to write directly to a user-managed S3 bucket with Object Lock; you use the integrated AWS Backup Vault Lock feature.
F. Implementing least privilege for the backup role is a standard security best practice but is insufficient protection against an already compromised privileged user who can alter IAM roles and policies.
1. AWS Backup Developer Guide, "Security in AWS Backup": The section "Resilience" outlines best practices against ransomware, stating: "To protect your backups from inadvertent or malicious activity... we recommend that you copy your backups to accounts that are isolated from your production accounts... You can also use AWS Backup Vault Lock to make your backups immutable." This supports options A and C.
2. AWS Backup Developer Guide, "Protecting backups from manual deletion": This section details AWS Backup Vault Lock. It specifies, "In compliance mode, a vault lock can't be disabled or deleted by any user or by AWS. The retention period can't be shortened." This confirms the immutability provided by option C.
3. AWS Organizations User Guide, "Service control policies (SCPs)": The guide explains, "SCPs are a type of organization policy that you can use to manage permissions in your organization... SCPs offer central control over the maximum available permissions for all accounts in your organization," including restricting privileged users. This supports using an SCP (Option B) as a guardrail.
4. AWS Security Blog, "How to help protect your backups from ransomware with AWS Backup": This article explicitly recommends a three-pronged strategy: "1. Centralize and segregate your backups into a dedicated backup account. 2. Make your backups immutable by using Backup Vault Lock. 3. Secure your backup account with preventative controls [such as SCPs]." This directly validates the combination of A, B, and C.
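The immutability and guardrail controls from the cited strategy can be sketched as two artifacts: the vault-lock parameters (backup.put_backup_vault_lock_configuration shape) and an SCP applied from the management account. Names and retention values are hypothetical.

```python
# Vault lock: after the ChangeableForDays grace period elapses, the lock is
# immutable (compliance mode) and cannot be removed by any user or by AWS.
vault_lock = {
    "BackupVaultName": "central-backup-vault",  # hypothetical vault name
    "MinRetentionDays": 30,
    "MaxRetentionDays": 365,
    "ChangeableForDays": 3,
}

# SCP guardrail: even a compromised privileged user in a member account
# cannot delete recovery points or tamper with the vault lock.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ProtectBackups",
        "Effect": "Deny",
        "Action": [
            "backup:DeleteBackupVault",
            "backup:DeleteRecoveryPoint",
            "backup:PutBackupVaultLockConfiguration",
        ],
        "Resource": "*",
    }],
}
```

Copying backups to an isolated account, locking the vault, and denying destructive actions via SCP are complementary layers; no single one is sufficient on its own.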
Question 9
Show Answer
A. Managing IAM roles and policies individually across hundreds of accounts is not scalable and lacks the strong, centralized enforcement provided by SCPs.
B. Attaching policies to individual IAM users across hundreds of accounts is operationally complex and does not scale effectively for an organization-wide requirement.
D. AWS Security Hub is a detective control service used for monitoring compliance and aggregating security findings; it does not prevent or deny actions.
1. AWS Organizations User Guide, "Service control policies (SCPs)": "SCPs are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization... SCPs are powerful because they affect all users, including the root user, for an account." This document also provides an example SCP to "Deny access to AWS based on the requested AWS Region".
2. AWS Control Tower User Guide, "How guardrails work": "Preventive guardrails are enforced using service control policies (SCPs)... A preventive guardrail ensures that your accounts maintain compliance, because it disallows actions that lead to policy violations. For example, the guardrail Disallow changes to AWS Config rules set up by AWS Control Tower prevents any IAM user or role from making changes to the AWS Config rules that are created by AWS Control Tower." This demonstrates the preventative nature of controls implemented via SCPs.
3. AWS Identity and Access Management User Guide, "AWS global condition context keys": The documentation for the aws:RequestedRegion key states, "Use this key to compare the Region that is specified in the request with the Region that is specified in the policy." This is the specific key used in an SCP to enforce regional restrictions.
4. AWS Security Hub User Guide, "What is AWS Security Hub?": "AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation." This confirms its role as a monitoring and detection service, not a preventative one.
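The region restriction described above follows the pattern of the example SCP in the Organizations guide: deny everything outside the approved Regions while exempting global services. The allowed Regions and exemption list below are illustrative.

```python
# SCP sketch: deny actions outside approved Regions for all principals,
# including the root user, in every attached account.
region_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideAllowedRegions",
        "Effect": "Deny",
        "NotAction": [  # global services that must remain reachable
            "iam:*", "organizations:*", "route53:*",
            "cloudfront:*", "support:*", "sts:*",
        ],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "eu-west-1"],
            },
        },
    }],
}
```

Attached once at the organization root (or an OU), this single policy governs hundreds of accounts; the equivalent IAM-policy approach would have to be replicated and maintained in each account separately.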
Question 10
Show Answer
A. Using AWS Fargate requires pulling the entire image, including the massive base layer, from Amazon ECR for every new task, which will result in very long startup times.
B. AWS App Runner is a fully managed service, often built on Fargate, and would face the same performance bottleneck of pulling the large image from the registry at startup.
D. This option is technically invalid. AWS Fargate is a serverless compute option where AWS manages the underlying infrastructure; you cannot specify a custom AMI for Fargate nodes.
1. AWS Compute Blog, "Speeding up container-based application launches with image pre-caching on Amazon ECS": This article discusses strategies for reducing container launch times. It explicitly states, "For EC2 launch type, you can create a custom AMI with container images pre-pulled on the instance. This is the most effective way to reduce image pull latency..." This directly validates the approach in option C.
2. AWS Documentation, "Amazon ECS-optimized AMIs": This documentation, while focusing on the standard AMIs, provides the basis for customization. It notes, "You can also create your own custom AMI that meets the Amazon ECS AMI specification." This confirms that creating a custom AMI with pre-loaded software (like a container base image) is a standard and supported practice for ECS on EC2.
3. AWS Documentation, "AWS Fargate": The official documentation describes Fargate as a technology that "removes the need to provision and manage servers." This serverless model means users do not have access to the underlying instances to customize the AMI, which invalidates option D and highlights the performance issue in options A and B.
4. AWS Documentation, "Amazon EKS on AWS Fargate": In the considerations section, the documentation states, "You don't need to... update AMIs." This confirms that for EKS on Fargate, custom AMIs are not a feature, making the solution proposed in option D impossible to implement.
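One way to bake the custom AMI is to launch a temporary builder instance whose user data pre-pulls the large base image, then snapshot it with ec2.create_image. This is a loose sketch under assumed names, not a complete pipeline (a real build would wait for the pull to finish before imaging).

```python
import base64
import textwrap

# User data for the builder instance: pull the multi-GB base layer once so
# it is already on disk in the resulting AMI (registry URL hypothetical).
user_data = textwrap.dedent("""\
    #!/bin/bash
    docker pull 111122223333.dkr.ecr.us-east-1.amazonaws.com/base:latest
""")
encoded = base64.b64encode(user_data.encode()).decode()  # as passed to run_instances

# Parameters for ec2.create_image, run after the pull completes.
create_image_params = {
    "InstanceId": "i-EXAMPLE",          # the builder instance (hypothetical)
    "Name": "ecs-prebaked-base-image",  # AMI used by the ECS EC2 capacity
}
```

ECS tasks launched on instances built from this AMI only pull the thin application layers at startup, which is the latency win the correct option relies on.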
Question 11
Show Answer
B: Increasing max_connections consumes more database memory and is not a scalable, long-term solution for inefficient connection management from a serverless application.
C: Adding more replica instances (instance scaling) does not solve the core problem of connection exhaustion; it only adds more instances that can also run out of connections.
D: This option incorrectly combines two separate services. Amazon RDS Proxy and the Aurora Data API are different solutions for connection management; you cannot configure a Data API endpoint on an RDS Proxy.
1. AWS Documentation, Amazon RDS User Guide: "Managing connections with Amazon RDS Proxy". This document states, "RDS Proxy allows applications to pool and share connections established with the database. This improves database efficiency and application scalability... This approach is especially useful for serverless applications that have many short-lived connections."
2. AWS Documentation, Amazon RDS User Guide: "Using Amazon RDS Proxy with AWS Lambda". This guide explicitly details the problem scenario: "A Lambda function can establish a large number of simultaneous connections... This large number of connections can overwhelm the database... With RDS Proxy, your Lambda function can reach high concurrency levels without exhausting database connections."
3. AWS Documentation, Amazon RDS User Guide: "Overview of RDS Proxy endpoints". This section explains the functionality of proxy endpoints, including how custom read-only endpoints can be created to connect to the reader instances in a cluster. It states, "For a reader farm, you can associate a read-only endpoint with the proxy. This way, your proxy can connect to the reader DB instances in a multi-AZ DB cluster."
4. AWS Documentation, Amazon Aurora User Guide: "Parameter groups for Aurora DB clusters". The documentation for the max_connections parameter notes that its default value is derived from the DBInstanceClassMemory variable, illustrating the link between connection count and instance memory resources, which supports why simply increasing it (Option B) is not an optimal solution.
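The RDS Proxy setup from the cited guides can be sketched as the parameters for rds.create_db_proxy; the Lambda function then connects to the proxy endpoint instead of the cluster endpoint. All ARNs and names are hypothetical.

```python
# Parameters for rds.create_db_proxy (sketch; ARNs hypothetical).
proxy_params = {
    "DBProxyName": "aurora-proxy",
    "EngineFamily": "MYSQL",
    # The proxy fetches database credentials from Secrets Manager.
    "Auth": [{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds-EXAMPLE",
    }],
    "RoleArn": "arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    "VpcSubnetIds": ["subnet-EXAMPLE1", "subnet-EXAMPLE2"],
}

# The Lambda function's connection string points at the proxy endpoint;
# the proxy pools and reuses a small set of underlying database
# connections across many short-lived Lambda invocations.
lambda_db_host = "aurora-proxy.proxy-EXAMPLE.us-east-1.rds.amazonaws.com"  # hypothetical
```

A read-only proxy endpoint can additionally be associated with the cluster's reader instances, as reference 3 describes, to spread read traffic without exhausting connections.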
Question 12
Show Answer
A. A single SCP applied to a single OU cannot address the units' differing compliance needs, and manual AWS Config and logging setup increases operational effort compared with Control Tower's automated landing-zone deployment.
C. Manually designing OUs, SCPs, Config aggregators, and logging can meet the requirements but demands continuous custom maintenance, producing higher operational overhead than Control Tower's managed solution.
D. Pairing Control Tower with per-account IAM SAML providers requires manual identity-federation setup and lifecycle management in every account, eliminating the low-overhead benefit of IAM Identity Center.
1. AWS Control Tower User Guide, “Benefits of AWS Control Tower” & “How AWS Control Tower works,” Sections 1.1–1.3 (2023-09-26).
2. AWS Control Tower Landing Zone: Governance using guardrails (SCPs & AWS Config Rules), User Guide §3.2.
3. AWS IAM Identity Center Administrator Guide, “Enable identity federation using AD FS,” Steps 1–6 (2023-08-02).
4. AWS Whitepaper: “Organizing Your AWS Environment Using Multiple Accounts,” pp. 15–17, “Using AWS Control Tower for Low-Touch Governance” (2022).
5. MIT Cybersecurity Course Notes (6.858), Lecture “Cloud Governance Models,” slide deck pp. 10–11 describing automated landing-zone frameworks (citing AWS Control Tower).
Question 13
Show Answer
A. This is incorrect because the question explicitly states the account cannot be part of an AWS Organization. Tagging policies enforce tag compliance, not resource creation restrictions like instance type or Region.
B. This is incorrect because Service Control Policies (SCPs) are a feature of AWS Organizations. Since the account cannot be part of the organization, SCPs cannot be applied.
C. This is incorrect because Reserved Instances are a billing discount mechanism and do not restrict permissions. A developer with the necessary IAM permissions could still launch any instance type, regardless of what is reserved.
1. IAM User Guide - Actions, resources, and condition keys for Amazon EC2: This document lists the ec2:InstanceType condition key, which can be used in an IAM policy to control which instance types a user can launch. (See the table under the "Condition keys for Amazon EC2" section).
2. IAM User Guide - AWS global condition context keys: This guide details the aws:RequestedRegion key, which can be used in the Condition block of an IAM policy to restrict actions to specific AWS Regions. (See the table of "Global condition context keys").
3. AWS Organizations User Guide - Service control policies (SCPs): This document states, "SCPs are a type of organization policy that you can use to manage permissions in your organization." This confirms SCPs are only applicable to accounts within an AWS Organization. (See the "Introduction to SCPs" section).
4. Amazon EC2 User Guide for Linux Instances - Reserved Instances: This documentation describes Reserved Instances as a billing construct that provides a discount compared to On-Demand pricing, confirming they are not a permissions-enforcement tool. (See the "What are Reserved Instances?" section).
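Since SCPs are unavailable outside an organization, the restriction has to live in an IAM policy attached to the developers' role, using the two condition keys cited above. A sketch with illustrative allowed values (the instance types and Region are assumptions):

```python
# IAM policy sketch: two separate Deny statements so that violating EITHER
# restriction blocks the request (combining both conditions in one
# statement would require both to be violated at once).
dev_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # deny launching anything but the approved instance types
            "Sid": "DenyDisallowedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]},
            },
        },
        {   # deny EC2 actions outside the approved Region
            "Sid": "DenyDisallowedRegions",
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "us-east-1"},
            },
        },
    ],
}
```

Because explicit Deny overrides any Allow, this holds even if the developers' other policies grant broad EC2 access.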
Question 14
Show Answer
A: Routing all IPv6 traffic (::/0) from private subnets to the main internet gateway would assign public IPv6 addresses and make the instances directly accessible from the internet, violating a core requirement.
B: Standard NAT gateways are designed for IPv4 traffic. They perform network address translation for IPv4 addresses and cannot process or route native IPv6 traffic.
D: While NAT gateways can perform NAT64 (translating IPv6 to IPv4), the purpose-built, standard AWS solution for outbound-only IPv6 internet access is the egress-only internet gateway, so the NAT64 approach is not the best answer here.
1. AWS Documentation - Egress-only internet gateways: "To allow outbound-only communication over IPv6 from instances in your VPC to the internet, you can use an egress-only internet gateway... An egress-only internet gateway is stateful: It forwards traffic from the instances in the subnet to the internet or other AWS services, and then sends the response back to the instances. It does not allow unsolicited inbound traffic from the internet to your instances." (AWS VPC User Guide, "Egress-only internet gateways" section).
2. AWS Documentation - Enable IPv6 traffic for a private subnet: "Create an egress-only internet gateway for your VPC... In the route table for your private subnet, add a route that points all outbound IPv6 traffic (::/0) to the egress-only internet gateway." (AWS VPC User Guide, "IPv6" section, under "Example routing options").
3. AWS Documentation - NAT gateways: "NAT gateways currently support IPv4 traffic." and "If you have instances in a private subnet that are IPv6-only, you can use a NAT gateway to enable these instances to communicate with IPv4-only services... by using NAT64." This confirms standard NAT gateways are for IPv4, and while NAT64 exists, the EIGW is the direct solution for native IPv6 outbound traffic. (AWS VPC User Guide, "NAT gateways" section).
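The private subnet's IPv6 route from the cited example reduces to a single entry pointing ::/0 at the egress-only internet gateway. A boto3-style sketch (ec2.create_route shape, IDs hypothetical):

```python
# Route-table entry: all outbound IPv6 traffic from the private subnet goes
# to the egress-only internet gateway. The gateway is stateful, so replies
# return, but unsolicited inbound IPv6 traffic is dropped.
eigw_route = {
    "RouteTableId": "rtb-EXAMPLE",              # private subnet's route table
    "DestinationIpv6CidrBlock": "::/0",
    "EgressOnlyInternetGatewayId": "eigw-EXAMPLE",
}
```

Contrast this with routing ::/0 to the regular internet gateway, which would make the instances' IPv6 addresses reachable from the internet.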
Question 15
Show Answer
B. CORS is a browser security feature for controlling cross-domain requests; it is not an authentication or authorization mechanism for IAM principals.
C. A Lambda authorizer is a valid authentication method, but it is overly complex for this use case when native AWS_IAM authorization directly meets the requirement.
D. Client certificates authenticate the client but are not directly integrated with IAM roles or users for authorization. CloudWatch provides logs and metrics, not end-to-end tracing and service maps like X-Ray.
1. AWS Documentation: Control access to an API with IAM permissions. This document explicitly states, "To control access to your API, you can use IAM permissions... you set the authorizationType property of a method to AWS_IAM." It also details the need for the execute-api:Invoke permission. (Amazon Web Services, API Gateway Developer Guide, Section: "Control access to an API with IAM permissions").
2. AWS Documentation: Using AWS X-Ray to trace API Gateway requests. This guide explains, "You can use AWS X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services... X-Ray gives you an end-to-end view of an entire request". (Amazon Web Services, API Gateway Developer Guide, Section: "Tracing and analyzing requests with AWS X-Ray").
3. AWS Documentation: Service maps. This page describes how X-Ray uses trace data to generate a service map, which "shows service nodes, their connections, and health data for each node, including average latency and failures." This directly supports the requirement for service maps and latency analysis. (Amazon Web Services, AWS X-Ray Developer Guide, Section: "Viewing the service map").
4. AWS Documentation: Enabling CORS for a REST API resource. This document clarifies that CORS is for enabling clients in one domain to interact with resources in a different domain, highlighting its purpose is not IAM-based authorization. (Amazon Web Services, API Gateway Developer Guide, Section: "Enabling CORS for a REST API resource").
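With the method's authorizationType set to AWS_IAM, callers sign requests with SigV4 and must hold the execute-api:Invoke permission cited above. A sketch of the caller-side policy (API ID, stage, and path are hypothetical):

```python
# IAM policy granting invoke access to one API method.
# Resource format: arn:aws:execute-api:region:account-id:api-id/stage/http-method/resource-path
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowInvokeOrdersApi",
        "Effect": "Allow",
        "Action": "execute-api:Invoke",
        "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/GET/orders",
    }],
}
```

Attaching this policy to the consuming IAM roles, plus enabling X-Ray tracing on the stage, satisfies both the authorization and the service-map/latency requirements without custom authorizer code.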
Question 16
Show Answer
A. An Auto Scaling group is a regional construct and cannot span across multiple VPCs in different AWS Regions. An ALB in one region cannot directly serve EC2 instances in another.
C. An Application Load Balancer is a regional service and cannot span across VPCs in different AWS Regions. Inter-Region VPC peering is for private network traffic, not public web application failover.
D. This option is imprecise: Amazon Route 53 is a global service, not a regional one (you create record sets in a hosted zone, not per Region), and it omits the failover routing policy that option B correctly specifies.
1. AWS Documentation, "Disaster Recovery of Workloads on AWS: Recovery in the Cloud" (July 2021): This whitepaper describes DR strategies. The "Warm Standby" and "Pilot Light" approaches, which are active-passive models, use the exact architecture described in the correct answer. Page 15 states, "For all of these approaches, you can use Amazon Route 53 to resolve your domain name and to check the health of your primary environment. In the event of a disaster, you can have Route 53 fail over to your DR environment."
2. AWS Documentation, "Amazon Route 53 Developer Guide": Under the section "Choosing a routing policy," the guide explains "Failover routing." It states, "Use failover routing when you want to configure active-passive failover. When the primary resource becomes unhealthy, Route 53 automatically responds to queries with the secondary resource." This directly supports the use of Route 53 for the required active-passive configuration.
3. AWS Documentation, "User Guide for Application Load Balancers": In the "What is an Application Load Balancer?" section, it is established that a load balancer serves traffic to targets, such as EC2 instances, in a single region. It states, "You can add one or more listeners to your load balancer... Each listener has a rule that forwards requests to one or more target groups in the same Region." This confirms ALBs are regional.
4. AWS Documentation, "Amazon EC2 Auto Scaling User Guide": The guide's core concepts explain that an Auto Scaling group contains a collection of Amazon EC2 instances within a single AWS Region. The documentation on "Working with multiple Availability Zones" confirms that while an ASG can span AZs, it is confined to the region where it was created.
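The failover routing policy in the correct answer can be sketched as a Route 53 change batch. This is a minimal illustration, not the exam's exact configuration; the domain, ALB DNS names, and health check ID are placeholder assumptions:

```python
# Sketch: build a Route 53 failover record change batch (the option B pattern).
# All identifiers (domain, ALB DNS names, health check ID) are placeholders.

def failover_change_batch(domain, primary_dns, secondary_dns, health_check_id):
    """Return a ChangeBatch with PRIMARY and SECONDARY failover records."""
    def record(failover, dns_name, **extra):
        rec = {
            "Name": domain,
            "Type": "CNAME",
            "SetIdentifier": f"{failover.lower()}-region",
            "Failover": failover,          # PRIMARY or SECONDARY
            "TTL": 60,
            "ResourceRecords": [{"Value": dns_name}],
        }
        rec.update(extra)
        return rec

    return {
        "Changes": [
            # The primary record is served while its health check passes.
            {"Action": "UPSERT",
             "ResourceRecordSet": record("PRIMARY", primary_dns,
                                         HealthCheckId=health_check_id)},
            # The secondary record answers queries once the primary is unhealthy.
            {"Action": "UPSERT",
             "ResourceRecordSet": record("SECONDARY", secondary_dns)},
        ]
    }

# Usage (not executed here):
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z123EXAMPLE", ChangeBatch=failover_change_batch(...))
```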
Question 17
Show Answer
A. Amazon EventBridge does not have a native, direct integration to trigger rules based on file creation events within an Amazon EFS file system. An intermediary polling mechanism would still be required.
B. Amazon EFS does not have a feature called "EFS event notification" that can directly invoke other AWS services like AWS Fargate. This trigger mechanism is fictitious.
D. AWS Lambda functions, whether using a ZIP archive or a container image, have a maximum execution timeout of 15 minutes (900 seconds). This is insufficient for the data processing job that can take up to 2 hours.
1. AWS Lambda Quotas: The official AWS Lambda Developer Guide states the maximum execution duration for a function.
Source: AWS Lambda Developer Guide, Quotas.
Reference: Under the "Function configuration, deployment, and execution quotas" table, the "Timeout" resource has a default of 3 seconds and a maximum of 900 seconds (15 minutes). This directly invalidates option D.
Link: https://docs.aws.amazon.com/lambda/latest/dg/quotas.html
2. Using AWS Lambda with Amazon S3: The Amazon S3 User Guide details how to use S3 event notifications to trigger Lambda functions, which is the pattern proposed in options C and D.
Source: Amazon S3 User Guide, Invoking AWS Lambda functions from Amazon S3.
Reference: The section "Walkthrough: Using an S3 trigger to invoke a Lambda function" describes this exact integration.
Link: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
3. Orchestrating Long-Running Jobs with Fargate: The pattern of using a short-lived service (like Lambda) to trigger a long-running container task on Fargate is a documented best practice.
Source: AWS Whitepaper, "Serverless Architectures with AWS Lambda".
Reference: Section "Orchestrating Multiple AWS Lambda Functions for a Long-Running Workflow". While this discusses Step Functions, the principle of offloading long-running tasks from Lambda to a suitable service like Fargate is a core concept. The RunTask API for ECS/Fargate is designed for this purpose.
Link: https://d1.awsstatic.com/whitepapers/serverless-architectures-with-aws-lambda.pdf (Page 13 discusses offloading tasks).
4. Amazon EFS Integrations: The EFS documentation does not list a direct event source integration for services like EventBridge or a native "event notification" system for file creation.
Source: Amazon EFS User Guide.
Reference: A review of the "Working with other AWS services" section shows integrations with services like DataSync, Backup, and File Gateway, but no direct, push-based event notification mechanism for file system operations. This invalidates the triggers in options A and B.
Link: https://docs.aws.amazon.com/efs/latest/userguide/how-it-works.html#how-it-works-integrations
Question 18
Show Answer
A. Route 53 failover is not suitable for intermittent application-level errors. It relies on health checks that may not register transient 503 errors, and failover is not immediate because of DNS TTLs and resolver caching.
B. This option has the same drawbacks as option A (DNS failover is not immediate) and adds the unnecessary complexity and cost of a second CloudFront distribution.
D. Using a CloudFront Function or Lambda@Edge to handle this requires writing, deploying, and maintaining custom code, which represents higher operational overhead than a simple configuration change like setting up an origin group.
1. AWS CloudFront Developer Guide - Optimizing high availability with CloudFront origin failover: This document explicitly states, "You can set up origin failover for scenarios that require high availability. To get started, you create an origin group with two origins: a primary and a secondary... If the primary origin is unavailable, or if it returns a specific HTTP response status code that indicates a failure, CloudFront automatically switches to the secondary origin." This directly supports the mechanism in option C.
2. AWS CloudFront Developer Guide - Origin group status codes: This section details the specific HTTP status codes (including 503 Service Unavailable) that can trigger a failover from the primary to the secondary origin within an origin group.
3. AWS Route 53 Developer Guide - Failover routing: This guide explains that failover routing works by "routing traffic to a resource when the resource is healthy and to a different resource when the first resource is unhealthy." This is based on health checks and DNS propagation, which is not immediate and less suitable for handling transient, per-request HTTP errors than CloudFront's origin failover.
4. AWS CloudFront Developer Guide - Comparing Lambda@Edge and CloudFront Functions: This documentation clarifies the capabilities of different edge compute options. It shows that implementing the logic described in option D would require Lambda@Edge, which is more complex and operationally intensive than using the built-in origin failover feature.
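The origin-group failover criteria from the correct answer can be sketched as the structure CloudFront expects in a distribution configuration. The origin IDs are placeholders; 503 is among the status codes that can trigger failover to the secondary origin:

```python
# Sketch of the origin-group failover settings from the correct answer.
# Origin IDs are placeholders; 503 is among the codes that trigger failover.

def origin_group(group_id, primary_id, secondary_id,
                 status_codes=(500, 502, 503, 504)):
    """Return the OriginGroup structure used in a CloudFront distribution config."""
    return {
        "Id": group_id,
        "FailoverCriteria": {
            "StatusCodes": {
                "Quantity": len(status_codes),
                "Items": list(status_codes),  # codes that make CloudFront retry
            }
        },
        "Members": {
            "Quantity": 2,
            "Items": [{"OriginId": primary_id},     # tried first
                      {"OriginId": secondary_id}],  # used on failure
        },
    }
```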
Question 19
Show Answer
A. An AWS PrivateLink endpoint service is a regional construct and can only be associated with a Network Load Balancer that exists in the same AWS Region.
C. This option introduces an unnecessary Application Load Balancer (ALB), adding complexity and cost. The NLB can directly target the EC2 instances by IP address without an intermediary ALB.
D. An NLB's 'instance' target group can only register instances by their ID if they are in the same region as the NLB. To target resources in another region, an 'IP' target group must be used.
1. AWS Documentation - Network Load Balancer Target Groups: "You can register targets by instance ID or by IP address... If you specify targets using an IP address, you can use IP addresses from the subnets of the target group's VPC, or from any private IP address range from a peered VPC..." This supports using an IP target group over an Inter-Region VPC peering connection. (Source: AWS Documentation, User Guide for Network Load Balancers, section "Target groups for your Network Load Balancers", subsection "Register targets").
2. AWS Documentation - AWS PrivateLink Concepts: "An endpoint service is a service that you host in your VPC... When you create an endpoint service, you must specify a Network Load Balancer or Gateway Load Balancer for your service in each Availability Zone." This confirms the load balancer must be co-located with the endpoint service in the same region. (Source: AWS Documentation, AWS PrivateLink Guide, section "Concepts", subsection "Endpoint services").
3. AWS Documentation - VPC Peering Basics: "A VPC peering connection enables you to route traffic between the peered VPCs using private IPv4 addresses or IPv6 addresses... Instances in either VPC can communicate with each other as if they are within the same network." This confirms connectivity for the IP-based targets. (Source: AWS Documentation, Amazon VPC Peering Guide, section "What is VPC peering?").
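Registering cross-Region targets by IP over the peering connection can be sketched as the parameters for `elbv2:RegisterTargets`. The target group ARN and addresses below are placeholders; for IP targets reachable only through a peering connection, `AvailabilityZone` is set to `all`:

```python
# Sketch of registering cross-Region targets by IP over VPC peering.
# The target group ARN and addresses are placeholders; for IPs outside the
# NLB's VPC (e.g. via peering), AvailabilityZone must be "all".

def ip_target_registration(target_group_arn, peer_ips, port=443):
    return {
        "TargetGroupArn": target_group_arn,
        "Targets": [
            {"Id": ip, "Port": port, "AvailabilityZone": "all"}
            for ip in peer_ips
        ],
    }

# Usage (not executed here):
#   boto3.client("elbv2").register_targets(**ip_target_registration(
#       "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/tg/abc",
#       ["10.1.0.10", "10.1.0.11"]))
```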
Question 20
Show Answer
A. In-place deployments update instances one by one, causing downtime for each instance during the update. Rolling back requires a full, time-consuming redeployment of the old version.
C. AWS CloudFormation is an Infrastructure as Code (IaC) service, not a dedicated application deployment tool. Using it for application updates is less efficient, and rolling back by pushing another update is slow.
D. AWS OpsWorks is a configuration management service. Like an in-place deployment, it does not inherently provide a rapid, traffic-shifting rollback mechanism as required by the scenario.
1. AWS CodeDeploy User Guide, "Overview of a blue/green deployment": "A blue/green deployment is a deployment strategy in which you create a new environment (the green environment) that is a replica of your production environment (the blue environment)... This strategy allows you to test the new environment before you send production traffic to it. If there's a problem with the green environment, you can roll back to the blue environment immediately."
2. AWS CodeDeploy User Guide, "In-place versus blue/green deployments": This section explicitly contrasts the two methods, noting that for in-place deployments, "To roll back the application, you redeploy a previous revision of the application." For blue/green, "Rolling back is fast and easy. You can roll back to the original environment as soon as a problem is detected."
3. AWS Well-Architected Framework, Operational Excellence Pillar, "OPS 08 - How do you evolve operations?": The framework recommends deployment strategies that reduce the risk of failure, stating, "Use deployment strategies such as blue/green and canary deployments to reduce the impact of failed deployments." It highlights that blue/green deployments allow for rapid rollback by redirecting traffic.
4. AWS Whitepaper: "Blue/Green Deployments on AWS" (May 2020): Page 4 discusses the benefits, stating, "Blue/green deployments provide a number of advantages... It reduces downtime... It also reduces risk; if the new version of your application has issues, you can roll back to the previous version immediately."
Question 21
Show Answer
B. AWS SMS: AWS Server Migration Service (SMS) is a tool for executing the migration of virtual machines, not for the initial discovery, assessment, and planning phases.
C. AWS X-Ray: This service is used for analyzing and debugging performance issues in distributed applications, typically those already running in the cloud, not for pre-migration planning.
E. Amazon Inspector: This is a security vulnerability assessment service for workloads running on AWS. It is not used for discovering or planning the migration of on-premises infrastructure.
1. AWS Application Discovery Service: AWS Documentation, "What Is AWS Application Discovery Service?". It states, "AWS Application Discovery Service helps you plan your migration to the AWS Cloud by collecting usage and configuration data about your on-premises servers."
Source: AWS Documentation. (2023). What Is AWS Application Discovery Service?. AWS. Retrieved from https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html
2. AWS Cloud Adoption Readiness Tool (CART): AWS Cloud Adoption Framework Documentation. "The AWS Cloud Adoption Readiness Tool (CART) is a free, online self-assessment that helps you understand where you are in your cloud journey."
Source: AWS. (2023). AWS Cloud Adoption Framework. AWS. Retrieved from https://aws.amazon.com/cloud-adoption-framework/ (See section on CART).
3. AWS Migration Hub: AWS Documentation, "What is AWS Migration Hub?". It states, "AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions... Migration Hub also provides a portfolio of migration and modernization tools that simplify and accelerate your projects."
Source: AWS Documentation. (2023). What is AWS Migration Hub?. AWS. Retrieved from https://docs.aws.amazon.com/migrationhub/latest/userguide/what-is-migrationhub.html
4. AWS Server Migration Service (SMS): AWS Documentation, "What Is AWS Server Migration Service?". It describes the service as one that "automates the migration of your on-premises VMware vSphere or Microsoft Hyper-V/SCVMM virtual machines (VMs) to the AWS Cloud." This confirms its role in execution, not planning.
Source: AWS Documentation. (2023). What Is AWS Server Migration Service?. AWS. Retrieved from https://docs.aws.amazon.com/server-migration-service/latest/userguide/what-is-sms.html
Question 22
Show Answer
A. A spread placement group has a hard limit of seven running instances per Availability Zone, so setting a minimum of eight is not possible. This also changes the architecture's intent.
C. AWS does not provide a feature to merge two placement groups. This is not a valid operation.
D. Launching on Dedicated Hosts is a significant architectural and cost change. It is not the most direct or common first step to troubleshoot a transient capacity error.
1. Amazon EC2 User Guide for Linux Instances: Under the section "Troubleshoot placement groups," the guide explicitly states: "If you receive a capacity error when launching an instance in a placement group that already has running instances, stop and start all of the instances in the placement group, and then try the launch again. Starting the instances may migrate them to hardware that has capacity for all the requested instances." (See section: Placement groups > Troubleshoot placement groups).
2. Amazon EC2 User Guide for Linux Instances: The section on "Spread placement groups" details the limitations: "A spread placement group can span multiple Availability Zones within the same Region, and you can have a maximum of seven running instances per Availability Zone per group." This confirms that option A is invalid. (See section: Placement groups > Spread placement groups).
3. AWS API Reference - CreatePlacementGroup: The documentation for creating and managing placement groups does not include any action or parameter for merging existing groups, confirming that option C is not a valid AWS feature. (See the CreatePlacementGroup and related EC2 API actions in the AWS Command Line Interface Reference or SDK documentation).
Question 23
Show Answer
B. It is not possible to encrypt an existing EBS volume in place. The process requires creating a new, encrypted volume from a snapshot of the original.
D. The specific guardrail for detecting unencrypted EBS volumes is part of the "strongly recommended" guardrail set, not the "mandatory" set. This option would fail to meet the detection requirement.
E. While technically feasible for a single account, this approach does not provide the centralized governance and compliance management across multiple accounts that AWS Control Tower offers, which is a core requirement.
1. Encrypting an unencrypted volume: AWS Documentation, Amazon EC2 User Guide for Linux Instances, section "Encrypting an unencrypted volume". It states, "To encrypt an unencrypted volume, you must create a snapshot of the volume. Then, you can either restore the snapshot to a new, encrypted volume... or you can create an encrypted copy of the snapshot and restore it to a new, encrypted volume." This supports option C.
2. AWS Control Tower Guardrails: AWS Documentation, AWS Control Tower User Guide, section "Guardrail reference". The guide lists the guardrail Detect whether encryption is enabled for EBS volumes attached to EC2 instances (Identifier: AWS-GR_ENCRYPTED_VOLUMES) under the "Strongly recommended" behavior category. This supports option A and invalidates option D.
3. AWS Control Tower Overview: AWS Documentation, AWS Control Tower User Guide, section "What is AWS Control Tower?". It describes Control Tower as "the easiest way to set up and govern a secure, multi-account AWS environment," which aligns with the requirement for a central management solution.
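The snapshot-and-restore path for encrypting an existing volume (since in-place encryption is not possible) can be sketched as the parameters for `ec2:CreateVolume`. The snapshot ID, Availability Zone, and KMS key below are placeholder assumptions:

```python
# Sketch of the remediation path: restore a snapshot of the unencrypted volume
# as a new, encrypted volume. The KMS key and AZ are placeholder assumptions.

def encrypted_copy_params(snapshot_id, availability_zone, kms_key_id=None):
    """Parameters for ec2.create_volume restoring a snapshot as an encrypted volume."""
    params = {
        "SnapshotId": snapshot_id,       # snapshot of the original, unencrypted volume
        "AvailabilityZone": availability_zone,
        "Encrypted": True,               # the restored volume is encrypted
    }
    if kms_key_id:                       # otherwise the default aws/ebs key is used
        params["KmsKeyId"] = kms_key_id
    return params

# Usage (not executed here):
#   snap = ec2.create_snapshot(VolumeId="vol-0example")
#   ec2.create_volume(**encrypted_copy_params(snap["SnapshotId"], "us-east-1a"))
```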
Question 24
Show Answer
A. S3 Standard-IA is not the most cost-effective archival storage class; S3 Glacier Deep Archive offers significantly lower storage costs, fitting the retrieval time tolerance.
B. A Volume Gateway provides block-level storage via the iSCSI protocol, not the required file-level access via NFS.
C. A Tape Gateway presents a virtual tape library (VTL) interface, which is not compatible with the required NFS protocol.
D. This option is logically flawed: it suggests deploying a File Gateway and then moving files to a Tape Gateway, which are distinct and incompatible gateway types.
1. AWS Storage Gateway User Guide, "What is Amazon S3 File Gateway?": "Amazon S3 File Gateway presents a file-based interface to Amazon S3... With a file gateway, you can store and retrieve Amazon S3 objects through standard file protocols such as Network File System (NFS) and Server Message Block (SMB)." This confirms File Gateway is the correct choice for NFS support.
2. Amazon S3 User Guide, "Amazon S3 storage classes": The comparison table in this section shows that S3 Glacier Deep Archive is the "lowest-cost object storage class for long-term retention" and has a "first-byte latency" of "Hours". This aligns with the cost and retrieval requirements.
3. Amazon S3 User Guide, "Managing your storage lifecycle": "You can define rules to transition objects from one storage class to another... For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you create them, or archive objects to the S3 Glacier Deep Archive storage class 60 days after you create them." This confirms the use of lifecycle rules for transitioning to S3 Glacier Deep Archive.
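The lifecycle transition to S3 Glacier Deep Archive can be sketched as the rule structure S3 expects. The 60-day cutoff and empty prefix are assumptions for illustration:

```python
# Sketch of the lifecycle rule that archives File Gateway objects to S3 Glacier
# Deep Archive. The 60-day cutoff and empty prefix are assumptions.

def deep_archive_rule(days=60, prefix=""):
    return {
        "ID": f"archive-after-{days}-days",
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},  # empty prefix applies to the whole bucket
        "Transitions": [{"Days": days, "StorageClass": "DEEP_ARCHIVE"}],
    }

# Usage (not executed here):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="file-gateway-bucket",
#       LifecycleConfiguration={"Rules": [deep_archive_rule()]})
```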
Question 25
Show Answer
A. This method fails the requirement to log all connections in CloudTrail. While fetching the key from Secrets Manager is logged, the subsequent SSH connection from the engineer's client to the EC2 instance is standard network traffic and is not an AWS API call, so it will not be logged by CloudTrail.
B. Similar to option A, running the Systems Manager document is logged in CloudTrail, but the actual SSH connection made afterward is not. This fails the comprehensive logging requirement.
D. This solution is overly complex and misuses Session Manager, which is designed to provide shell access without SSH keys. Furthermore, it fails the connection logging requirement for the same reason as options A and B: the final SSH connection is not an auditable AWS API call.
1. AWS Compute Blog, "New: Using Amazon EC2 Instance Connect for SSH access to your EC2 Instances": "All EC2 Instance Connect API calls are logged by AWS CloudTrail, giving you the visibility you need for governance and compliance." This directly supports the logging requirement. The article also explains, "EC2 Instance Connect does not require the instance to have a public IPv4 address." and "You can use IAM policies to grant and revoke access."
2. AWS EC2 User Guide for Linux Instances, "Connect to your Linux instance with EC2 Instance Connect": "When you connect to an instance using EC2 Instance Connect, the Instance Connect API pushes a one-time-use SSH public key to the instance metadata where it remains for 60 seconds. An IAM policy attached to your IAM user authorizes your user to push the public key to the instance metadata." This confirms the use of temporary, unique keys for connections.
3. AWS EC2 User Guide for Linux Instances, "Set up EC2 Instance Connect": This section details the prerequisites, including the IAM permission ec2-instance-connect:SendSSHPublicKey on the instance resource, which is the action that gets logged in CloudTrail. It states, "All connection requests using EC2 Instance Connect are logged to AWS CloudTrail so you can audit connection requests."
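The least-privilege IAM statement behind EC2 Instance Connect can be sketched as follows; `ec2-instance-connect:SendSSHPublicKey` is the CloudTrail-logged action that authorizes each connection. The instance ARN and OS user are placeholders:

```python
# Sketch of the IAM policy statement behind EC2 Instance Connect. The instance
# ARN and OS user are placeholders; the ec2:osuser condition restricts which
# OS account the pushed key may log in as.

import json

def instance_connect_policy(instance_arn, os_user="ec2-user"):
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2-instance-connect:SendSSHPublicKey",
            "Resource": instance_arn,
            "Condition": {"StringEquals": {"ec2:osuser": os_user}},
        }],
    })
```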
Question 26
Show Answer
A. This is a reactive, not a preventative, solution. AWS Config detects non-compliant resources after they have been created and notifies an administrator, failing to ensure users can only launch approved applications.
B. Kerberos provides authentication for users and services within an EMR cluster (e.g., for Hadoop services). It does not control the provisioning of the cluster itself, its configuration, or AWS resource tagging.
D. This approach is also reactive. While CloudFormation standardizes deployment, relying on AWS Config for enforcement means non-compliant clusters can still be launched, with remediation occurring only after detection.
1. AWS Service Catalog Administrator Guide, "What Is AWS Service Catalog?": "AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS... You can control which IT services and versions are available, the configuration of the available services, and permission access by individual, group, department, or cost center." This directly supports the use of Service Catalog for controlling approved configurations and permissions.
2. AWS Service Catalog Administrator Guide, "Launch constraints": "A launch constraint specifies the IAM role that AWS Service Catalog assumes when an end user launches a product. Without a launch constraint, AWS Service Catalog assumes the end user's IAM role for all of the product's AWS resources." This explains how Service Catalog enforces least privilege for different personas.
3. AWS Service Catalog Administrator Guide, "TagOption library": "A TagOption is a key-value pair that allows administrators to... enforce the creation of tags on provisioned products." This confirms its capability to enforce mandatory tagging.
4. AWS Config Developer Guide, "What Is AWS Config?": "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources... Config continuously monitors and records your AWS resource configurations..." This establishes AWS Config as a detective control, which is less suitable than the preventative control required by the question.
5. Amazon EMR Management Guide, "Use Kerberos for authentication with Amazon EMR": "With Amazon EMR, you can provision a Kerberized cluster that integrates with the cluster's Hadoop applications, such as YARN, HDFS, and Hive, to provide authentication between hosts and services in your cluster." This reference clarifies that Kerberos is for in-cluster authentication, not AWS resource provisioning governance.
Question 27
Show Answer
D. VPC peering creates a full mesh of connections that is unmanageable and operationally expensive for hundreds of VPCs, violating the "least operational effort" requirement.
E. This statement is technically inaccurate. Attachments are configured to connect VPCs and VPNs to a Transit Gateway, not directly between each other.
F. While VPC route tables must be updated to point to the Transit Gateway, the central control for inter-VPC and on-premises traffic is managed by the Transit Gateway route tables, as stated in option C.
1. AWS Documentation: AWS Transit Gateway. "A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPCs) and on-premises networks... You can share your transit gateway with other AWS accounts using AWS Resource Access Manager (AWS RAM)." (Source: AWS Transit Gateway User Guide, "What is a transit gateway?")
2. AWS Documentation: Transit gateway attachments. "To use your transit gateway, you must create an attachment for your network resources... You can create the following attachments: VPC... VPN" (Source: AWS Transit Gateway User Guide, "Transit gateway attachments")
3. AWS Documentation: Transit gateway route tables. "A transit gateway has a default route table and can optionally have additional route tables. A route table inside a transit gateway determines the next-hop for the packet... By default, the VPCs and VPN connections are associated with the default transit gateway route table." To implement custom routing and isolation, you create separate route tables and manage associations. (Source: AWS Transit Gateway User Guide, "Routing")
4. AWS Whitepaper: Building a Scalable and Secure Multi-VPC AWS Network Infrastructure. This paper discusses the limitations of VPC peering at scale and presents AWS Transit Gateway as the recommended solution for a hub-and-spoke network topology, highlighting its scalability and centralized routing control. (See section: "AWS Transit Gateway")
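The central routing control described in option C can be sketched as a static route in a Transit Gateway route table that steers on-premises-bound traffic to the VPN attachment. The route table ID, attachment ID, and CIDR below are placeholder assumptions:

```python
# Sketch of central routing on a Transit Gateway: a static route in a TGW route
# table pointing the on-premises CIDR at the VPN attachment. All IDs and the
# CIDR are placeholder assumptions.

def tgw_route_params(route_table_id, attachment_id, cidr="10.100.0.0/16"):
    return {
        "TransitGatewayRouteTableId": route_table_id,  # central control point
        "DestinationCidrBlock": cidr,                  # on-premises range
        "TransitGatewayAttachmentId": attachment_id,   # VPN (or VPC) attachment
    }

# Usage (not executed here):
#   boto3.client("ec2").create_transit_gateway_route(
#       **tgw_route_params("tgw-rtb-0example", "tgw-attach-0example"))
```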
Question 28
Show Answer
A. A Volume Gateway provides block storage to an on-premises server; users would still connect to the on-premises data center, not solving the bandwidth issue.
C. Amazon FSx for Lustre is a high-performance file system for workloads like HPC and is not the appropriate service for general-purpose Windows home directories.
E. AWS Direct Connect establishes a dedicated connection between the on-premises data center and AWS but does not solve the ingress bandwidth bottleneck from remote users.
1. Amazon FSx for Windows File Server Documentation: "Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. ... Common use cases include home directories, user shares, and application file shares."
Source: AWS Documentation, "What is Amazon FSx for Windows File Server?", Introduction.
2. AWS Client VPN Documentation: "AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client."
Source: AWS Documentation, "What is AWS Client VPN?", Introduction.
3. Comparing Amazon FSx for Windows File Server and Amazon FSx for Lustre: "FSx for Windows File Server is designed for a broad set of Windows-based applications and workloads... FSx for Lustre is designed for speed and is ideal for high-performance computing (HPC), machine learning, and media data processing workflows."
Source: AWS Documentation, "Amazon FSx - FAQs", "When should I use Amazon FSx for Windows File Server vs. Amazon FSx for Lustre?".
4. AWS Storage Gateway (Volume Gateway) Documentation: "A volume gateway represents the family of gateways that support block-based volumes, previously referred to as gateway-cached and gateway-stored volumes. ... You can back up your local data to the volumes in AWS." This confirms it provides block storage to on-premises applications, not a direct file access solution for remote users.
Source: AWS Documentation, "How Volume Gateway works".
Question 29
Show Answer
B. AWS Trusted Advisor provides cost optimization recommendations by identifying unused or idle resources; it does not track spending against a predefined budget threshold for alerting.
C. AWS Control Tower guardrails are for enforcing governance policies and detecting non-compliant resources, not for monitoring and alerting on spending against a specific budget amount.
D. This is an overly complex and expensive solution. It requires building and maintaining a custom data query and alerting pipeline, whereas AWS Budgets provides this functionality as a managed service.
1. AWS Budgets User Guide, "Managing your costs with AWS Budgets": This document states, "You can use AWS Budgets to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount." This directly supports the use of AWS Budgets for the required alerting.
2. AWS Budgets User Guide, "Creating a cost budget": This section details the process of setting up a budget, specifying the period (e.g., Daily), and setting the budgeted amount. It also describes how to "Configure alerts" based on actual costs reaching a percentage of the budget.
3. AWS Cost Management User Guide, "Monitoring your usage and costs": This guide explains that the management account in an organization can use AWS Cost Management features, including AWS Budgets, to view the combined costs of all accounts.
4. AWS Trusted Advisor Documentation, "AWS Trusted Advisor check reference": The "Cost Optimization" section lists checks like "Amazon RDS Idle DB Instances" and "Low Utilization Amazon EC2 Instances," confirming its role is to provide recommendations, not budget-based alerting.
5. AWS Control Tower User Guide, "How guardrails work": This document describes guardrails as "pre-packaged governance rules for security, operations, and cost management." Their function is policy enforcement, not dynamic budget tracking and alerting.
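The AWS Budgets setup from the correct answer can be sketched as a daily cost budget with an alert on actual spend. The budget amount, threshold, and SNS topic ARN are placeholder assumptions:

```python
# Sketch of a daily cost budget with an actual-spend alert. The amount,
# threshold, and SNS topic ARN are placeholder assumptions.

def daily_budget(name, limit_usd, topic_arn, threshold_pct=100.0):
    budget = {
        "BudgetName": name,
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "DAILY",
        "BudgetType": "COST",
    }
    notifications = [{
        "Notification": {
            "NotificationType": "ACTUAL",        # alert on real, not forecasted, spend
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": threshold_pct,          # percent of the budgeted amount
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "SNS", "Address": topic_arn}],
    }]
    return budget, notifications

# Usage (not executed here), from the organization's management account:
#   budget, notifs = daily_budget("daily-cap", 500,
#       "arn:aws:sns:us-east-1:111122223333:cost-alerts")
#   boto3.client("budgets").create_budget(AccountId="111122223333",
#       Budget=budget, NotificationsWithSubscribers=notifs)
```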
Question 30
Show Answer
A. Provisioning and managing DNS servers on EC2 instances requires manual setup, patching, scaling, and high-availability configuration, which constitutes significant administrative overhead.
B. An Amazon Route 53 private hosted zone is used when Route 53 is the authoritative DNS service for a domain within a VPC. It is not used for forwarding queries to an external resolver.
D. Deploying new Active Directory domain controllers and configuring a trust relationship is a complex infrastructure task that is excessive for solving only a DNS resolution requirement.
1. AWS Documentation - What is Amazon Route 53 Resolver?: "With Resolver, you can set up rules to conditionally forward requests to DNS resolvers on your remote network... This functionality lets you resolve DNS names for resources in your on-premises data center." This directly supports the use of conditional forwarding for the described scenario.
Source: AWS Documentation, Amazon Route 53 Developer Guide, "What is Amazon Route 53 Resolver?".
2. AWS Documentation - Resolving DNS queries between VPCs and your network: "To forward DNS queries from your VPCs to your network... you create a Route 53 Resolver outbound endpoint and a forwarding rule." This outlines the exact components described in the correct answer.
Source: AWS Documentation, Amazon Route 53 Developer Guide, "Resolving DNS queries between VPCs and your network", Section: "Forwarding outbound DNS queries to your network".
3. AWS Documentation - Simplifying DNS management in a hybrid cloud with Amazon Route 53 Resolver: This whitepaper explains the architecture: "For outbound DNS queries (from VPC to on-premises), you create a Route 53 Resolver outbound endpoint... You then create a rule that specifies the domain name for the DNS queries that you want to forward... and the IP addresses of the DNS resolvers in your on-premises network."
Source: AWS Whitepaper, Simplifying DNS management in a hybrid cloud with Amazon Route 53 Resolver, Page 5.
4. AWS Documentation - Working with private hosted zones: "A private hosted zone is a container that holds information about how you want to route traffic for a domain and its subdomains within one or more Amazon Virtual Private Clouds (Amazon VPCs)." This confirms that its purpose is authoritative resolution within a VPC, not forwarding.
Source: AWS Documentation, Amazon Route 53 Developer Guide, "Working with private hosted zones".
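The outbound forwarding rule from the correct answer can be sketched as the parameters for `route53resolver:CreateResolverRule`. The domain, endpoint ID, and on-premises DNS server IPs are placeholder assumptions:

```python
# Sketch of the Resolver forwarding rule: queries for the on-premises domain
# are forwarded through an outbound endpoint to on-premises DNS servers.
# Domain, endpoint ID, and server IPs are placeholder assumptions.

def forward_rule_params(domain, endpoint_id, onprem_dns_ips):
    return {
        "CreatorRequestId": f"fwd-{domain}",   # idempotency token
        "RuleType": "FORWARD",
        "DomainName": domain,                  # e.g. corp.example.com
        "ResolverEndpointId": endpoint_id,     # outbound endpoint in the VPC
        "TargetIps": [{"Ip": ip, "Port": 53} for ip in onprem_dns_ips],
    }

# Usage (not executed here):
#   boto3.client("route53resolver").create_resolver_rule(**forward_rule_params(
#       "corp.example.com", "rslvr-out-0example", ["172.16.0.2", "172.16.0.3"]))
```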
Question 31
Show Answer
A. A partition placement group spreads instances across different racks to reduce correlated failures, which does not provide the lowest possible network latency required for this workload.
B. Compute optimized instances are not the best fit for an in-memory database, and a partition placement group prioritizes fault tolerance over the lowest possible latency.
D. A spread placement group places each instance on distinct hardware, maximizing separation for high availability, which is the opposite of the low-latency, co-location requirement.
1. AWS Documentation, EC2 User Guide for Linux Instances, "Placement groups": It states, "A cluster placement group is a logical grouping of instances within a single Availability Zone... This strategy enables workloads to achieve the low-latency, high-throughput network performance required for tightly-coupled, node-to-node communication typical of HPC applications." This directly supports using a cluster placement group for the lowest latency.
2. AWS Documentation, "Amazon EC2 Instance Types": The documentation for Memory Optimized instances (e.g., R, X families) states, "Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory." This validates the choice of memory-optimized instances for an in-memory database.
3. AWS Well-Architected Framework, Performance Efficiency Pillar whitepaper (July 2023), Page 29, "PERF 5: How do you select your compute solution?": Under the "Networking characteristics" section, the whitepaper advises, "For workloads that require low network latency, high network throughput, or both, between nodes, you can use cluster placement groups." This confirms the best practice for the scenario's performance requirement.
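The recommended setup can be sketched as boto3-style parameters: a cluster placement group plus memory-optimized instances launched into it. The AMI ID, group name, instance type, and count are hypothetical; a real deployment would pass these dictionaries to `boto3.client("ec2").create_placement_group(...)` and `run_instances(...)`.

```python
# Hypothetical parameters for co-locating in-memory database nodes in a
# cluster placement group for lowest node-to-node latency.
placement_group = {
    "GroupName": "inmem-db-cluster",
    "Strategy": "cluster",  # pack instances close together in one AZ
}

run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "r6i.8xlarge",       # memory-optimized (R family)
    "MinCount": 4,
    "MaxCount": 4,
    # Launch every node into the same cluster placement group
    "Placement": {"GroupName": placement_group["GroupName"]},
}
```

Launching all nodes in a single request, as above, also reduces the chance of insufficient-capacity errors when EC2 tries to place them on the same underlying network segment.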
Question 32
Show Answer
B. Amazon SQS is a message queuing service, not a real-time data streaming service. Kinesis Data Streams is the purpose-built service for this high-throughput, ordered streaming use case.
D. This is incorrect because CloudWatch Logs subscription filters cannot send data directly to an SQS queue. Furthermore, a subscription filter must be created for each log group individually.
F. The requirement is to process and normalize logs in the central logging account. This option incorrectly proposes performing this function in the individual member accounts.
1. AWS Documentation, Amazon CloudWatch Logs User Guide: "Real-time processing of log data with subscriptions." This document explicitly states, "You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream... for custom processing, analysis, or loading to other systems." It details the use of Kinesis as a destination.
2. AWS Documentation, Amazon CloudWatch Logs User Guide: "Cross-account log data sharing with subscriptions." This section provides a step-by-step guide for the required setup. It specifies creating a destination Kinesis stream in the receiving account, creating an IAM role in the receiving account that CloudWatch Logs can assume to put data into the stream, and attaching a destination access policy that permits the sending accounts to subscribe.
3. AWS Documentation, AWS Lambda Developer Guide: "Using AWS Lambda with Amazon Kinesis." This guide describes the common event-driven pattern where "Lambda polls the stream periodically... and when it detects new records, it invokes your Lambda function by passing the new records as a payload." This confirms Lambda's role as a scalable processor for Kinesis streams.
4. AWS Documentation, Amazon Kinesis Data Streams Developer Guide: "What Is Amazon Kinesis Data Streams?" The introduction states, "Amazon Kinesis Data Streams is a scalable and durable real-time data streaming service... You can use Kinesis Data Streams for rapid and continuous data intake and aggregation." This validates its suitability for the high-volume, variable load requirement.
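On the sender side, each member account's subscription filter points at a CloudWatch Logs destination in the central logging account, which in turn fronts the Kinesis stream. A minimal parameter sketch, with hypothetical ARNs, account IDs, and log group names, is shown below; a real setup would pass this to `boto3.client("logs").put_subscription_filter(**filter_params)` in each member account.

```python
# Hypothetical sender-account parameters for a cross-account
# subscription filter. The account ID and names are placeholders.
LOGGING_ACCOUNT_ID = "222222222222"  # central logging account (placeholder)

filter_params = {
    "logGroupName": "/app/orders",
    "filterName": "to-central-kinesis",
    "filterPattern": "",  # an empty pattern forwards every log event
    # CloudWatch Logs destination in the logging account that fronts
    # the central Kinesis data stream
    "destinationArn": (
        f"arn:aws:logs:us-east-1:{LOGGING_ACCOUNT_ID}:destination:CentralLogs"
    ),
}
```

Note that for cross-account delivery the sender references a CloudWatch Logs *destination* ARN rather than the Kinesis stream ARN directly; the destination's access policy in the logging account authorizes the sending accounts.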
Question 33
Show Answer
A. EC2 Instance Savings Plans are too restrictive; they commit to a specific instance family and region, which violates the requirement for operational flexibility.
B. This option is incorrect for two reasons: Service Control Policies (SCPs) enforce tagging policies but do not apply tags, and EC2 Instance Savings Plans lack the required flexibility.
D. Service Control Policies (SCPs) are used to enforce permissions and constraints (e.g., requiring a tag), not to apply tags to resources directly.
1. AWS Savings Plans User Guide, "Overview of Savings Plans": This document explicitly contrasts the two types of Savings Plans. It states, "Compute Savings Plans provide the most flexibility... These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy... EC2 Instance Savings Plans... provide the lowest prices... in exchange for commitment to a specific instance family in a chosen Region." This directly supports the choice of Compute Savings Plans for flexibility.
2. AWS Billing and Cost Management User Guide, "Using cost allocation tags": This guide explains, "After you activate cost allocation tags, AWS uses the tags to organize your resource costs on your cost allocation report, making it easier for you to categorize and track your AWS costs." This validates the use of a tagging strategy for departmental cost visibility.
3. AWS Resource Groups and Tag Editor User Guide, "What Is Tag Editor?": This documentation describes the tool's function: "With Tag Editor, you can add, edit, or delete tags for multiple AWS resources at once." This confirms Tag Editor is an appropriate tool for implementing the tagging strategy.
4. AWS Organizations User Guide, "Service control policies (SCPs)": The documentation clarifies the function of SCPs: "SCPs are a type of organization policy that you can use to manage permissions in your organization... SCPs don't grant permissions." This confirms that SCPs cannot be used to apply tags, only to enforce policies that might require them.
5. AWS Well-Architected Framework, "Cost Optimization Pillar" (Whitepaper, Page 21): The whitepaper discusses purchasing options, stating, "Compute Savings Plans provide the most flexibility and help to reduce your costs... This automatically applies to any EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy." This reinforces that Compute Savings Plans are the best practice for flexible cost reduction.
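The tagging half of the solution can be sketched as a bulk-tagging call of the kind Tag Editor performs under the hood. The ARNs and tag key below are hypothetical; a real run would pass the dictionary to `boto3.client("resourcegroupstaggingapi").tag_resources(**tag_params)` and then activate the key as a cost allocation tag in the Billing console.

```python
# Hypothetical parameters for applying a department cost-allocation tag
# to multiple resources at once. ARNs and the tag key are placeholders.
tag_params = {
    "ResourceARNList": [
        "arn:aws:ec2:us-east-1:111111111111:instance/i-0123456789abcdef0",
        "arn:aws:ec2:us-east-1:111111111111:instance/i-0fedcba9876543210",
    ],
    # The key must afterward be activated as a cost allocation tag in
    # Billing and Cost Management before it appears in cost reports.
    "Tags": {"CostCenter": "marketing"},
}
```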
Question 34
Show Answer
A. IAM groups are containers for IAM users, not roles. Creating a group in a member account does not establish a trust relationship for the management account to access it.
B. An IAM policy defines permissions but does not grant them on its own. It must be attached to an identity (like a role) to be effective for cross-account access.
D. The OrganizationAccountAccessRole must be created in the account that needs to be managed (the member account), not in the account that is performing the management (the management account).
1. AWS Organizations User Guide: In the section on "Accessing and administering the member accounts in your organization," the guide specifies the process for invited accounts. It states, "When you invite an existing account to join your organization, AWS does not automatically create the OrganizationAccountAccessRole IAM role in the account. You must manually create the role..." This document details the steps, which include creating the role in the member account and establishing a trust policy for the management account. (See: AWS Organizations User Guide, section "Creating the OrganizationAccountAccessRole in an invited member account").
2. AWS Identity and Access Management (IAM) User Guide: The guide explains the fundamental mechanism for cross-account access. "You can use roles to delegate access to users or services that normally don't have access to your AWS resources... In this scenario, the account that owns the resources is the trusting account [member account] and the account that contains the users is the trusted account [management account]." This confirms the role must be in the member account. (See: AWS IAM User Guide, section "How to use an IAM role to delegate access across AWS accounts").
3. AWS Whitepaper - AWS Multiple Account Security Strategy: This whitepaper discusses best practices for multi-account environments. It reinforces the use of cross-account roles for centralized access and administration, stating, "To enable cross-account access, you create roles in the accounts you want to access (member accounts) and grant IAM principals in the accounts you want to grant access from (management or delegated administrator accounts) permissions to assume those roles." (See: AWS Multiple Account Security Strategy, section "Centralized access management").
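The trust policy that makes this work can be sketched as follows. It is created in the *member* account and names the management account as the trusted principal; the management account ID below is a placeholder. A real setup would pass this document to `boto3.client("iam").create_role(RoleName="OrganizationAccountAccessRole", AssumeRolePolicyDocument=json.dumps(trust_policy))` and attach an administrator permissions policy.

```python
# Hypothetical trust policy for OrganizationAccountAccessRole, created
# in the member account. The account ID is a placeholder.
MANAGEMENT_ACCOUNT_ID = "111111111111"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Trust principals in the management account to assume this role
        "Principal": {"AWS": f"arn:aws:iam::{MANAGEMENT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}
```

This direction of trust is exactly why options placing the role in the management account are wrong: the trusting (resource-owning) side is always the member account.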
Question 35
Show Answer
A. AWS CodeCommit is a source control service, not a package repository. Manually synchronizing packages would be operationally complex and is not the intended use of the service.
B. A NAT gateway provides general outbound internet access, which directly violates the requirement that the SageMaker instances remain isolated from the internet.
C. A NAT instance, like a NAT gateway, provides a route to the public internet, which contradicts the core security requirement of keeping the instances isolated.
1. AWS CodeArtifact User Guide, "Connect a CodeArtifact repository to a public repository": This document explains how to configure a CodeArtifact repository with an external connection to public repositories such as PyPI. It states, "When you connect a CodeArtifact repository to a public repository... CodeArtifact can fetch packages from the public repository on demand."
2. AWS CodeArtifact User Guide, "Using CodeArtifact with VPC endpoints": This section details how to use interface VPC endpoints to connect directly to CodeArtifact from within a VPC without traversing the internet. It notes, "You can improve the security of your build and deployment processes by configuring AWS PrivateLink for CodeArtifact. By creating interface VPC endpoints, you can connect to CodeArtifact from your VPC without sending traffic over the public internet."
3. AWS Documentation, "VPC endpoints": This documentation clarifies the purpose of interface VPC endpoints. "An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported AWS service or a VPC endpoint service." This confirms that traffic stays on the AWS network.
4. Amazon SageMaker Developer Guide, "Connect to SageMaker Through a VPC Interface Endpoint": This guide describes the pattern of using VPC endpoints to allow SageMaker resources in a private VPC to access AWS services without internet access, which is the same architectural pattern proposed in the correct answer for accessing CodeArtifact.
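The private connectivity piece can be sketched as boto3-style parameters for the two interface endpoints CodeArtifact requires: one for its control-plane API and one for package data. The VPC, subnet, and security group IDs are hypothetical placeholders; a real deployment would pass each dictionary to `boto3.client("ec2").create_vpc_endpoint(...)`.

```python
# Hypothetical parameters for the CodeArtifact interface VPC endpoints
# that keep SageMaker package downloads off the public internet.
common = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "PrivateDnsEnabled": True,  # resolve service DNS names privately
}

# CodeArtifact uses two endpoints: the API and the package repositories
endpoints = [
    {"ServiceName": "com.amazonaws.us-east-1.codeartifact.api", **common},
    {"ServiceName": "com.amazonaws.us-east-1.codeartifact.repositories",
     **common},
]
```

With both endpoints in place, pip on the isolated SageMaker instances can be pointed at the CodeArtifact repository, which in turn fetches packages from PyPI through its external connection, so no instance ever needs an internet route.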