Free Practice Test

Free SAP-C02 Exam Questions – 2025 Updated

Prepare smarter for the SAP-C02 exam with our free and accurate SAP-C02 exam questions – updated for 2025.

At Cert Empire, we are dedicated to providing the latest and most reliable exam questions for students preparing for the Amazon SAP-C02 Exam. To make studying easier, we’ve made sections of our SAP-C02 exam resources free for everyone. You can practice as much as you like with our free SAP-C02 practice test.

Question 1

A financial services company sells its software-as-a-service (SaaS) platform for application compliance to large global banks. The SaaS platform runs on AWS and uses multiple AWS accounts that are managed in an organization in AWS Organizations. The SaaS platform uses many AWS resources globally. For regulatory compliance, all API calls to AWS resources must be audited, tracked for changes, and stored in a durable and secure data store. Which solution will meet these requirements with the LEAST operational overhead?
Options
A: Create a new AWS CloudTrail trail. Use an existing Amazon S3 bucket in the organization's management account to store the logs. Deploy the trail to all AWS Regions. Enable MFA delete and encryption on the S3 bucket.
B: Create a new AWS CloudTrail trail in each member account of the organization. Create new Amazon S3 buckets to store the logs. Deploy the trail to all AWS Regions. Enable MFA delete and encryption on the S3 buckets.
C: Create a new AWS CloudTrail trail in the organization's management account. Create a new Amazon S3 bucket with versioning turned on to store the logs. Deploy the trail for all accounts in the organization. Enable MFA delete and encryption on the S3 bucket.
D: Create a new AWS CloudTrail trail in the organization's management account. Create a new Amazon S3 bucket to store the logs. Configure Amazon Simple Notification Service (Amazon SNS) to send log-file delivery notifications to an external management system that will track the logs. Enable MFA delete and encryption on the S3 bucket.
Correct Answer:
Create a new AWS CloudTrail trail in the organization's management account. Create a new Amazon S3 bucket with versioning turned on to store the logs. Deploy the trail for all accounts in the organization. Enable MFA delete and encryption on the S3 bucket.
Explanation
This solution meets all requirements with the least operational overhead. Creating an AWS CloudTrail trail in the management account and applying it to the entire organization (an "organization trail") centralizes logging configuration and management. This single trail captures API events from all member accounts, which is the most efficient approach. Storing logs in a new, dedicated Amazon S3 bucket is a security best practice. Enabling S3 Versioning directly addresses the requirement to track changes by preserving a complete history of log files, protecting against overwrites. Finally, enabling encryption and MFA delete on the S3 bucket ensures the logs are stored securely and durably, meeting regulatory compliance standards.
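For illustration, here is a minimal boto3 sketch of this configuration. The bucket name, trail name, and Region are hypothetical, and the bucket policy that grants CloudTrail write access is assumed to already be in place.

```python
import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

BUCKET = "org-cloudtrail-logs-example"  # hypothetical bucket name

# Dedicated log bucket with versioning and default encryption.
# (Add CreateBucketConfiguration when creating the bucket outside us-east-1.)
s3.create_bucket(Bucket=BUCKET)
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)
# MFA delete must be enabled separately by the root user with an MFA device.

# Organization trail in the management account, covering all Regions and member accounts.
cloudtrail.create_trail(
    Name="org-compliance-trail",
    S3BucketName=BUCKET,
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-compliance-trail")
```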
Why Incorrect Options are Wrong

A. A standard CloudTrail trail created in the management account will only log API calls for that single account, not for all member accounts in the organization.

B. Creating and managing a separate CloudTrail trail and S3 bucket in each member account creates maximum operational overhead, directly contradicting a key requirement of the question.

D. This option introduces unnecessary complexity with Amazon SNS and an external system. S3 Versioning is a simpler, built-in mechanism to track changes, resulting in lower operational overhead.

References

1. AWS CloudTrail User Guide, "Creating a trail for an organization": This document states, "You can create a trail in the management account that logs events for all AWS accounts in that organization. This is sometimes called an organization trail." This supports creating a single trail in the management account for minimal overhead.

2. AWS Organizations User Guide, "Enabling AWS CloudTrail in your organization": "When you create an organization trail, a trail with the name that you choose is created in every AWS account that belongs to your organization. This trail logs the activity from each account and delivers the log files to the Amazon S3 bucket that you specify." This confirms the centralized management and logging for all accounts.

3. Amazon S3 User Guide, "Using versioning in S3 buckets": "Versioning is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets." This directly addresses the requirement to track changes.

4. Amazon S3 User Guide, "Configuring MFA delete": "To provide an additional layer of security, you can configure a bucket to require multi-factor authentication (MFA) for any request to permanently delete an object version or change the versioning state of the bucket." This supports the security requirement.

Question 2

A company is migrating an application to the AWS Cloud. The application runs in an on-premises data center and writes thousands of images into a mounted NFS file system each night. After the company migrates the application, the company will host the application on an Amazon EC2 instance with a mounted Amazon Elastic File System (Amazon EFS) file system. The company has established an AWS Direct Connect connection to AWS. Before the migration cutover, a solutions architect must build a process that will replicate the newly created on-premises images to the EFS file system. What is the MOST operationally efficient way to replicate the images?
Options
A: Configure a periodic process to run the aws s3 sync command from the on-premises file system to Amazon S3. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
B: Deploy an AWS Storage Gateway file gateway with an NFS mount point. Mount the file gateway file system on the on-premises server. Configure a process to periodically copy the images to the mount point.
C: Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an S3 bucket by using public VIF. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
D: Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every 24 hours.
Correct Answer:
Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every 24 hours.
Explanation
AWS DataSync is a purpose-built, managed service designed to simplify and accelerate data transfers between on-premises storage systems and AWS storage services. It can directly transfer data from an on-premises NFS file system to an Amazon EFS file system. This approach provides a single, fully managed solution that automates data transfer, including scheduling, encryption, data integrity validation, and network optimization. By using DataSync over the existing Direct Connect connection with a private VIF and VPC endpoints, the company achieves a secure, high-performance, and operationally efficient replication process without needing intermediate storage like Amazon S3 or custom-coded solutions like AWS Lambda functions.
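A minimal boto3 sketch of the DataSync setup is shown below; the hostname, ARNs, and nightly schedule are hypothetical placeholders.

```python
import boto3

datasync = boto3.client("datasync")

# On-premises NFS export, reached through the already-deployed DataSync agent.
nfs_location = datasync.create_location_nfs(
    ServerHostname="nfs.onprem.example.com",  # hypothetical hostname
    Subdirectory="/images",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"]},
)

# Destination EFS file system inside the VPC, reached over Direct Connect with a private VIF.
efs_location = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-EXAMPLE",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-EXAMPLE",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE"],
    },
)

# Nightly task; each run transfers only the data that changed since the previous run.
datasync.create_task(
    SourceLocationArn=nfs_location["LocationArn"],
    DestinationLocationArn=efs_location["LocationArn"],
    Name="nightly-image-replication",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
)
```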
Why Incorrect Options are Wrong

A. This option introduces unnecessary complexity and operational overhead by requiring a multi-step process (NFS -> S3 -> EFS) and custom logic (Lambda function) instead of a single, managed service.

B. AWS Storage Gateway (File Gateway) primarily provides on-premises applications with file-based access to Amazon S3. It does not directly replicate data to EFS, making it an indirect and inefficient solution for this use case.

C. While this option uses DataSync, it directs the data to S3 first, requiring a second step with a Lambda function to move it to EFS. A direct DataSync transfer to EFS is far more operationally efficient.

References

1. AWS DataSync User Guide, "What is AWS DataSync?": This document explicitly states that DataSync is an online data transfer service that automates moving data between on-premises storage systems (like NFS) and AWS Storage services (like Amazon EFS). It highlights features like end-to-end security and data integrity, which contribute to operational efficiency.

Source: AWS Documentation, https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html, Section: "What is AWS DataSync?".

2. AWS DataSync User Guide, "Creating a location for Amazon EFS": This guide provides instructions for configuring an Amazon EFS file system as a destination location for a DataSync task, confirming the direct transfer capability from a source like NFS.

Source: AWS Documentation, https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html, Introduction section.

3. AWS DataSync User Guide, "Using AWS DataSync with AWS Direct Connect": This section details how to use DataSync over a Direct Connect connection. It recommends using a private virtual interface (VIF) and VPC endpoints for private, secure data transfer, which aligns with the most efficient and secure architecture.

Source: AWS Documentation, https://docs.aws.amazon.com/datasync/latest/userguide/datasync-direct-connect.html, Introduction section.

4. AWS Storage Blog, "Migrating storage with AWS DataSync": This official blog post describes common migration patterns and explicitly mentions the capability of DataSync to copy data between NFS shares and Amazon EFS file systems as a primary use case, reinforcing its suitability and efficiency for this scenario.

Source: AWS Blogs, https://aws.amazon.com/blogs/storage/migrating-storage-with-aws-datasync/, Paragraph 2.

Question 3

A company runs its application on Amazon EC2 instances and AWS Lambda functions. The EC2 instances experience a continuous and stable load. The Lambda functions experience a varied and unpredictable load. The application includes a caching layer that uses an Amazon MemoryDB for Redis cluster. A solutions architect must recommend a solution to minimize the company's overall monthly costs. Which solution will meet these requirements?
Options
A: Purchase an EC2 Instance Savings Plan to cover the EC2 instances. Purchase a Compute Savings Plan for Lambda to cover the minimum expected consumption of the Lambda functions. Purchase reserved nodes to cover the MemoryDB cache nodes.
B: Purchase a Compute Savings Plan to cover the EC2 instances. Purchase Lambda reserved concurrency to cover the expected Lambda usage. Purchase reserved nodes to cover the MemoryDB cache nodes.
C: Purchase a Compute Savings Plan to cover the entire expected cost of the EC2 instances, Lambda functions, and MemoryDB cache nodes.
D: Purchase a Compute Savings Plan to cover the EC2 instances and the MemoryDB cache nodes. Purchase Lambda reserved concurrency to cover the expected Lambda usage.
Correct Answer:
Purchase an EC2 Instance Savings Plan to cover the EC2 instances. Purchase a Compute Savings Plan for Lambda to cover the minimum expected consumption of the Lambda functions. Purchase reserved nodes to cover the MemoryDB cache nodes.
Explanation
This solution correctly applies the most effective cost-saving mechanism for each AWS service based on the described usage patterns. For the continuous and stable EC2 load, an EC2 Instance Savings Plan provides the highest discount by committing to a specific instance family in a region. For the varied and unpredictable Lambda load, a Compute Savings Plan offers flexibility and provides discounts on compute usage (including Lambda) for a committed hourly spend. For the MemoryDB caching layer, purchasing reserved nodes is the designated method to receive a significant discount over on-demand pricing by committing to a one- or three-year term. This combination maximizes savings across all three services.
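If the company wants to size these commitments from its actual usage, the Cost Explorer API can generate Savings Plans recommendations. A minimal boto3 sketch follows, assuming Cost Explorer is enabled in the account; the term, payment option, and node type are illustrative only.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Recommendation for the stable EC2 fleet (EC2 Instance Savings Plan).
ec2_sp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="EC2_INSTANCE_SP",
    TermInYears="THREE_YEARS",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

# Recommendation for the variable Lambda load (Compute Savings Plans cover Lambda usage).
compute_sp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

for rec in (ec2_sp, compute_sp):
    print(rec["SavingsPlansPurchaseRecommendation"].get("SavingsPlansPurchaseRecommendationSummary"))

# MemoryDB reserved nodes are purchased separately; they are not covered by Savings Plans.
memorydb = boto3.client("memorydb")
offerings = memorydb.describe_reserved_nodes_offerings(NodeType="db.r6g.large")
```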
Why Incorrect Options are Wrong

B: Lambda reserved concurrency is a feature for guaranteeing execution environments and preventing throttling; it is not a cost-saving mechanism and does not provide a discount on usage.

C: Compute Savings Plans do not apply to Amazon MemoryDB for Redis. MemoryDB has its own pricing model using reserved nodes for discounts, separate from the Savings Plans for compute services.

D: This option is incorrect for two reasons: Compute Savings Plans do not cover MemoryDB cache nodes, and Lambda reserved concurrency is not a cost-saving feature.

References

1. AWS Documentation - Savings Plans User Guide: "Savings Plans offer a flexible pricing model that provides savings on AWS usage. You can save up to 72 percent on your AWS compute workloads... EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72 percent in exchange for commitment to a specific instance family in a specific Region... Compute Savings Plans provide flexibility and help to reduce your costs by up to 66 percent... This automatically applies to EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy, and also applies to Fargate or Lambda usage." This confirms EC2 Instance SP for highest EC2 savings and Compute SP for Lambda.

2. AWS Documentation - Amazon MemoryDB for Redis Pricing: The official pricing page states, "With MemoryDB reserved nodes, you can save up to 55 percent over On-Demand node prices in exchange for a commitment to a one- or three-year term." This identifies reserved nodes as the correct cost-saving model for MemoryDB.

3. AWS Documentation - Lambda Developer Guide, "Configuring reserved concurrency": "Reserved concurrency creates a pool of requests that only a specific function can use... Reserving concurrency has the following effects... It is not a cost-saving feature." This explicitly states that reserved concurrency is for performance and availability, not for reducing costs.

Question 4

A company needs to monitor a growing number of Amazon S3 buckets across two AWS Regions. The company also needs to track the percentage of objects that are encrypted in Amazon S3. The company needs a dashboard to display this information for internal compliance teams. Which solution will meet these requirements with the LEAST operational overhead?
Options
A: Create a new S3 Storage Lens dashboard in each Region to track bucket and encryption metrics. Aggregate data from both Region dashboards into a single dashboard in Amazon QuickSight for the compliance teams.
B: Deploy an AWS Lambda function in each Region to list the number of buckets and the encryption status of objects. Store this data in Amazon S3. Use Amazon Athena queries to display the data on a custom dashboard in Amazon QuickSight for the compliance teams.
C: Use the S3 Storage Lens default dashboard to track bucket and encryption metrics. Give the compliance teams access to the dashboard directly in the S3 console.
D: Create an Amazon EventBridge rule to detect AWS CloudTrail events for S3 object creation. Configure the rule to invoke an AWS Lambda function to record encryption metrics in Amazon DynamoDB. Use Amazon QuickSight to display the metrics in a dashboard for the compliance teams.
Correct Answer:
Use the S3 Storage Lens default dashboard to track bucket and encryption metrics. Give the compliance teams access to the dashboard directly in the S3 console.
Explanation
Amazon S3 Storage Lens is a purpose-built analytics feature that provides organization-wide visibility into object storage usage and activity. The default S3 Storage Lens dashboard is automatically created at the account level, aggregating metrics from all AWS Regions. This dashboard includes key metrics such as total bucket count and the percentage of unencrypted objects, directly fulfilling the company's monitoring and compliance requirements. Providing the compliance team with IAM access to view this dashboard in the S3 console is the most direct approach, involving no custom development, data pipelines, or integration of multiple services, thereby representing the solution with the least operational overhead.
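To give the compliance teams read access to the default dashboard in the S3 console, an IAM policy along the lines of the following sketch can be attached to their role or group. The policy name is hypothetical; the actions shown are the Storage Lens read actions.

```python
import json
import boto3

iam = boto3.client("iam")

# Read-only access to S3 Storage Lens dashboards for the compliance team.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListStorageLensConfigurations",
                "s3:GetStorageLensConfiguration",
                "s3:GetStorageLensDashboard",
            ],
            "Resource": "*",
        }
    ],
}

iam.create_policy(
    PolicyName="ComplianceStorageLensReadOnly",  # hypothetical name
    PolicyDocument=json.dumps(policy),
)
```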
Why Incorrect Options are Wrong

A. Creating new dashboards per region and aggregating in QuickSight is redundant and adds unnecessary operational overhead, as the default S3 Storage Lens dashboard already aggregates data across all regions.

B. This custom solution using Lambda, S3, and Athena requires significant development, deployment, and maintenance effort, which is the opposite of "least operational overhead" compared to a managed service.

D. An event-driven approach with CloudTrail, EventBridge, and Lambda is complex to set up and maintain. It primarily tracks new events, making it less suitable for a comprehensive, periodic overview of all existing objects.

References

1. AWS Documentation: Amazon S3 User Guide - Amazon S3 Storage Lens.

Section: "What is Amazon S3 Storage Lens?" states, "S3 Storage Lens aggregates your metrics and displays the information in the Dashboards section of the Amazon S3 console."

Section: "S3 Storage Lens dashboards" explains, "S3 Storage Lens provides a default dashboard that is named default-account-dashboard. This dashboard is preconfigured by S3 to help you visualize summarized storage usage and activity trends across your entire account." This confirms it is multi-region by default.

2. AWS Documentation: Amazon S3 User Guide - S3 Storage Lens metrics glossary.

Section: "Data protection metrics" lists UnencryptedObjectCount and TotalObjectCount, which are used to calculate the percentage of unencrypted objects displayed on the dashboard.

Section: "Storage summary metrics" lists BucketCount, confirming this metric is available.

3. AWS Documentation: Amazon S3 User Guide - Using the S3 Storage Lens default dashboard.

This section details that the default dashboard is available at no additional cost and is updated daily, reinforcing the low operational overhead. It states, "The default dashboard is automatically created for you when you first visit the S3 Storage Lens dashboards page in the Amazon S3 console."

Question 5

A company is planning to migrate an application to AWS. The application runs as a Docker container and uses an NFS version 4 file share. A solutions architect must design a secure and scalable containerized solution that does not require provisioning or management of the underlying infrastructure. Which solution will meet these requirements?
Options
A: Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type. Use Amazon Elastic File System (Amazon EFS) for shared storage. Reference the EFS file system ID, container mount point, and EFS authorization IAM role in the ECS task definition.
B: Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type. Use Amazon FSx for Lustre for shared storage. Reference the FSx for Lustre file system ID, container mount point, and FSx for Lustre authorization IAM role in the ECS task definition.
C: Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type and auto scaling turned on. Use Amazon Elastic File System (Amazon EFS) for shared storage. Mount the EFS file system on the ECS container instances. Add the EFS authorization IAM role to the EC2 instance profile.
D: Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type and auto scaling turned on. Use Amazon Elastic Block Store (Amazon EBS) volumes with Multi-Attach enabled for shared storage. Attach the EBS volumes to ECS container instances. Add the EBS authorization IAM role to an EC2 instance profile.
Correct Answer:
Deploy the application containers by using Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type. Use Amazon Elastic File System (Amazon EFS) for shared storage. Reference the EFS file system ID, container mount point, and EFS authorization IAM role in the ECS task definition.
Explanation
The solution requires a serverless container platform and shared storage compatible with NFS. AWS Fargate is a serverless compute engine for containers that allows you to run Amazon ECS tasks without managing the underlying EC2 instances, fulfilling the serverless requirement. Amazon EFS is a fully managed, scalable file storage service that uses the NFSv4 protocol, directly matching the application's existing dependency. ECS tasks running on Fargate can mount EFS file systems by referencing the file system ID and mount point within the task definition, providing persistent, shared storage for the containers. This combination securely meets all the specified requirements.
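A minimal boto3 sketch of the corresponding Fargate task definition is shown below; the family name, file system ID, role ARNs, and image URI are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

# Fargate task definition that mounts the shared EFS file system (NFSv4) into the container.
ecs.register_task_definition(
    family="legacy-dotnet-app",  # hypothetical names and ARNs throughout
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    taskRoleArn="arn:aws:iam::111122223333:role/app-efs-access-role",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    volumes=[
        {
            "name": "shared-files",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-EXAMPLE",
                "transitEncryption": "ENABLED",
                "authorizationConfig": {"iam": "ENABLED"},
            },
        }
    ],
    containerDefinitions=[
        {
            "name": "app",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/legacy-app:latest",
            "essential": True,
            "mountPoints": [{"sourceVolume": "shared-files", "containerPath": "/mnt/shared"}],
        }
    ],
)
```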
Why Incorrect Options are Wrong

B: Amazon FSx for Lustre is a high-performance file system designed for workloads like HPC and machine learning, not for general-purpose NFS applications. EFS is the more appropriate service.

C: This option uses the Amazon EC2 launch type for ECS, which violates the requirement to not provision or manage underlying infrastructure. The user is responsible for the EC2 container instances.

D: This uses the EC2 launch type, which is not serverless. Additionally, EBS Multi-Attach provides shared block storage, not a file system like NFS, and requires a cluster-aware file system to manage access.

References

1. AWS Fargate Documentation, "What is AWS Fargate?": "AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)." This supports the serverless compute requirement.

2. Amazon ECS Developer Guide, "Amazon EFS volumes": "With Amazon EFS, the storage capacity is elastic... Your Amazon ECS tasks running on both Fargate and Amazon EC2 instances can use EFS. ... To use Amazon EFS volumes with your containers, you must define the volume and mount point in your task definition." This confirms the integration method described in option A.

3. Amazon Elastic File System User Guide, "What is Amazon Elastic File System?": "Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget, elastic file system... It is built to scale on demand to petabytes without disrupting applications... It supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol." This confirms EFS as the correct NFS-compatible storage solution.

4. Amazon FSx for Lustre User Guide, "What is Amazon FSx for Lustre?": "Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. ... The high-performance file system is optimized for workloads such as machine learning, high performance computing (HPC)..." This distinguishes its use case from the general-purpose need in the question.

Question 6

A scientific company needs to process text and image data from an Amazon S3 bucket. The data is collected from several radar stations during a live, time-critical phase of a deep space mission. The radar stations upload the data to the source S3 bucket. The data is prefixed by radar station identification number. The company created a destination S3 bucket in a second account. Data must be copied from the source S3 bucket to the destination S3 bucket to meet a compliance objective. The replication occurs through the use of an S3 replication rule to cover all objects in the source S3 bucket. One specific radar station is identified as having the most accurate data. Data replication at this radar station must be monitored for completion within 30 minutes after the radar station uploads the objects to the source S3 bucket. What should a solutions architect do to meet these requirements?
Options
A: Set up an AWS DataSync agent to replicate the prefixed data from the source S3 bucket to the destination S3 bucket. Select to use all available bandwidth on the task, and monitor the task to ensure that it is in the TRANSFERRING status. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
B: In the second account, create another S3 bucket to receive data from the radar station with the most accurate data. Set up a new replication rule for this new S3 bucket to separate the replication from the other radar stations. Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
C: Enable Amazon S3 Transfer Acceleration on the source S3 bucket, and configure the radar station with the most accurate data to use the new endpoint. Monitor the S3 destination bucket's TotalRequestLatency metric. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.
D: Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
Correct Answer:
Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
Explanation
The most effective solution is to leverage S3 Replication Time Control (S3 RTC), a feature specifically designed for predictable, time-bound replication. By creating a new, more specific replication rule that filters on the prefix of the critical radar station and enabling S3 RTC, the company can meet the 30-minute replication requirement, which is well within the 15-minute Service Level Agreement (SLA) provided by S3 RTC. S3 RTC also provides replication metrics that can be monitored in Amazon CloudWatch. An Amazon EventBridge rule can be configured to watch for S3 replication events (e.g., s3:Replication:OperationFailedReplication) or CloudWatch metrics (e.g., ReplicationLatency) and trigger an alert if the replication time exceeds the desired threshold, fulfilling the monitoring requirement.
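A minimal boto3 sketch follows; the bucket names, role ARN, station prefix, and account ID are hypothetical. Note that put_bucket_replication replaces the entire replication configuration, so in practice the existing all-objects rule must be included alongside the new prefix-filtered rule.

```python
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

SOURCE_BUCKET = "radar-source-bucket"  # hypothetical names throughout
DEST_BUCKET_ARN = "arn:aws:s3:::radar-destination-bucket"
RULE_ID = "station-42-rtc"

# Prefix-filtered rule with S3 Replication Time Control (RTC) and replication metrics enabled.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": RULE_ID,
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": "station-42/"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": DEST_BUCKET_ARN,
                    "Account": "444455556666",
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            }
        ],
    },
)

# Alarm if replication latency for this rule approaches the 30-minute objective.
cloudwatch.put_metric_alarm(
    AlarmName="station-42-replication-latency",
    Namespace="AWS/S3",
    MetricName="ReplicationLatency",
    Dimensions=[
        {"Name": "SourceBucket", "Value": SOURCE_BUCKET},
        {"Name": "DestinationBucket", "Value": "radar-destination-bucket"},
        {"Name": "RuleId", "Value": RULE_ID},
    ],
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1800,  # seconds (30 minutes)
    ComparisonOperator="GreaterThanThreshold",
)
```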
Why Incorrect Options are Wrong

A. AWS DataSync is a data transfer service, not the native S3 replication feature already in use. Introducing it would be an unnecessary architectural change and is not the intended tool for this specific use case.

B. Replication rules are configured on the source bucket. Creating a new destination bucket does not simplify or solve the problem of monitoring a subset of objects from the single source bucket.

C. Amazon S3 Transfer Acceleration speeds up object uploads to an S3 bucket from clients over the public internet, not the replication process between S3 buckets within the AWS network.

References

1. Amazon S3 Developer Guide - Replicating objects using S3 Replication Time Control (S3 RTC): "S3 Replication Time Control (S3 RTC) helps you meet compliance or business requirements for data replication by providing a predictable replication time. S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes of upload." This document also details how to enable S3 RTC in a replication rule.

2. Amazon S3 Developer Guide - Replication configuration: This section explains how to create replication rules and specifies that a rule can apply to all objects or a subset. "To select a subset of objects, you can specify a key name prefix, one or more object tags, or both in the rule." This supports the use of a prefix-based filter.

3. Amazon S3 Developer Guide - Monitoring replication with Amazon S3 event notifications: "You can use Amazon S3 event notifications to receive notifications for S3 Replication Time Control (S3 RTC) events... For example, you can set up an event notification for the s3:Replication:OperationMissedThreshold event to be notified when an object eligible for S3 RTC replication doesn't replicate in 15 minutes." This confirms the monitoring and alerting capability via EventBridge.

4. Amazon S3 Developer Guide - Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration: "Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket." This clarifies that its purpose is for client-to-bucket transfers, not inter-bucket replication.

Question 7

A company is migrating a legacy application from an on-premises data center to AWS. The application consists of a single application server and a Microsoft SQL Server database server. Each server is deployed on a VMware VM that consumes 500 TB of data across multiple attached volumes. The company has established a 10 Gbps AWS Direct Connect connection from the closest AWS Region to its on-premises data center. The Direct Connect connection is not currently in use by other services. Which combination of steps should a solutions architect take to migrate the application with the LEAST amount of downtime? (Choose two.)
Options
A: Use an AWS Server Migration Service (AWS SMS) replication job to migrate the database server VM to AWS.
B: Use VM Import/Export to import the application server VM.
C: Export the VM images to an AWS Snowball Edge Storage Optimized device.
D: Use an AWS Server Migration Service (AWS SMS) replication job to migrate the application server VM to AWS.
E: Use an AWS Database Migration Service (AWS DMS) replication instance to migrate the database to an Amazon RDS DB instance.
Correct Answer:
Use an AWS Server Migration Service (AWS SMS) replication job to migrate the database server VM to AWS.
Use an AWS Server Migration Service (AWS SMS) replication job to migrate the application server VM to AWS.
Explanation
The primary goal is to migrate two very large (500 TB each) VMware VMs with the least amount of downtime, using an available 10 Gbps Direct Connect link. AWS Server Migration Service (SMS) is the ideal tool for this "lift-and-shift" migration. SMS automates the migration of on-premises VMs to AWS by creating replication jobs. It performs an initial full replication of the server volumes followed by periodic, incremental replications of changes. This process occurs while the source servers remain online. The final cutover requires only a very short downtime to perform the last incremental sync before launching the new EC2 instances. Using SMS for both the application server (D) and the database server (A) provides a consistent, low-risk, and minimally disruptive migration strategy.
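A minimal boto3 sketch of the replication jobs is shown below; the server IDs are placeholders for the entries returned from the SMS server catalog (for example, via get_servers after import_server_catalog).

```python
import datetime
import boto3

sms = boto3.client("sms")

# One replication job per VM; SMS performs an initial full copy, then incremental syncs.
for server_id in ("s-appserver-example", "s-dbserver-example"):  # hypothetical catalog IDs
    sms.create_replication_job(
        serverId=server_id,
        seedReplicationTime=datetime.datetime.utcnow(),
        frequency=12,                 # hours between incremental replication runs
        numberOfRecentAmisToKeep=3,
        encrypted=True,
    )
```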
Why Incorrect Options are Wrong

B. VM Import/Export is an offline process. It requires exporting the entire 500 TB VM image and then uploading it, which would cause extensive downtime, violating the core requirement.

C. An AWS Snowball device is for offline data transfer. While suitable for large data volumes, it is not the optimal choice for minimizing downtime when a high-bandwidth (10 Gbps) network connection is available for online, incremental replication.

E. AWS Database Migration Service (DMS) migrates the database data, not the entire server VM. This would involve re-platforming to a service like Amazon RDS, which adds complexity and risk compared to a direct lift-and-shift of the existing server using SMS.

References

1. AWS Server Migration Service (SMS) User Guide: "AWS Server Migration Service (AWS SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations." (Source: AWS Server Migration Service User Guide, "What Is AWS Server Migration Service?")

2. AWS Server Migration Service (SMS) User Guide, "How AWS Server Migration Service Works": "AWS SMS incrementally replicates your server VMs as Amazon Machine Images (AMIs)... The incremental replication transfers only the delta changes to AWS, which results in faster replication times and minimum network bandwidth consumption." This directly supports the minimal downtime requirement.

3. AWS Documentation, "VM Import/Export, What Is VM Import/Export?": "VM Import/Export enables you to easily import virtual machine (VM) images from your existing virtualization environment to Amazon EC2..." The process described is a one-time import of a static image, not a continuous replication of a live server, making it unsuitable for minimal downtime scenarios.

4. AWS Database Migration Service (DMS) User Guide, "What is AWS Database Migration Service?": "AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime..." While DMS minimizes downtime for the database data, it does not migrate the server OS or configuration, making SMS a better fit for a complete server lift-and-shift.

Question 8

A company runs applications in hundreds of production AWS accounts. The company uses AWS Organizations with all features enabled and has a centralized backup operation that uses AWS Backup. The company is concerned about ransomware attacks. To address this concern, the company has created a new policy that all backups must be resilient to breaches of privileged-user credentials in any production account. Which combination of steps will meet this new requirement? (Select THREE.)
Options
A: Implement cross-account backup with AWS Backup vaults in designated non-production accounts.
B: Add an SCP that restricts the modification of AWS Backup vaults.
C: Implement AWS Backup Vault Lock in compliance mode.
D: Configure the backup frequency, lifecycle, and retention period to ensure that at least one backup always exists in the cold tier.
E: Configure AWS Backup to write all backups to an Amazon S3 bucket in a designated non-production account. Ensure that the S3 bucket has S3 Object Lock enabled.
F: Implement least privilege access for the IAM service role that is assigned to AWS Backup.
Correct Answer:
Implement cross-account backup with AWS Backup vaults in designated non-production accounts.
Add an SCP that restricts the modification of AWS Backup vaults.
Implement AWS Backup Vault Lock in compliance mode.
Explanation
This combination creates a multi-layered, defense-in-depth strategy against ransomware and insider threats. 1. Cross-account backup (A) isolates backups into a dedicated, non-production account. This segregation is the first line of defense, as a compromised privileged user in a production account lacks the credentials to access or manage resources in the separate backup account. 2. AWS Backup Vault Lock in compliance mode (C) makes the recovery points within the vault immutable (Write-Once-Read-Many, or WORM). Once locked, no user, including the root user of the backup account, can delete the backups or shorten the retention period until it expires. 3. Service Control Policies (SCPs) (B) act as organizational guardrails, preventing users in production accounts—even privileged ones—from altering or disabling the backup policies that send data to the central, locked vault.
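A minimal boto3 sketch of the vault-lock portion, run in the designated non-production backup account, is shown below. The vault name and retention values are hypothetical; the cross-account copy rule and the SCP are configured separately.

```python
import boto3

backup = boto3.client("backup")  # run in the designated non-production backup account

# Central vault that receives cross-account copies from the production accounts.
backup.create_backup_vault(BackupVaultName="central-ransomware-vault")

# Compliance-mode lock: after the cooling-off period expires, no user (including root)
# can delete recovery points or shorten the retention window.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="central-ransomware-vault",
    MinRetentionDays=30,
    MaxRetentionDays=365,
    ChangeableForDays=3,
)
```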
Why Incorrect Options are Wrong

D. Moving backups to a cold tier is a cost and lifecycle management strategy; it does not provide protection against deletion commands from a privileged user.

E. AWS Backup natively uses backup vaults for storage. While these vaults use Amazon S3, you don't configure Backup to write directly to a user-managed S3 bucket with Object Lock; you use the integrated AWS Backup Vault Lock feature.

F. Implementing least privilege for the backup role is a standard security best practice but is insufficient protection against an already compromised privileged user who can alter IAM roles and policies.

References

1. AWS Backup Developer Guide, "Security in AWS Backup": The section "Resilience" outlines best practices against ransomware, stating: "To protect your backups from inadvertent or malicious activity... we recommend that you copy your backups to accounts that are isolated from your production accounts... You can also use AWS Backup Vault Lock to make your backups immutable." This supports options A and C.

2. AWS Backup Developer Guide, "Protecting backups from manual deletion": This section details AWS Backup Vault Lock. It specifies, "In compliance mode, a vault lock can't be disabled or deleted by any user or by AWS. The retention period can't be shortened." This confirms the immutability provided by option C.

3. AWS Organizations User Guide, "Service control policies (SCPs)": The guide explains, "SCPs are a type of organization policy that you can use to manage permissions in your organization... SCPs offer central control over the maximum available permissions for all accounts in your organization," including restricting privileged users. This supports using an SCP (Option B) as a guardrail.

4. AWS Security Blog, "How to help protect your backups from ransomware with AWS Backup": This article explicitly recommends a three-pronged strategy: "1. Centralize and segregate your backups into a dedicated backup account. 2. Make your backups immutable by using Backup Vault Lock. 3. Secure your backup account with preventative controls [such as SCPs]." This directly validates the combination of A, B, and C.

Question 9

A company is expanding. The company plans to separate its resources into hundreds of different AWS accounts in multiple AWS Regions. A solutions architect must recommend a solution that denies access to any operations outside of specifically designated Regions. Which solution will meet these requirements?
Options
A: Create IAM roles for each account. Create IAM policies with conditional allow permissions that include only approved Regions for the accounts.
B: Create an organization in AWS Organizations. Create IAM users for each account. Attach a policy to each user to block access to Regions where an account cannot deploy infrastructure.
C: Launch an AWS Control Tower landing zone. Create OUs and attach SCPs that deny access to run services outside of the approved Regions.
D: Enable AWS Security Hub in each account. Create controls to specify the Regions where an account can deploy infrastructure.
Correct Answer:
Launch an AWS Control Tower landing zone. Create OUs and attach SCPs that deny access to run services outside of the approved Regions.
Explanation
This solution leverages AWS Control Tower to establish a well-architected, multi-account environment, which is ideal for managing hundreds of accounts. Control Tower uses AWS Organizations to group accounts into Organizational Units (OUs). The core of the solution is the use of Service Control Policies (SCPs). An SCP can be attached to an OU to enforce a preventative guardrail that denies API actions outside of specified AWS Regions. This is achieved by creating a Deny policy that checks the aws:RequestedRegion condition key. This approach is centrally managed, highly scalable, and ensures that even administrators in member accounts cannot bypass the regional restrictions.
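A minimal boto3 sketch of such an SCP, created and attached from the management account, is shown below. The Region list, OU ID, and the exempted global services are placeholders modeled on the AWS documentation example.

```python
import json
import boto3

org = boto3.client("organizations")  # run in the management account

# Deny every action outside the approved Regions, with exemptions for global services.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "cloudfront:*",
                "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "us-west-2"]}
            },
        }
    ],
}

policy = org.create_policy(
    Content=json.dumps(scp),
    Name="ApprovedRegionsOnly",
    Description="Deny operations outside designated Regions",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exampleid",  # hypothetical OU ID
)
```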
Why Incorrect Options are Wrong

A. Managing IAM roles and policies individually across hundreds of accounts is not scalable and lacks the strong, centralized enforcement provided by SCPs.

B. Attaching policies to individual IAM users across hundreds of accounts is operationally complex and does not scale effectively for an organization-wide requirement.

D. AWS Security Hub is a detective control service used for monitoring compliance and aggregating security findings; it does not prevent or deny actions.

References

1. AWS Organizations User Guide, "Service control policies (SCPs)": "SCPs are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization... SCPs are powerful because they affect all users, including the root user, for an account." This document also provides an example SCP to "Deny access to AWS based on the requested AWS Region".

2. AWS Control Tower User Guide, "How guardrails work": "Preventive guardrails are enforced using service control policies (SCPs)... A preventive guardrail ensures that your accounts maintain compliance, because it disallows actions that lead to policy violations. For example, the guardrail Disallow changes to AWS Config rules set up by AWS Control Tower prevents any IAM user or role from making changes to the AWS Config rules that are created by AWS Control Tower." This demonstrates the preventative nature of controls implemented via SCPs.

3. AWS Identity and Access Management User Guide, "AWS global condition context keys": The documentation for the aws:RequestedRegion key states, "Use this key to compare the Region that is specified in the request with the Region that is specified in the policy." This is the specific key used in an SCP to enforce regional restrictions.

4. AWS Security Hub User Guide, "What is AWS Security Hub?": "AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation." This confirms its role as a monitoring and detection service, not a preventative one.

Question 10

A company is migrating its legacy .NET workload to AWS. The company has a containerized setup that includes a base container image. The base image is tens of gigabytes in size because of legacy libraries and other dependencies. The company has images for custom developed components that are dependent on the base image. The company will use Amazon Elastic Container Registry (Amazon ECR) as part of its solution on AWS. Which solution will provide the LOWEST container startup time on AWS?
Options
A: Use Amazon ECR to store the base image and the images for the custom developed components. Use Amazon Elastic Container Service (Amazon ECS) onAWS Fargate to run the workload.
B: Use Amazon ECR to store the base image and the images for the custom developed components. Use AWS App Runner to run the workload.
C: Use Amazon ECR to store the images for the custom developed components. Create an AMI that contains the base image. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 instances that are based on the AMI to run the workload.
D: Use Amazon ECR to store the images for the custom developed components. Create an AMI that contains the base image. Use Amazon Elastic Kubernetes Service (Amazon EKS) on AWS Fargate with the AMI to run the workload.
Correct Answer:
Use Amazon ECR to store the images for the custom developed components. Create an AMI that contains the base image. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 instances that are based on the AMI to run the workload.
Explanation
The primary challenge is the "tens of gigabytes" base container image, which will cause significant delays if pulled from a registry at runtime. The most effective strategy to minimize container startup time is to pre-load this large, static base image onto the compute nodes. Option C achieves this by baking the base image into a custom Amazon Machine Image (AMI). When Amazon EC2 instances for the Amazon ECS cluster are launched from this AMI, the large base image is already present on the local disk. Consequently, when an ECS task starts, the container runtime only needs to pull the much smaller, custom component images from Amazon ECR. This drastically reduces the network I/O and data transfer at launch, providing the lowest startup time.
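One way to bake the pre-pulled base image into a custom AMI is to pull it on a running ECS-optimized build instance and then image that instance. A minimal boto3 sketch with hypothetical IDs follows; in practice, wait for the SSM command to finish (for example with get_command_invocation) before creating the AMI.

```python
import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

BUILDER_INSTANCE = "i-0123456789abcdef0"  # hypothetical ECS-optimized build instance

# Pre-pull the large base image so its layers are cached on the instance's local disk.
ssm.send_command(
    InstanceIds=[BUILDER_INSTANCE],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "docker pull 111122223333.dkr.ecr.us-east-1.amazonaws.com/legacy-base:latest"
        ]
    },
)

# Bake the instance (with the cached base layers) into a custom AMI for the ECS cluster.
ec2.create_image(
    InstanceId=BUILDER_INSTANCE,
    Name="ecs-legacy-base-preloaded",
    Description="ECS-optimized AMI with the legacy base container image pre-pulled",
)
```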
Why Incorrect Options are Wrong

A. Using AWS Fargate requires pulling the entire image, including the massive base layer, from Amazon ECR for every new task, which will result in very long startup times.

B. AWS App Runner is a fully managed service, often built on Fargate, and would face the same performance bottleneck of pulling the large image from the registry at startup.

D. This option is technically invalid. AWS Fargate is a serverless compute option where AWS manages the underlying infrastructure; you cannot specify a custom AMI for Fargate nodes.

References

1. AWS Compute Blog, "Speeding up container-based application launches with image pre-caching on Amazon ECS": This article discusses strategies for reducing container launch times. It explicitly states, "For EC2 launch type, you can create a custom AMI with container images pre-pulled on the instance. This is the most effective way to reduce image pull latency..." This directly validates the approach in option C.

2. AWS Documentation, "Amazon ECS-optimized AMIs": This documentation, while focusing on the standard AMIs, provides the basis for customization. It notes, "You can also create your own custom AMI that meets the Amazon ECS AMI specification." This confirms that creating a custom AMI with pre-loaded software (like a container base image) is a standard and supported practice for ECS on EC2.

3. AWS Documentation, "AWS Fargate": The official documentation describes Fargate as a technology that "removes the need to provision and manage servers." This serverless model means users do not have access to the underlying instances to customize the AMI, which invalidates option D and highlights the performance issue in options A and B.

4. AWS Documentation, "Amazon EKS on AWS Fargate": In the considerations section, the documentation states, "You don't need to... update AMIs." This confirms that for EKS on Fargate, custom AMIs are not a feature, making the solution proposed in option D impossible to implement.

Question 11

A company has an application that uses an Amazon Aurora PostgreSQL DB cluster for the application's database. The DB cluster contains one small primary instance and three larger replica instances. The application runs on an AWS Lambda function. The application makes many short-lived connections to the database's replica instances to perform read-only operations. During periods of high traffic, the application becomes unreliable and the database reports that too many connections are being established. The frequency of high-traffic periods is unpredictable. Which solution will improve the reliability of the application?
Options
A: Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the proxy. Update the Lambda function to connect to the proxy endpoint.
B: Increase the max_connections setting on the DB cluster's parameter group. Reboot all the instances in the DB cluster. Update the Lambda function to connect to the DB cluster endpoint.
C: Configure instance scaling for the DB cluster to occur when the DatabaseConnections metric is close to the max_connections setting. Update the Lambda function to connect to the Aurora reader endpoint.
D: Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the Aurora Data API on the proxy. Update the Lambda function to connect to the proxy endpoint.
Correct Answer:
Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the proxy. Update the Lambda function to connect to the proxy endpoint.
Explanation
The core issue is connection exhaustion caused by a highly concurrent, serverless application (AWS Lambda) making many short-lived connections. Amazon RDS Proxy is specifically designed to solve this problem. It establishes and manages a pool of database connections. Lambda functions connect to the proxy, which then serves requests using the pooled connections, significantly reducing the number of direct connections opened on the database instances. This prevents the "too many connections" error and improves reliability. Creating a read-only endpoint for the proxy correctly routes the application's read-only queries to the Aurora replica instances, aligning with the described architecture.
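A minimal boto3 sketch, with hypothetical names and ARNs, is shown below; the Lambda function would then use the read-only endpoint's address as its database host.

```python
import boto3

rds = boto3.client("rds")

# Proxy in front of the Aurora PostgreSQL cluster; credentials come from Secrets Manager.
rds.create_db_proxy(
    DBProxyName="app-read-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[
        {
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:app-db-EXAMPLE",
            "IAMAuth": "DISABLED",
        }
    ],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-EXAMPLE1", "subnet-EXAMPLE2"],
)

# Register the Aurora cluster as the proxy target.
rds.register_db_proxy_targets(
    DBProxyName="app-read-proxy",
    DBClusterIdentifiers=["app-aurora-cluster"],
)

# Read-only endpoint that routes pooled connections to the reader instances.
rds.create_db_proxy_endpoint(
    DBProxyName="app-read-proxy",
    DBProxyEndpointName="app-read-proxy-ro",
    VpcSubnetIds=["subnet-EXAMPLE1", "subnet-EXAMPLE2"],
    TargetRole="READ_ONLY",
)
```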
Why Incorrect Options are Wrong

B: Increasing max_connections consumes more database memory and is not a scalable, long-term solution for inefficient connection management from a serverless application.

C: Adding more replica instances (instance scaling) does not solve the core problem of connection exhaustion; it only adds more instances that can also run out of connections.

D: This option incorrectly combines two separate services. Amazon RDS Proxy and the Aurora Data API are different solutions for connection management; you cannot configure a Data API endpoint on an RDS Proxy.

References

1. AWS Documentation, Amazon RDS User Guide: "Managing connections with Amazon RDS Proxy". This document states, "RDS Proxy allows applications to pool and share connections established with the database. This improves database efficiency and application scalability... This approach is especially useful for serverless applications that have many short-lived connections."

2. AWS Documentation, Amazon RDS User Guide: "Using Amazon RDS Proxy with AWS Lambda". This guide explicitly details the problem scenario: "A Lambda function can establish a large number of simultaneous connections... This large number of connections can overwhelm the database... With RDS Proxy, your Lambda function can reach high concurrency levels without exhausting database connections."

3. AWS Documentation, Amazon RDS User Guide: "Overview of RDS Proxy endpoints". This section explains the functionality of proxy endpoints, including how custom read-only endpoints can be created to connect to the reader instances in a cluster. It states, "For a reader farm, you can associate a read-only endpoint with the proxy. This way, your proxy can connect to the reader DB instances in a multi-AZ DB cluster."

4. AWS Documentation, Amazon Aurora User Guide: "Parameter groups for Aurora DB clusters". The documentation for the max_connections parameter notes that its default value is derived from the DBInstanceClassMemory variable, illustrating the link between connection count and instance memory resources, which supports why simply increasing it (Option B) is not an optimal solution.

Question 12

A company is migrating its infrastructure to the AWS Cloud. The company must comply with a variety of regulatory standards for different projects. The company needs a multi-account environment. A solutions architect needs to prepare the baseline infrastructure. The solution must provide a consistent baseline of management and security, but it must allow flexibility for different compliance requirements within various AWS accounts. The solution also needs to integrate with the existing on-premises Active Directory Federation Services (AD FS) server. Which solution meets these requirements with the LEAST amount of operational overhead?
Options
A: Create an organization in AWS Organizations. Create a single SCP for least privilege access across all accounts. Create a single OU for all accounts. Configure an IAM identity provider for federation with the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with conformance packs for all accounts.
B: Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS IAM Identity Center (AWS Single Sign-On) to the on-premises AD FS server.
C: Create an organization in AWS Organizations. Create SCPs for least privilege access. Create an OU structure, and use it to group AWS accounts. Connect AWS IAM Identity Center (AWS Single Sign-On) to the on-premises AD FS server. Configure a central logging account with a defined process for log-generating services to send log events to the central account. Enable AWS Config in the central account with aggregators and conformance packs.
D: Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Configure an IAM identity provider for federation with the on-premises AD FS server.
Correct Answer:
Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS IAM Identity Center (AWS Single Sign-On) to the on-premises AD FS server.
Explanation
AWS Control Tower automates creation of a governed multi-account landing zone that applies mandatory and elective guardrails (implemented as SCPs and AWS Config rules) across OUs while still allowing account-level flexibility for additional compliance controls. Control Tower also provisions centralized logging and AWS Config aggregation for all enrolled accounts, removing the need to build these services manually and thereby minimizing operational overhead. IAM Identity Center (AWS SSO), which Control Tower deploys by default, supports external SAML-based identity providers; it can be connected to the existing on-premises AD FS to provide seamless federated access.
Why Incorrect Options are Wrong

A. Single SCP and single OU cannot address differing compliance needs; manual Config and logging setup increases operational effort compared with Control Tower’s automated landing-zone deployment.

C. Manually designing OUs, SCPs, Config aggregators, and logging meets requirements but requires continuous custom maintenance, producing higher operational overhead than Control Tower’s managed solution.

D. Pairing Control Tower with per-account IAM SAML identity providers requires manual federation setup and lifecycle management in every account, forfeiting the low-overhead benefit of IAM Identity Center.

References

1. AWS Control Tower User Guide, “Benefits of AWS Control Tower” & “How AWS Control Tower works,” Sections 1.1–1.3 (2023-09-26).

2. AWS Control Tower Landing Zone: Governance using guardrails (SCPs & AWS Config Rules), User Guide §3.2.

3. AWS IAM Identity Center Administrator Guide, “Enable identity federation using AD FS,” Steps 1–6 (2023-08-02).

4. AWS Whitepaper: “Organizing Your AWS Environment Using Multiple Accounts,” pp. 15–17, “Using AWS Control Tower for Low-Touch Governance” (2022).

5. MIT Cybersecurity Course Notes (6.858), Lecture “Cloud Governance Models,” slide deck pp. 10–11 describing automated landing-zone frameworks (citing AWS Control Tower).

Question 13

A company has a project that is launching Amazon EC2 instances that are larger than required. The project's account cannot be part of the company's organization in AWS Organizations due to policy restrictions to keep this activity outside of corporate IT. The company wants to allow only the launch of t3.small EC2 instances by developers in the project's account. These EC2 instances must be restricted to the us-east-2 Region. What should a solutions architect do to meet these requirements?
Options
A: Create a new developer account. Move all EC2 instances, users, and assets into us-east-2. Add the account to the company's organization in AWS Organizations. Enforce a tagging policy that denotes Region affinity.
B: Create an SCP that denies the launch of all EC2 instances except t3.small EC2 instances in us-east-2. Attach the SCP to the project's account.
C: Create and purchase a t3.small EC2 Reserved Instance for each developer in us-east-2. Assign each developer a specific EC2 instance with their name as the tag.
D: Create an IAM policy that allows the launch of only t3.small EC2 instances in us-east-2. Attach the policy to the roles and groups that the developers use in the project's account.
Correct Answer:
Create an IAM policy that allows the launch of only t3.small EC2 instances in us-east-2. Attach the policy to the roles and groups that the developers use in the project's account.
Explanation
The most effective and direct way to enforce resource-specific permissions within a single AWS account is by using IAM policies. An IAM policy can be crafted with Condition elements to restrict the ec2:RunInstances action. By using the ec2:InstanceType condition key, the policy can explicitly allow only t3.small. Similarly, the aws:RequestedRegion global condition key can restrict the action to the us-east-2 Region. Attaching this policy to the IAM roles and groups used by developers ensures that these restrictions are enforced at the identity level, which is the correct approach for a standalone account.
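A minimal boto3 sketch of such a policy follows; the policy name is hypothetical. This sketch scopes the Region through Region-qualified resource ARNs (aws:RequestedRegion can be added as an extra condition), and it covers only ec2:RunInstances; the developers' roles are assumed to hold the related describe/list permissions separately.

```python
import json
import boto3

iam = boto3.client("iam")

# RunInstances evaluates several resource types. The instance-type condition applies only
# to the instance resource, so the supporting resources get their own statement.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowT3SmallInstancesInUsEast2",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:us-east-2:*:instance/*",
            "Condition": {"StringEquals": {"ec2:InstanceType": "t3.small"}},
        },
        {
            "Sid": "AllowSupportingResourcesInUsEast2",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": [
                "arn:aws:ec2:us-east-2:*:subnet/*",
                "arn:aws:ec2:us-east-2:*:network-interface/*",
                "arn:aws:ec2:us-east-2:*:security-group/*",
                "arn:aws:ec2:us-east-2:*:volume/*",
                "arn:aws:ec2:us-east-2:*:key-pair/*",
                "arn:aws:ec2:us-east-2::image/*",
            ],
        },
    ],
}

iam.create_policy(
    PolicyName="DevT3SmallUsEast2Only",  # hypothetical name
    PolicyDocument=json.dumps(policy),
)
```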
Why Incorrect Options are Wrong

A. This is incorrect because the question explicitly states the account cannot be part of an AWS Organization. Tagging policies enforce tag compliance, not resource creation restrictions like instance type or Region.

B. This is incorrect because Service Control Policies (SCPs) are a feature of AWS Organizations. Since the account cannot be part of the organization, SCPs cannot be applied.

C. This is incorrect because Reserved Instances are a billing discount mechanism and do not restrict permissions. A developer with the necessary IAM permissions could still launch any instance type, regardless of what is reserved.

References

1. IAM User Guide - Actions, resources, and condition keys for Amazon EC2: This document lists the ec2:InstanceType condition key, which can be used in an IAM policy to control which instance types a user can launch. (See the table under the "Condition keys for Amazon EC2" section).

2. IAM User Guide - AWS global condition context keys: This guide details the aws:RequestedRegion key, which can be used in the Condition block of an IAM policy to restrict actions to specific AWS Regions. (See the table of "Global condition context keys").

3. AWS Organizations User Guide - Service control policies (SCPs): This document states, "SCPs are a type of organization policy that you can use to manage permissions in your organization." This confirms SCPs are only applicable to accounts within an AWS Organization. (See the "Introduction to SCPs" section).

4. Amazon EC2 User Guide for Linux Instances - Reserved Instances: This documentation describes Reserved Instances as a billing construct that provides a discount compared to On-Demand pricing, confirming they are not a permissions-enforcement tool. (See the "What are Reserved Instances?" section).

Question 14

A company is running a workload that consists of thousands of Amazon EC2 instances. The workload is running in a VPC that contains several public subnets and private subnets. The public subnets have a route for 0.0.0.0/0 to an existing internet gateway. The private subnets have a route for 0.0.0.0/0 to an existing NAT gateway. A solutions architect needs to migrate the entire fleet of EC2 instances to use IPv6. The EC2 instances that are in private subnets must not be accessible from the public internet. What should the solutions architect do to meet these requirements?
Options
A: Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Update all the VPC route tables, and add a route for ::/0 to the internet gateway.
B: Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Update the VPC route tables for all private subnets, and add a route for ::/0 to the NAT gateway.
C: Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Create an egress-only internet gateway. Update the VPC route tables for all private subnets, and add a route for ::/0 to the egress-only internet gateway.
D: Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Create a new NAT gateway, and enable IPv6 support. Update the VPC route tables for all private subnets, and add a route for ::/0 to the IPv6-enabled NAT gateway.
Show Answer
Correct Answer:
Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Create an egress-only internet gateway. Update the VPC route tables for all private subnets, and add a route for ::/0 to the egress-only internet gateway.
Explanation
To migrate the workload to IPv6 while keeping private instances inaccessible from the internet, the correct approach is to use an Egress-Only Internet Gateway (EIGW). First, an Amazon-provided IPv6 CIDR block must be associated with the VPC and its subnets. An EIGW is a stateful gateway specifically designed for IPv6. It allows outbound traffic from instances in the VPC to the internet but prevents the internet from initiating an IPv6 connection with those instances. By creating an EIGW and adding a route for all IPv6 traffic (::/0) from the private subnets to this EIGW, the instances can initiate outbound connections (e.g., for software updates) without being publicly exposed.
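A minimal boto3 sketch of these steps, with placeholder VPC and route table IDs, could look like this (each private subnet would also need an IPv6 CIDR association, omitted here for brevity):

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"                   # placeholder
PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # placeholder

# 1. Associate an Amazon-provided IPv6 CIDR block with the VPC.
ec2.associate_vpc_cidr_block(VpcId=VPC_ID, AmazonProvidedIpv6CidrBlock=True)

# 2. Create an egress-only internet gateway for outbound-only IPv6 traffic.
eigw = ec2.create_egress_only_internet_gateway(VpcId=VPC_ID)
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# 3. Route all IPv6 traffic from the private subnets through the EIGW.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```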
Why Incorrect Options are Wrong

A: Routing all IPv6 traffic (::/0) from private subnets to the main internet gateway would assign public IPv6 addresses and make the instances directly accessible from the internet, violating a core requirement.

B: Standard NAT gateways are designed for IPv4 traffic. They perform network address translation for IPv4 addresses and cannot process or route native IPv6 traffic.

D: While NAT gateways can be used for NAT64 (translating IPv6 to IPv4), the purpose-built, standard AWS solution for providing outbound-only internet access for IPv6 is the Egress-Only Internet Gateway, making it the correct choice.

References

1. AWS Documentation - Egress-only internet gateways: "To allow outbound-only communication over IPv6 from instances in your VPC to the internet, you can use an egress-only internet gateway... An egress-only internet gateway is stateful: It forwards traffic from the instances in the subnet to the internet or other AWS services, and then sends the response back to the instances. It does not allow unsolicited inbound traffic from the internet to your instances." (AWS VPC User Guide, "Egress-only internet gateways" section).

2. AWS Documentation - Enable IPv6 traffic for a private subnet: "Create an egress-only internet gateway for your VPC... In the route table for your private subnet, add a route that points all outbound IPv6 traffic (::/0) to the egress-only internet gateway." (AWS VPC User Guide, "IPv6" section, under "Example routing options").

3. AWS Documentation - NAT gateways: "NAT gateways currently support IPv4 traffic." and "If you have instances in a private subnet that are IPv6-only, you can use a NAT gateway to enable these instances to communicate with IPv4-only services... by using NAT64." This confirms standard NAT gateways are for IPv4, and while NAT64 exists, the EIGW is the direct solution for native IPv6 outbound traffic. (AWS VPC User Guide, "NAT gateways" section).

Question 15

A Solutions Architect wants to make sure that only AWS users or roles with suitable permissions can access a new Amazon API Gateway endpoint. The Solutions Architect wants an end-to-end view of each request to analyze the latency of the request and create service maps. How can the Solutions Architect design the API Gateway access control and perform request inspections?
Options
A: For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Enable the API caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray to trace and analyze user requests to API Gateway.
B: For the API Gateway resource, set CORS to enabled and only return the company's domain in Access-Control-Allow-Origin headers. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Use Amazon CloudWatch to trace and analyze user requests to API Gateway.
C: Create an AWS Lambda function as the custom authorizer, ask the API client to pass the key and secret when making the call, and then use Lambda to validate the key/secret pair against the IAM system. Use AWS X-Ray to trace and analyze user requests to API Gateway.
D: Create a client certificate for API Gateway. Distribute the certificate to the AWS users and roles that need to access the endpoint. Enable the API caller to pass the client certificate when accessing the endpoint. Use Amazon CloudWatch to trace and analyze user requests to API Gateway.
Show Answer
Correct Answer:
For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Enable the API caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray to trace and analyze user requests to API Gateway.
Explanation
This solution correctly addresses both requirements using the most appropriate AWS services. Setting the authorization type to AWS_IAM on the API Gateway method is the standard and secure way to control access based on IAM principals (users or roles). The client must then sign the request using AWS Signature Version 4, and API Gateway validates the signature against the caller's IAM permissions. For end-to-end request analysis, enabling AWS X-Ray for the API Gateway stage allows for tracing requests as they travel through API Gateway to downstream services. X-Ray provides latency analysis and generates service maps, fulfilling the second requirement.
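As a rough sketch (the API ID, stage name, account ID, and role name are placeholders), the IAM grant and the stage-level X-Ray setting could be applied as follows:

```python
import json
import boto3

# Grant the callers permission to invoke the API.
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "execute-api:Invoke",
        "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/*"
    }]
}
boto3.client("iam").put_role_policy(
    RoleName="ApiCallerRole",
    PolicyName="AllowInvokeApi",
    PolicyDocument=json.dumps(invoke_policy),
)

# Turn on active X-Ray tracing for the stage so requests appear in the service map.
boto3.client("apigateway").update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[{"op": "replace", "path": "/tracingEnabled", "value": "true"}],
)
```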
Why Incorrect Options are Wrong

B. CORS is a browser security feature for controlling cross-domain requests; it is not an authentication or authorization mechanism for IAM principals.

C. A Lambda authorizer is a valid authentication method, but it is overly complex for this use case when native AWS_IAM authorization directly meets the requirement.

D. Client certificates authenticate the client but are not directly integrated with IAM roles or users for authorization. CloudWatch provides logs and metrics, not end-to-end tracing and service maps like X-Ray.

References

1. AWS Documentation: Control access to an API with IAM permissions. This document explicitly states, "To control access to your API, you can use IAM permissions... you set the authorizationType property of a method to AWS_IAM." It also details the need for the execute-api:Invoke permission. (Amazon Web Services, API Gateway Developer Guide, Section: "Control access to an API with IAM permissions").

2. AWS Documentation: Using AWS X-Ray to trace API Gateway requests. This guide explains, "You can use AWS X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services... X-Ray gives you an end-to-end view of an entire request". (Amazon Web Services, API Gateway Developer Guide, Section: "Tracing and analyzing requests with AWS X-Ray").

3. AWS Documentation: Service maps. This page describes how X-Ray uses trace data to generate a service map, which "shows service nodes, their connections, and health data for each node, including average latency and failures." This directly supports the requirement for service maps and latency analysis. (Amazon Web Services, AWS X-Ray Developer Guide, Section: "Viewing the service map").

4. AWS Documentation: Enabling CORS for a REST API resource. This document clarifies that CORS is for enabling clients in one domain to interact with resources in a different domain, highlighting its purpose is not IAM-based authorization. (Amazon Web Services, API Gateway Developer Guide, Section: "Enabling CORS for a REST API resource").

Question 16

A North American company with headquarters on the East Coast is deploying a new web application running on Amazon EC2 in the us-east-1 Region. The application should dynamically scale to meet user demand and maintain resiliency. Additionally, the application must have disaster recovery capabilities in an active-passive configuration with the us-west-1 Region. Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?
Options
A: Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs in each Region as part of an Auto Scaling group spanning both VPCs and served by the ALB.
B: Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create an Amazon Route 53 record set with a failover routing policy and health checks enabled to provide high availability across both Regions.
C: Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both VPCs. Deploy an Application Load Balancer (ALB) that spans both VPCs. Deploy EC2 instances across multiple Availability Zones as part of an Auto Scaling group in each VPC served by the ALB. Create an Amazon Route 53 record that points to the ALB.
D: Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create separate Amazon Route 53 records in each Region that point to the ALB in the Region. Use Route 53 health checks to provide high availability across both Regions.
Show Answer
Correct Answer:
Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1 Region. Create an Amazon Route 53 record set with a failover routing policy and health checks enabled to provide high availability across both Regions.
Explanation
This solution correctly implements a multi-region, active-passive disaster recovery (DR) architecture. First, it establishes a highly available and scalable application stack within the primary region (us-east-1) using an Application Load Balancer (ALB) and an Auto Scaling group across multiple Availability Zones (AZs). It then replicates this identical, resilient infrastructure in the DR region (us-west-1). The key component for the active-passive DR strategy is Amazon Route 53 with a failover routing policy. Route 53 health checks monitor the primary endpoint's health. If the primary region becomes unavailable, Route 53 automatically reroutes traffic to the passive, healthy endpoint in the DR region, fulfilling the DR requirement.
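A hedged boto3 sketch of the failover record set, using placeholder hosted zone IDs, ALB DNS names, and a health check ID for the primary record:

```python
import boto3

route53 = boto3.client("route53")

def failover_record(role, alb_dns, alb_zone_id, health_check_id=None):
    """Build an UPSERT change for a PRIMARY or SECONDARY alias record."""
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,                      # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,       # the ALB's canonical hosted zone ID (placeholder)
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="ZHOSTEDZONEPLACEHOLDER",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "use1-alb-123.us-east-1.elb.amazonaws.com",
                        "ZALBZONEUSE1PLACEHOLDER", "hc-primary-placeholder"),
        failover_record("SECONDARY", "usw1-alb-456.us-west-1.elb.amazonaws.com",
                        "ZALBZONEUSW1PLACEHOLDER"),
    ]},
)
```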
Why Incorrect Options are Wrong

A. An Auto Scaling group is a regional construct and cannot span across multiple VPCs in different AWS Regions. An ALB in one region cannot directly serve EC2 instances in another.

C. An Application Load Balancer is a regional service and cannot span across VPCs in different AWS Regions. Inter-Region VPC peering is for private network traffic, not public web application failover.

D. Amazon Route 53 is a global service, not a regional one; you create record sets in a global hosted zone. This option is less precise than option B, which correctly specifies using a "failover routing policy."

References

1. AWS Documentation, "Disaster Recovery of Workloads on AWS: Recovery in the Cloud" (July 2021): This whitepaper describes DR strategies. The "Warm Standby" and "Pilot Light" approaches, which are active-passive models, use the exact architecture described in the correct answer. Page 15 states, "For all of these approaches, you can use Amazon Route 53 to resolve your domain name and to check the health of your primary environment. In the event of a disaster, you can have Route 53 fail over to your DR environment."

2. AWS Documentation, "Amazon Route 53 Developer Guide": Under the section "Choosing a routing policy," the guide explains "Failover routing." It states, "Use failover routing when you want to configure active-passive failover. When the primary resource becomes unhealthy, Route 53 automatically responds to queries with the secondary resource." This directly supports the use of Route 53 for the required active-passive configuration.

3. AWS Documentation, "User Guide for Application Load Balancers": In the "What is an Application Load Balancer?" section, it is established that a load balancer serves traffic to targets, such as EC2 instances, in a single region. It states, "You can add one or more listeners to your load balancer... Each listener has a rule that forwards requests to one or more target groups in the same Region." This confirms ALBs are regional.

4. AWS Documentation, "Amazon EC2 Auto Scaling User Guide": The guide's core concepts explain that an Auto Scaling group contains a collection of Amazon EC2 instances within a single AWS Region. The documentation on "Working with multiple Availability Zones" confirms that while an ASG can span AZs, it is confined to the region where it was created.

Question 17

A company hosts a data-processing application on Amazon EC2 instances. The application polls an Amazon Elastic File System (Amazon EFS) file system for newly uploaded files. When a new file is detected, the application extracts data from the file and runs logic to select a Docker container image to process the file. The application starts the appropriate container image and passes the file location as a parameter. The data processing that the container performs can take up to 2 hours. When the processing is complete, the code that runs inside the container writes the file back to Amazon EFS and exits. The company needs to refactor the application to eliminate the EC2 instances that are running the containers. Which solution will meet these requirements?
Options
A: Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Extract the container selection logic to run as an Amazon EventBridge rule that starts the appropriate Fargate task. Configure the EventBridge rule to run when files are added to the EFS file system.
B: Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Update and containerize the container selection logic to run as a Fargate service that starts the appropriate Fargate task. Configure an EFS event notification to invoke the Fargate service when files are added to the EFS file system.
C: Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Extract the container selection logic to run as an AWS Lambda function that starts the appropriate Fargate task. Migrate the storage of file uploads to an Amazon S3 bucket. Update the processing code to use Amazon S3. Configure an S3 event notification to invoke the Lambda function when objects are created.
D: Create AWS Lambda container images for the processing. Configure Lambda functions to use the container images. Extract the container selection logic to run as a decision Lambda function that invokes the appropriate Lambda processing function. Migrate the storage of file uploads to an Amazon S3 bucket. Update the processing code to use Amazon S3. Configure an S3 event notification to invoke the decision Lambda function when objects are created.
Show Answer
Correct Answer:
Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Extract the container selection logic to run as an AWS Lambda function that starts the appropriate Fargate task. Migrate the storage of file uploads to an Amazon S3 bucket. Update the processing code to use Amazon S3. Configure an S3 event notification to invoke the Lambda function when objects are created.
Explanation
This solution provides a robust, event-driven, and serverless architecture that meets all requirements. Migrating file storage from Amazon EFS to Amazon S3 is a key step, as S3 provides native event notifications. An S3 ObjectCreated event can trigger an AWS Lambda function. This Lambda function, acting as a lightweight orchestrator, executes the container selection logic and then starts the appropriate container as an AWS Fargate task using the RunTask API call. Fargate is a serverless compute engine for containers that can run tasks for much longer than the 2-hour requirement, and it completely eliminates the need to manage the underlying EC2 instances.
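A simplified sketch of the orchestration Lambda function (the cluster, task definition, container, and network identifiers are placeholders, and the selection logic is reduced to a file-extension lookup):

```python
import urllib.parse
import boto3

ecs = boto3.client("ecs")

# Map file types to task definitions (placeholder names).
TASK_DEFINITION_BY_SUFFIX = {
    ".csv": "csv-processor:1",
    ".xml": "xml-processor:1",
}

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Container selection logic: pick a task definition based on the file type.
        suffix = "." + key.rsplit(".", 1)[-1]
        task_definition = TASK_DEFINITION_BY_SUFFIX.get(suffix, "default-processor:1")

        # Start the long-running processing job on Fargate and pass the object location.
        ecs.run_task(
            cluster="processing-cluster",
            launchType="FARGATE",
            taskDefinition=task_definition,
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],
                    "securityGroups": ["sg-0123456789abcdef0"],
                    "assignPublicIp": "DISABLED",
                }
            },
            overrides={
                "containerOverrides": [{
                    "name": "processor",
                    "environment": [
                        {"name": "INPUT_BUCKET", "value": bucket},
                        {"name": "INPUT_KEY", "value": key},
                    ],
                }]
            },
        )
```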
Why Incorrect Options are Wrong

A. Amazon EventBridge does not have a native, direct integration to trigger rules based on file creation events within an Amazon EFS file system. An intermediary polling mechanism would still be required.

B. Amazon EFS does not have an "EFS event notification" feature that can directly invoke other AWS services such as AWS Fargate. This trigger mechanism does not exist.

D. AWS Lambda functions, whether using a ZIP archive or a container image, have a maximum execution timeout of 15 minutes (900 seconds). This is insufficient for the data processing job that can take up to 2 hours.

References

1. AWS Lambda Quotas: The official AWS Lambda Developer Guide states the maximum execution duration for a function.

Source: AWS Lambda Developer Guide, Quotas.

Reference: Under the "Function configuration, deployment, and execution quotas" table, the "Timeout" resource has a default of 3 seconds and a maximum of 900 seconds (15 minutes). This directly invalidates option D.

Link: https://docs.aws.amazon.com/lambda/latest/dg/quotas.html

2. Using AWS Lambda with Amazon S3: The Amazon S3 User Guide details how to use S3 event notifications to trigger Lambda functions, which is the pattern proposed in options C and D.

Source: Amazon S3 User Guide, Invoking AWS Lambda functions from Amazon S3.

Reference: The section "Walkthrough: Using an S3 trigger to invoke a Lambda function" describes this exact integration.

Link: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html

3. Orchestrating Long-Running Jobs with Fargate: The pattern of using a short-lived service (like Lambda) to trigger a long-running container task on Fargate is a documented best practice.

Source: AWS Whitepaper, "Serverless Architectures with AWS Lambda".

Reference: Section "Orchestrating Multiple AWS Lambda Functions for a Long-Running Workflow". While this discusses Step Functions, the principle of offloading long-running tasks from Lambda to a suitable service like Fargate is a core concept. The RunTask API for ECS/Fargate is designed for this purpose.

Link: https://d1.awsstatic.com/whitepapers/serverless-architectures-with-aws-lambda.pdf (Page 13 discusses offloading tasks).

4. Amazon EFS Integrations: The EFS documentation does not list a direct event source integration for services like EventBridge or a native "event notification" system for file creation.

Source: Amazon EFS User Guide.

Reference: A review of the "Working with other AWS services" section shows integrations with services like DataSync, Backup, and File Gateway, but no direct, push-based event notification mechanism for file system operations. This invalidates the triggers in options A and B.

Link: https://docs.aws.amazon.com/efs/latest/userguide/how-it-works.html#how-it-works-integrations

Question 18

A company runs a web application on AWS. The web application delivers static content from an Amazon S3 bucket that is behind an Amazon CloudFront distribution. The application serves dynamic content by using an Application Load Balancer (ALB) that distributes requests to a fleet of Amazon EC2 instances in Auto Scaling groups. The application uses a domain name setup in Amazon Route 53. Some users reported occasional issues when the users attempted to access the website during peak hours. An operations team found that the ALB sometimes returned HTTP 503 Service Unavailable errors. The company wants to display a custom error message page when these errors occur. The page should be displayed immediately for this error code. Which solution will meet these requirements with the LEAST operational overhead?
Options
A: Set up a Route 53 failover routing policy. Configure a health check to determine the status of the ALB endpoint and to fail over to the failover S3 bucket endpoint.
B: Create a second CloudFront distribution and an S3 static website to host the custom error page. Set up a Route 53 failover routing policy. Use an active-passive configuration between the two distributions.
C: Create a CloudFront origin group that has two origins. Set the ALB endpoint as the primary origin. For the secondary origin, set an S3 bucket that is configured to host a static website. Set up origin failover for the CloudFront distribution. Update the S3 static website to incorporate the custom error page.
D: Create a CloudFront function that validates each HTTP response code that the ALB returns. Create an S3 static website in an S3 bucket. Upload the custom error page to the S3 bucket as a failover. Update the function to read the S3 bucket and to serve the error page to the end users.
Show Answer
Correct Answer:
Create a CloudFront origin group that has two origins. Set the ALB endpoint as the primary origin. For the secondary origin, set an S3 bucket that is configured to host a static website. Set up origin failover for the CloudFront distribution. Update the S3 static website to incorporate the custom error page.
Explanation
The most effective and operationally efficient solution is to use CloudFront's native origin failover capability. By configuring an origin group with the Application Load Balancer (ALB) as the primary origin and an Amazon S3 bucket (hosting the custom error page) as the secondary origin, CloudFront can automatically handle the failure. When the primary origin (ALB) returns an HTTP 503 status code, CloudFront will immediately and automatically retry the request with the secondary origin (S3). This serves the custom error page seamlessly to the user for that specific failed request without any DNS propagation delays or custom code management, thus meeting all requirements with the least operational overhead.
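This failover behavior is configuration rather than code. A sketch of the relevant OriginGroups fragment (origin IDs are placeholders) that would be merged into the full distribution configuration passed to cloudfront.update_distribution:

```python
# Fragment of a CloudFront distribution config: an origin group that fails over
# from the ALB origin to the S3 static-website origin when the ALB returns a 503.
origin_groups = {
    "Quantity": 1,
    "Items": [{
        "Id": "alb-with-s3-fallback",
        "FailoverCriteria": {
            "StatusCodes": {"Quantity": 1, "Items": [503]}
        },
        "Members": {
            "Quantity": 2,
            "Items": [
                {"OriginId": "alb-primary-origin"},
                {"OriginId": "s3-error-page-origin"},
            ],
        },
    }],
}

# The default cache behavior then targets the origin group instead of a single origin.
default_cache_behavior_target = {"TargetOriginId": "alb-with-s3-fallback"}
```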
Why Incorrect Options are Wrong

A. Route 53 failover is not suitable for intermittent application-level errors. It relies on health checks that may not fail for 503 errors and is not immediate due to DNS TTLs.

B. This option has the same drawbacks as option A (DNS failover is not immediate) and adds the unnecessary complexity and cost of a second CloudFront distribution.

D. Using a CloudFront Function or Lambda@Edge to handle this requires writing, deploying, and maintaining custom code, which represents higher operational overhead than a simple configuration change like setting up an origin group.

References

1. AWS CloudFront Developer Guide - Optimizing high availability with CloudFront origin failover: This document explicitly states, "You can set up origin failover for scenarios that require high availability. To get started, you create an origin group with two origins: a primary and a secondary... If the primary origin is unavailable, or if it returns a specific HTTP response status code that indicates a failure, CloudFront automatically switches to the secondary origin." This directly supports the mechanism in option C.

2. AWS CloudFront Developer Guide - Origin group status codes: This section details the specific HTTP status codes (including 503 Service Unavailable) that can trigger a failover from the primary to the secondary origin within an origin group.

3. AWS Route 53 Developer Guide - Failover routing: This guide explains that failover routing works by "routing traffic to a resource when the resource is healthy and to a different resource when the first resource is unhealthy." This is based on health checks and DNS propagation, which is not immediate and less suitable for handling transient, per-request HTTP errors than CloudFront's origin failover.

4. AWS CloudFront Developer Guide - Comparing Lambda@Edge and CloudFront Functions: This documentation clarifies the capabilities of different edge compute options. It shows that implementing the logic described in option D would require Lambda@Edge, which is more complex and operationally intensive than using the built-in origin failover feature.

Question 19

A software as a service (SaaS) company uses AWS to host a service that is powered by AWS PrivateLink. The service consists of proprietary software that runs on three Amazon EC2 instances behind a Network Load Balancer (NLB). The instances are in private subnets in multiple Availability Zones in the eu-west-2 Region. All the company's customers are in eu-west-2. However, the company now acquires a new customer in the us-east-1 Region. The company creates a new VPC and new subnets in us-east-1. The company establishes inter-Region VPC peering between the VPCs in the two Regions. The company wants to give the new customer access to the SaaS service, but the company does not want to immediately deploy new EC2 resources in us-east-1. Which solution will meet these requirements?
Options
A: Configure a PrivateLink endpoint service in us-east-1 to use the existing NLB that is in eu-west-2. Grant specific AWS accounts access to connect to the SaaS service.
B: Create an NLB in us-east-1. Create an IP target group that uses the IP addresses of the company's instances in eu-west-2 that host the SaaS service. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.
C: Create an Application Load Balancer (ALB) in front of the EC2 instances in eu-west-2. Create an NLB in us-east-1. Associate the NLB that is in us-east-1 with an ALB target group that uses the ALB that is in eu-west-2. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.
D: Use AWS Resource Access Manager (AWS RAM) to share the EC2 instances that are in eu-west-2. In us-east-1, create an NLB and an instance target group that includes the shared EC2 instances from eu-west-2. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.
Show Answer
Correct Answer:
Create an NLB in us-east-1. Create an IP target group that uses the IP addresses of the company's instances in eu-west-2 that host the SaaS service. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.
Explanation
This solution correctly establishes a regional AWS PrivateLink presence for the customer in us-east-1 while leveraging the existing service infrastructure in eu-west-2. A Network Load Balancer (NLB) in us-east-1 can use an IP-based target group. Since Inter-Region VPC Peering is established, the NLB in us-east-1 can route traffic to the private IP addresses of the EC2 instances in eu-west-2. A new PrivateLink endpoint service is then created in us-east-1 and associated with this new NLB. This allows the customer in us-east-1 to create an interface endpoint in their VPC and privately access the service, with the traffic being securely routed across the peering connection.
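A condensed boto3 sketch of this setup, with placeholder subnet, VPC, IP, and account values:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# IP targets are required to reach instances in the peered eu-west-2 VPC.
tg = elbv2.create_target_group(
    Name="saas-eu-west-2-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# AvailabilityZone="all" is required for addresses outside the target group's VPC.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": ip, "Port": 443, "AvailabilityZone": "all"}
             for ip in ("10.1.0.10", "10.1.1.10", "10.1.2.10")],
)

nlb = elbv2.create_load_balancer(
    Name="saas-us-east-1-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]
elbv2.create_listener(LoadBalancerArn=nlb_arn, Protocol="TCP", Port=443,
                      DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}])

# Expose the NLB as a PrivateLink endpoint service and allow the customer's account.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[nlb_arn], AcceptanceRequired=True)
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceConfiguration"]["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::999999999999:root"],  # placeholder customer account
)
```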
Why Incorrect Options are Wrong

A. An AWS PrivateLink endpoint service is a regional construct and can only be associated with a Network Load Balancer that exists in the same AWS Region.

C. This option introduces an unnecessary Application Load Balancer (ALB), adding complexity and cost. The NLB can directly target the EC2 instances by IP address without an intermediary ALB.

D. An NLB's 'instance' target group can only register instances by their ID if they are in the same region as the NLB. To target resources in another region, an 'IP' target group must be used.

References

1. AWS Documentation - Network Load Balancer Target Groups: "You can register targets by instance ID or by IP address... If you specify targets using an IP address, you can use IP addresses from the subnets of the target group's VPC, or from any private IP address range from a peered VPC..." This supports using an IP target group over an Inter-Region VPC peering connection. (Source: AWS Documentation, User Guide for Network Load Balancers, section "Target groups for your Network Load Balancers", subsection "Register targets").

2. AWS Documentation - AWS PrivateLink Concepts: "An endpoint service is a service that you host in your VPC... When you create an endpoint service, you must specify a Network Load Balancer or Gateway Load Balancer for your service in each Availability Zone." This confirms the load balancer must be co-located with the endpoint service in the same region. (Source: AWS Documentation, AWS PrivateLink Guide, section "Concepts", subsection "Endpoint services").

3. AWS Documentation - VPC Peering Basics: "A VPC peering connection enables you to route traffic between the peered VPCs using private IPv4 addresses or IPv6 addresses... Instances in either VPC can communicate with each other as if they are within the same network." This confirms connectivity for the IP-based targets. (Source: AWS Documentation, Amazon VPC Peering Guide, section "What is VPC peering?").

Question 20

A company's CISO has asked a Solutions Architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its applications can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly roll back a change in case of errors. The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project. What CI/CD configuration meets all of the requirements?
Options
A: Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code, and, if there are any issues, push another code update.
B: Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed code, and, if there are any issues, trigger a manual rollback using CodeDeploy.
C: Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the newly deployed code, and, if there are any issues, push another code update.
D: Configure the CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code, and, if there are any issues, push another code update.
Show Answer
Correct Answer:
Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed code, and, if there are any issues, trigger a manual rollback using CodeDeploy.
Explanation
The core requirements are rapid deployment, minimal downtime, and quick rollback. An AWS CodeDeploy blue/green deployment strategy directly addresses these needs. In this model, a new fleet of instances (the "green" environment) is provisioned with the updated application code alongside the existing fleet (the "blue" environment). After testing, the Application Load Balancer (ALB) shifts traffic to the new green environment. This process results in near-zero downtime. If an issue is detected, a rollback is nearly instantaneous, as it only involves re-routing traffic back to the original, still-running blue environment. This is significantly faster and safer than redeploying a previous version.
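A sketch of the CodeDeploy deployment group configuration (application, role, Auto Scaling group, and target group names are placeholders):

```python
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-blue-green",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    autoScalingGroups=["web-app-asg"],
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    blueGreenDeploymentConfiguration={
        # Provision the green fleet by copying the existing Auto Scaling group.
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        # Shift ALB traffic to the green fleet as soon as it is ready.
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        # Keep the blue fleet for an hour so a rollback is just a traffic shift back.
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,
        },
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "web-app-target-group"}]},
)
```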
Why Incorrect Options are Wrong

A. In-place deployments update instances one by one, causing downtime for each instance during the update. Rolling back requires a full, time-consuming redeployment of the old version.

C. AWS CloudFormation is an Infrastructure as Code (IaC) service, not a dedicated application deployment tool. Using it for application updates is less efficient, and rolling back by pushing another update is slow.

D. AWS OpsWorks is a configuration management service. Like an in-place deployment, it does not inherently provide a rapid, traffic-shifting rollback mechanism as required by the scenario.

References

1. AWS CodeDeploy User Guide, "Overview of a blue/green deployment": "A blue/green deployment is a deployment strategy in which you create a new environment (the green environment) that is a replica of your production environment (the blue environment)... This strategy allows you to test the new environment before you send production traffic to it. If there's a problem with the green environment, you can roll back to the blue environment immediately."

2. AWS CodeDeploy User Guide, "In-place versus blue/green deployments": This section explicitly contrasts the two methods, noting that for in-place deployments, "To roll back the application, you redeploy a previous revision of the application." For blue/green, "Rolling back is fast and easy. You can roll back to the original environment as soon as a problem is detected."

3. AWS Well-Architected Framework, Operational Excellence Pillar, "OPS 08 - How do you evolve operations?": The framework recommends deployment strategies that reduce the risk of failure, stating, "Use deployment strategies such as blue/green and canary deployments to reduce the impact of failed deployments." It highlights that blue/green deployments allow for rapid rollback by redirecting traffic.

4. AWS Whitepaper: "Blue/Green Deployments on AWS" (May 2020): Page 4 discusses the benefits, stating, "Blue/green deployments provide a number of advantages... It reduces downtime... It also reduces risk; if the new version of your application has issues, you can roll back to the previous version immediately."

Question 21

A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MySQL, and Oracle databases. There are many dependent services hosted either in the same data center or externally. The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration. Which tools or services should the solutions architect use to plan the cloud migration? (Choose three.)
Options
A: AWS Application Discovery Service
B: AWS SMS
C: AWS X-Ray
D: AWS Cloud Adoption Readiness Tool (CART)
E: Amazon Inspector
F: AWS Migration Hub
Show Answer
Correct Answer:
AWS Application Discovery Service, AWS Cloud Adoption Readiness Tool (CART), AWS Migration Hub
Explanation
This scenario requires a multi-faceted approach covering discovery, readiness assessment, and centralized planning.
1. AWS Application Discovery Service is essential for addressing the "incomplete and outdated" documentation. It automatically collects server specifications, performance data, and network dependency information from the on-premises environment. This data is crucial for right-sizing target instances and estimating costs.
2. AWS Cloud Adoption Readiness Tool (CART) assesses the organization's overall readiness for the cloud based on the AWS Cloud Adoption Framework (CAF). It helps identify gaps in skills, processes, and governance, which is a critical planning step before a large-scale migration.
3. AWS Migration Hub provides a central dashboard to track the entire migration process. It integrates with Application Discovery Service to visualize the discovered servers and dependencies, allowing the architect to group servers into applications and track their migration status.
Why Incorrect Options are Wrong

B. AWS SMS: AWS Server Migration Service (SMS) is a tool for executing the migration of virtual machines, not for the initial discovery, assessment, and planning phases.

C. AWS X-Ray: This service is used for analyzing and debugging performance issues in distributed applications, typically those already running in the cloud, not for pre-migration planning.

E. Amazon Inspector: This is a security vulnerability assessment service for workloads running on AWS. It is not used for discovering or planning the migration of on-premises infrastructure.

References

1. AWS Application Discovery Service: AWS Documentation, "What Is AWS Application Discovery Service?". It states, "AWS Application Discovery Service helps you plan your migration to the AWS Cloud by collecting usage and configuration data about your on-premises servers."

Source: AWS Documentation. (2023). What Is AWS Application Discovery Service?. AWS. Retrieved from https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html

2. AWS Cloud Adoption Readiness Tool (CART): AWS Cloud Adoption Framework Documentation. "The AWS Cloud Adoption Readiness Tool (CART) is a free, online self-assessment that helps you understand where you are in your cloud journey."

Source: AWS. (2023). AWS Cloud Adoption Framework. AWS. Retrieved from https://aws.amazon.com/cloud-adoption-framework/ (See section on CART).

3. AWS Migration Hub: AWS Documentation, "What is AWS Migration Hub?". It states, "AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions... Migration Hub also provides a portfolio of migration and modernization tools that simplify and accelerate your projects."

Source: AWS Documentation. (2023). What is AWS Migration Hub?. AWS. Retrieved from https://docs.aws.amazon.com/migrationhub/latest/userguide/what-is-migrationhub.html

4. AWS Server Migration Service (SMS): AWS Documentation, "What Is AWS Server Migration Service?". It describes the service as one that "automates the migration of your on-premises VMware vSphere or Microsoft Hyper-V/SCVMM virtual machines (VMs) to the AWS Cloud." This confirms its role in execution, not planning.

Source: AWS Documentation. (2023). What Is AWS Server Migration Service?. AWS. Retrieved from https://docs.aws.amazon.com/server-migration-service/latest/userguide/what-is-sms.html

Question 22

A solutions architect has launched multiple Amazon EC2 instances in a placement group within a single Availability Zone. Because of additional load on the system, the solutions architect attempts to add new instances to the placement group. However, the solutions architect receives an insufficient capacity error. What should the solutions architect do to troubleshoot this issue?
Options
A: Use a spread placement group. Set a minimum of eight instances for each Availability Zone.
B: Stop and start all the instances in the placement group. Try the launch again.
C: Create a new placement group. Merge the new placement group with the original placement group.
D: Launch the additional instances as Dedicated Hosts in the placement groups.
Show Answer
Correct Answer:
Stop and start all the instances in the placement group. Try the launch again.
Explanation
An insufficient capacity error when adding instances to a cluster placement group indicates that AWS cannot find contiguous underlying hardware to co-locate the new instance with the existing ones. A recommended troubleshooting strategy is to stop all instances within the placement group and then start them again. This action may migrate the entire group to a different set of underlying hardware that has sufficient capacity to accommodate both the existing and the new instances. This approach attempts to resolve the capacity issue without requiring architectural changes.
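If scripting this remediation is useful, a small boto3 sketch (the placement group name is a placeholder) could look like the following:

```python
import boto3

ec2 = boto3.client("ec2")

# Find the running instances in the placement group.
reservations = ec2.describe_instances(
    Filters=[{"Name": "placement-group-name", "Values": ["analytics-cluster-pg"]},
             {"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

# Stop every instance in the group, wait, then start them again. Starting the group
# may move it to hardware with enough contiguous capacity for the additional launches.
ec2.stop_instances(InstanceIds=instance_ids)
ec2.get_waiter("instance_stopped").wait(InstanceIds=instance_ids)
ec2.start_instances(InstanceIds=instance_ids)
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
```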
Why Incorrect Options are Wrong

A. A spread placement group has a hard limit of seven running instances per Availability Zone, so setting a minimum of eight is not possible. This also changes the architecture's intent.

C. AWS does not provide a feature to merge two placement groups. This is not a valid operation.

D. Launching on Dedicated Hosts is a significant architectural and cost change. It is not the most direct or common first step to troubleshoot a transient capacity error.

References

1. Amazon EC2 User Guide for Linux Instances: Under the section "Troubleshoot placement groups," the guide explicitly states: "If you receive a capacity error when launching an instance in a placement group that already has running instances, stop and start all of the instances in the placement group, and then try the launch again. Starting the instances may migrate them to hardware that has capacity for all the requested instances." (See section: Placement groups > Troubleshoot placement groups).

2. Amazon EC2 User Guide for Linux Instances: The section on "Spread placement groups" details the limitations: "A spread placement group can span multiple Availability Zones within the same Region, and you can have a maximum of seven running instances per Availability Zone per group." This confirms that option A is invalid. (See section: Placement groups > Spread placement groups).

3. AWS API Reference - CreatePlacementGroup: The documentation for creating and managing placement groups does not include any action or parameter for merging existing groups, confirming that option C is not a valid AWS feature. (See the CreatePlacementGroup and related EC2 API actions in the AWS Command Line Interface Reference or SDK documentation).

Question 23

A company has multiple AWS accounts. The company recently had a security audit that revealed many unencrypted Amazon Elastic Block Store (Amazon EBS) volumes attached to Amazon EC2 instances. A solutions architect must encrypt the unencrypted volumes and ensure that unencrypted volumes will be detected automatically in the future. Additionally, the company wants a solution that can centrally manage multiple AWS accounts with a focus on compliance and security. Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
Options
A: Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on the strongly recommended guardrails. Join all accounts to the organization. Categorize the AWS accounts into OUs.
B: Use the AWS CLI to list all the unencrypted volumes in all the AWS accounts. Run a script to encrypt all the unencrypted volumes in place.
C: Create a snapshot of each unencrypted volume. Create a new encrypted volume from the unencrypted snapshot. Detach the existing volume, and replace it with the encrypted volume.
D: Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on the mandatory guardrails. Join all accounts to the organization. Categorize the AWS accounts into OUs.
E: Turn on AWS CloudTrail. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to detect and automatically encrypt unencrypted volumes.
Show Answer
Correct Answer:
Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on the strongly recommended guardrails. Join all accounts to the organization. Categorize the AWS accounts into OUs., Create a snapshot of each unencrypted volume. Create a new encrypted volume from the unencrypted snapshot. Detach the existing volume, and replace it with the encrypted volume.
Explanation
The solution must address two distinct requirements: remediating existing unencrypted Amazon Elastic Block Store (Amazon EBS) volumes and implementing a forward-looking governance strategy for a multi-account environment. Option C correctly outlines the standard, supported procedure for encrypting an existing unencrypted EBS volume. This involves creating a snapshot, using that snapshot to create a new, encrypted volume, and then replacing the original volume on the instance. Option A establishes the required central governance and compliance framework. AWS Control Tower is designed to set up and govern a secure, multi-account AWS environment. Activating the "strongly recommended" guardrails includes a detective control (Detect whether encryption is enabled for EBS volumes attached to EC2 instances) that automatically detects unencrypted volumes, fulfilling the future detection and central management requirements.
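A boto3 sketch of this remediation for a single volume (the volume, instance, device, Availability Zone, and KMS key identifiers are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

UNENCRYPTED_VOLUME_ID = "vol-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"
DEVICE = "/dev/xvdf"
AZ = "us-east-1a"

# 1. Snapshot the unencrypted volume.
snap = ec2.create_snapshot(VolumeId=UNENCRYPTED_VOLUME_ID,
                           Description="Pre-encryption snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Create a new, encrypted volume from that snapshot.
new_vol = ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone=AZ,
                            Encrypted=True, KmsKeyId="alias/ebs-key")
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 3. Swap the volumes on the instance (stop the instance or quiesce the file
#    system before detaching in a real migration).
ec2.detach_volume(VolumeId=UNENCRYPTED_VOLUME_ID, InstanceId=INSTANCE_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[UNENCRYPTED_VOLUME_ID])
ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE_ID, Device=DEVICE)
```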
Why Incorrect Options are Wrong

B: It is not possible to encrypt an existing EBS volume "in place." The process requires creating a new, encrypted volume from a snapshot of the original.

D: The specific guardrail for detecting unencrypted EBS volumes is part of the "strongly recommended" guardrail set, not the "mandatory" set. This option would fail to meet the detection requirement.

E: While technically feasible for a single account, this approach does not provide the centralized governance and compliance management across multiple accounts that AWS Control Tower offers, which is a core requirement.

References

1. Encrypting an unencrypted volume: AWS Documentation, Amazon EC2 User Guide for Linux Instances, section "Encrypting an unencrypted volume". It states, "To encrypt an unencrypted volume, you must create a snapshot of the volume. Then, you can either restore the snapshot to a new, encrypted volume... or you can create an encrypted copy of the snapshot and restore it to a new, encrypted volume." This supports option C.

2. AWS Control Tower Guardrails: AWS Documentation, AWS Control Tower User Guide, section "Guardrail reference". The guide lists the guardrail Detect whether encryption is enabled for EBS volumes attached to EC2 instances (Identifier: AWS-GREBSVOLUMEENCRYPTIONMANDATORY) under the "Strongly recommended" behavior category. This supports option A and invalidates option D.

3. AWS Control Tower Overview: AWS Documentation, AWS Control Tower User Guide, section "What is AWS Control Tower?". It describes Control Tower as "the easiest way to set up and govern a secure, multi-account AWS environment," which aligns with the requirement for a central management solution.

Question 24

A company wants to use Amazon S3 to back up its on-premises file storage solution. The company's on-premises file storage solution supports NFS, and the company wants its new solution to support NFS. The company wants to archive the backup files after 5 days. If the company needs archived files for disaster recovery, the company is willing to wait a few days for the retrieval of those files. Which solution meets these requirements MOST cost-effectively?
Options
A: Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.
B: Deploy an AWS Storage Gateway volume gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the volume gateway. Create an S3 Lifecycle rule to move the files to S3 Glacier Deep Archive after 5 days.
C: Deploy an AWS Storage Gateway tape gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the tape gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.
D: Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the tape gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.
E: Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Glacier Deep Archive after 5 days.
Show Answer
Correct Answer:
Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Glacier Deep Archive after 5 days.
Explanation
The solution requires an NFS interface for the on-premises backup, which is provided by the AWS Storage Gateway File Gateway. This gateway stores files as objects in an associated S3 bucket. To meet the archival and cost-effectiveness requirements, the data should be moved to the lowest-cost storage tier. The company is willing to wait a few days for retrieval, which aligns perfectly with the retrieval timeframe (typically within 12-48 hours) and the ultra-low storage cost of S3 Glacier Deep Archive. An S3 Lifecycle policy can be configured to automatically transition objects from S3 Standard to S3 Glacier Deep Archive after 5 days, fulfilling all requirements most cost-effectively.
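A short boto3 sketch of the lifecycle rule (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# The File Gateway writes the backup files into this bucket as objects.
s3.put_bucket_lifecycle_configuration(
    Bucket="backup-file-gateway-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-backups-after-5-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to every object in the bucket
            "Transitions": [{"Days": 5, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```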
Why Incorrect Options are Wrong

A: S3 Standard-IA is not the most cost-effective archival storage class; S3 Glacier Deep Archive offers significantly lower storage costs, fitting the retrieval time tolerance.

B: A Volume Gateway provides block-level storage via the iSCSI protocol, not the required file-level access via NFS.

C: A Tape Gateway presents a virtual tape library (VTL) interface, which is not compatible with the required NFS protocol.

D: This option is logically flawed as it suggests deploying a File Gateway but then moving files to a Tape Gateway, which are distinct and incompatible components.

References

1. AWS Storage Gateway User Guide, "What is Amazon S3 File Gateway?": "Amazon S3 File Gateway presents a file-based interface to Amazon S3... With a file gateway, you can store and retrieve Amazon S3 objects through standard file protocols such as Network File System (NFS) and Server Message Block (SMB)." This confirms File Gateway is the correct choice for NFS support.

2. Amazon S3 User Guide, "Amazon S3 storage classes": The comparison table in this section shows that S3 Glacier Deep Archive is the "lowest-cost object storage class for long-term retention" and has a "first-byte latency" of "Hours". This aligns with the cost and retrieval requirements.

3. Amazon S3 User Guide, "Managing your storage lifecycle": "You can define rules to transition objects from one storage class to another... For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you create them, or archive objects to the S3 Glacier Deep Archive storage class 60 days after you create them." This confirms the use of lifecycle rules for transitioning to S3 Glacier Deep Archive.

Question 25

A research company is running daily simulations in the AWS Cloud to meet high demand. The simulations run on several hundred Amazon EC2 instances that are based on Amazon Linux 2. Occasionally, a simulation gets stuck and requires a cloud operations engineer to solve the problem by connecting to an EC2 instance through SSH. Company policy states that no EC2 instance can use the same SSH key and that all connections must be logged in AWS CloudTrail. How can a solutions architect meet these requirements?
Options
A: Launch new EC2 instances, and generate an individual SSH key for each instance. Store the SSH key in AWS Secrets Manager. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the GetSecretValue action. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.
B: Create an AWS Systems Manager document to run commands on EC2 instances to set a new unique SSH key. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement to run Systems Manager documents. Instruct the engineers to run the document to set an SSH key and to connect through any SSH client.
C: Launch new EC2 instances without setting up any SSH key for the instances. Set up EC2 Instance Connect on each instance. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the SendSSHPublicKey action. Instruct the engineers to connect to the instance by using a browser-based SSH client from the EC2 console.
D: Set up AWS Secrets Manager to store the EC2 SSH key. Create a new AWS Lambda function to create a new SSH key and to call AWS Systems Manager Session Manager to set the SSH key on the EC2 instance. Configure Secrets Manager to use the Lambda function for automatic rotation once daily. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.
Show Answer
Correct Answer:
Launch new EC2 instances without setting up any SSH key for the instances. Set up EC2 Instance Connect on each instance. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the SendSSHPublicKey action. Instruct the engineers to connect to the instance by using a browser-based SSH client from the EC2 console.
Explanation
EC2 Instance Connect is designed specifically for this use case. It enhances security by removing the need to manage and distribute long-lived SSH keys. Instead, it uses IAM policies to control SSH access. An engineer with the appropriate IAM permissions uses the SendSSHPublicKey API action to push a temporary, one-time-use public key to the instance's metadata. This key is only valid for 60 seconds. Crucially, every SendSSHPublicKey API call is logged in AWS CloudTrail, fulfilling the requirement for a complete audit trail of all connection requests. This approach ensures that each connection is uniquely authorized and logged, without storing persistent keys on the instances.
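As an illustration (the role name, policy name, account ID, instance ID, and OS user are placeholders), the IAM grant and the per-connection key push could look like this:

```python
import json
import boto3

# Allow the engineers' role to push one-time SSH keys, scoped to the expected OS user.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2-instance-connect:SendSSHPublicKey",
        "Resource": "arn:aws:ec2:*:111122223333:instance/*",
        "Condition": {"StringEquals": {"ec2:osuser": "ec2-user"}}
    }]
}
boto3.client("iam").put_role_policy(
    RoleName="CloudOpsEngineerRole",
    PolicyName="AllowInstanceConnect",
    PolicyDocument=json.dumps(policy),
)

# Pushing a key for a single connection (normally done by the console's browser-based
# client). The key is valid for 60 seconds, and the API call is recorded in CloudTrail.
boto3.client("ec2-instance-connect").send_ssh_public_key(
    InstanceId="i-0123456789abcdef0",
    InstanceOSUser="ec2-user",
    SSHPublicKey="ssh-ed25519 AAAA... generated-per-connection",
)
```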
Why Incorrect Options are Wrong

A. This method fails the requirement to log all connections in CloudTrail. While fetching the key from Secrets Manager is logged, the subsequent SSH connection from the engineer's client to the EC2 instance is standard network traffic and is not an AWS API call, so it will not be logged by CloudTrail.

B. Similar to option A, running the Systems Manager document is logged in CloudTrail, but the actual SSH connection made afterward is not. This fails the comprehensive logging requirement.

D. This solution is overly complex and misuses Session Manager, which is designed to provide shell access without SSH keys. Furthermore, it fails the connection logging requirement for the same reason as options A and B: the final SSH connection is not an auditable AWS API call.

References

1. AWS Compute Blog, "New: Using Amazon EC2 Instance Connect for SSH access to your EC2 Instances": "All EC2 Instance Connect API calls are logged by AWS CloudTrail, giving you the visibility you need for governance and compliance." This directly supports the logging requirement. The article also explains, "EC2 Instance Connect does not require the instance to have a public IPv4 address." and "You can use IAM policies to grant and revoke access."

2. AWS EC2 User Guide for Linux Instances, "Connect to your Linux instance with EC2 Instance Connect": "When you connect to an instance using EC2 Instance Connect, the Instance Connect API pushes a one-time-use SSH public key to the instance metadata where it remains for 60 seconds. An IAM policy attached to your IAM user authorizes your user to push the public key to the instance metadata." This confirms the use of temporary, unique keys for connections.

3. AWS EC2 User Guide for Linux Instances, "Set up EC2 Instance Connect": This section details the prerequisites, including the IAM permission ec2-instance-connect:SendSSHPublicKey on the instance resource, which is the action that gets logged in CloudTrail. It states, "All connection requests using EC2 Instance Connect are logged to AWS CloudTrail so you can audit connection requests."

Question 26

A financial services company has an asset management product that thousands of customers use around the world. The customers provide feedback about the product through surveys. The company is building a new analytical solution that runs on Amazon EMR to analyze the data from these surveys. The following user personas need to access the analytical solution to perform different actions:
• Administrator: Provisions the EMR cluster for the analytics team based on the team's requirements
• Data engineer: Runs ETL scripts to process, transform, and enrich the datasets
• Data analyst: Runs SQL and Hive queries on the data
A solutions architect must ensure that all the user personas have least privilege access to only the resources that they need. The user personas must be able to launch only applications that are approved and authorized. The solution also must ensure tagging for all resources that the user personas create. Which solution will meet these requirements?
Options
A: Create IAM roles for each user persona. Attach identity-based policies to define which actions the user who assumes the role can perform. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the administrator to remediate the noncompliant resources.
B: Set up Kerberos-based authentication for EMR clusters upon launch. Specify a Kerberos security configuration along with cluster-specific Kerberos options.
C: Use AWS Service Catalog to control the Amazon EMR versions available for deployment, the cluster configuration, and the permissions for each user persona.
D: Launch the EMR cluster by using AWS CloudFormation. Attach resource-based policies to the EMR cluster during cluster creation. Create an AWS Config rule to check for noncompliant clusters and noncompliant Amazon S3 buckets. Configure the rule to notify the administrator to remediate the noncompliant resources.
Show Answer
Correct Answer:
Use AWS Service Catalog to control the Amazon EMR versions available for deployment, the cluster configuration, and the permissions for each user persona.
Explanation
AWS Service Catalog is designed to create and manage a catalog of IT services that are approved for use on AWS. This directly addresses the core requirements of the question. An administrator can define pre-configured Amazon EMR clusters as "products" in the catalog, specifying approved applications, versions, and configurations. By using launch constraints, specific IAM roles can be associated with these products, enforcing least privilege for each user persona (administrator, data engineer, data analyst). Furthermore, Service Catalog can enforce mandatory tagging on all provisioned resources, ensuring compliance with tagging policies. This provides a proactive governance solution that prevents the launch of non-compliant resources, rather than just detecting them after the fact.
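As a sketch of how the launch constraint and mandatory tagging pieces fit together, assuming placeholder portfolio and product IDs and a hypothetical launch role ARN:

```python
import boto3, json

sc = boto3.client("servicecatalog")

# Hypothetical identifiers for illustration only.
PORTFOLIO_ID = "port-examplexyz"
PRODUCT_ID = "prod-examplexyz"

# Launch constraint: Service Catalog assumes this role when an end user
# provisions the approved EMR product, so the user needs no direct EMR permissions.
sc.create_constraint(
    PortfolioId=PORTFOLIO_ID,
    ProductId=PRODUCT_ID,
    Type="LAUNCH",
    Parameters=json.dumps(
        {"RoleArn": "arn:aws:iam::111122223333:role/SCEmrLaunchRole"}
    ),
)

# TagOption: every product provisioned from this portfolio gets the tag applied.
tag_option = sc.create_tag_option(Key="team", Value="analytics")
sc.associate_tag_option_with_resource(
    ResourceId=PORTFOLIO_ID,
    TagOptionId=tag_option["TagOptionDetail"]["Id"],
)
```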
Why Incorrect Options are Wrong

A. This is a reactive, not a preventative, solution. AWS Config detects non-compliant resources after they have been created and notifies an administrator, failing to ensure users can only launch approved applications.

B. Kerberos provides authentication for users and services within an EMR cluster (e.g., for Hadoop services). It does not control the provisioning of the cluster itself, its configuration, or AWS resource tagging.

D. This approach is also reactive. While CloudFormation standardizes deployment, relying on AWS Config for enforcement means non-compliant clusters can still be launched, with remediation occurring only after detection.

References

1. AWS Service Catalog Administrator Guide, "What Is AWS Service Catalog?": "AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS... You can control which IT services and versions are available, the configuration of the available services, and permission access by individual, group, department, or cost center." This directly supports the use of Service Catalog for controlling approved configurations and permissions.

2. AWS Service Catalog Administrator Guide, "Launch constraints": "A launch constraint specifies the IAM role that AWS Service Catalog assumes when an end user launches a product. Without a launch constraint, AWS Service Catalog assumes the end user's IAM role for all of the product's AWS resources." This explains how Service Catalog enforces least privilege for different personas.

3. AWS Service Catalog Administrator Guide, "TagOption library": "A TagOption is a key-value pair that allows administrators to... enforce the creation of tags on provisioned products." This confirms its capability to enforce mandatory tagging.

4. AWS Config Developer Guide, "What Is AWS Config?": "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources... Config continuously monitors and records your AWS resource configurations..." This establishes AWS Config as a detective control, which is less suitable than the preventative control required by the question.

5. Amazon EMR Management Guide, "Use Kerberos for authentication with Amazon EMR": "With Amazon EMR, you can provision a Kerberized cluster that integrates with the cluster's Hadoop applications, such as YARN, HDFS, and Hive, to provide authentication between hosts and services in your cluster." This reference clarifies that Kerberos is for in-cluster authentication, not AWS resource provisioning governance.

Question 27

A company operates a fleet of servers on premises and operates a fleet of Amazon EC2 instances in its organization in AWS Organizations. The company's AWS accounts contain hundreds of VPCs. The company wants to connect its AWS accounts to its on-premises network. AWS Site-to-Site VPN connections are already established to a single AWS account. The company wants to control which VPCs can communicate with other VPCs. Which combination of steps will achieve this level of control with the LEAST operational effort? (Choose three.)
Options
A: Create a transit gateway in an AWS account. Share the transit gateway across accounts by using AWS Resource Access Manager (AWS RAM).
B: Configure attachments to all VPCs and VPNs.
C: Set up transit gateway route tables. Associate the VPCs and VPNs with the route tables.
D: Configure VPC peering between the VPCs.
E: Configure attachments between the VPCs and VPNs.
F: Set up route tables on the VPCs and VPNs.
Show Answer
Correct Answer:
Create a transit gateway in an AWS account. Share the transit gateway across accounts by using AWS Resource Access Manager (AWS RAM)., Configure attachments to all VPCs and VPNs., Set up transit gateway route tables. Associate the VPCs and VPNs with the route tables.
Explanation
The most scalable and operationally efficient solution for connecting hundreds of VPCs across multiple accounts to an on-premises network is AWS Transit Gateway.
1. Create and share (A): A transit gateway acts as a central cloud router. Creating it in one account and sharing it across the organization with AWS Resource Access Manager (RAM) avoids building complex connections in each account.
2. Attach resources (B): To route traffic through the transit gateway, each VPC and the Site-to-Site VPN connection must be connected to it via an attachment. This establishes connectivity to the central hub.
3. Control traffic (C): Transit gateway route tables provide granular control over traffic flow. By associating different attachments with specific route tables, you define which VPCs can communicate with each other and with the on-premises network, fulfilling the control requirement.
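A rough boto3 sketch of the three steps, with all IDs and ARNs as placeholders; in practice the transit gateway and RAM share live in one network account while the VPC attachments are created from each member account:

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# 1. Create the hub and share it with the organization (hypothetical org ARN).
tgw = ec2.create_transit_gateway(Description="org-network-hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]
tgw_arn = tgw["TransitGateway"]["TransitGatewayArn"]

ram.create_resource_share(
    name="shared-tgw",
    resourceArns=[tgw_arn],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
)

# 2. Attach a VPC (repeated for every VPC and for the VPN connection).
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# 3. Use a dedicated route table so only associated attachments can reach each other.
rtb = ec2.create_transit_gateway_route_table(TransitGatewayId=tgw_id)
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rtb["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"],
    TransitGatewayAttachmentId=attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"],
)
```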
Why Incorrect Options are Wrong

D. VPC peering creates a full mesh of connections that is unmanageable and operationally expensive for hundreds of VPCs, violating the "least operational effort" requirement.

E. This statement is technically inaccurate. Attachments are configured to connect VPCs and VPNs to a Transit Gateway, not directly between each other.

F. While VPC route tables must be updated to point to the Transit Gateway, the central control for inter-VPC and on-premises traffic is managed by the Transit Gateway route tables, as stated in option C.

References

1. AWS Documentation: AWS Transit Gateway. "A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPCs) and on-premises networks... You can share your transit gateway with other AWS accounts using AWS Resource Access Manager (AWS RAM)." (Source: AWS Transit Gateway User Guide, "What is a transit gateway?")

2. AWS Documentation: Transit gateway attachments. "To use your transit gateway, you must create an attachment for your network resources... You can create the following attachments: VPC... VPN" (Source: AWS Transit Gateway User Guide, "Transit gateway attachments")

3. AWS Documentation: Transit gateway route tables. "A transit gateway has a default route table and can optionally have additional route tables. A route table inside a transit gateway determines the next-hop for the packet... By default, the VPCs and VPN connections are associated with the default transit gateway route table." To implement custom routing and isolation, you create separate route tables and manage associations. (Source: AWS Transit Gateway User Guide, "Routing")

4. AWS Whitepaper: Building a Scalable and Secure Multi-VPC AWS Network Infrastructure. This paper discusses the limitations of VPC peering at scale and presents AWS Transit Gateway as the recommended solution for a hub-and-spoke network topology, highlighting its scalability and centralized routing control. (See section: "AWS Transit Gateway")

Question 28

A company hosts a VPN in an on-premises data center. Employees currently connect to the VPN to access files in their Windows home directories. Recently, there has been a large growth in the number of employees who work remotely. As a result, bandwidth usage for connections into the data center has begun to reach 100% during business hours. The company must design a solution on AWS that will support the growth of the company's remote workforce, reduce the bandwidth usage for connections into the data center, and reduce operational overhead. Which combination of steps will meet these requirements with the LEAST operational overhead? (Select TWO.)
Options
A: Create an AWS Storage Gateway Volume Gateway. Mount a volume from the Volume Gateway to the on-premises file server.
B: Migrate the home directories to Amazon FSx for Windows File Server.
C: Migrate the home directories to Amazon FSx for Lustre.
D: Migrate remote users to AWS Client VPN
E: Create an AWS Direct Connect connection from the on-premises data center to AWS.
Show Answer
Correct Answer:
Migrate the home directories to Amazon FSx for Windows File Server., Migrate remote users to AWS Client VPN
Explanation
The core problem is the on-premises data center's internet bandwidth being saturated by remote employees accessing on-premises Windows file shares. The optimal solution is to move both the file shares and the VPN access point to AWS. Migrating the home directories to Amazon FSx for Windows File Server (B) moves the data to a fully managed, native Windows file system on AWS. This eliminates the need for an on-premises file server. Migrating remote users to AWS Client VPN (D) provides a managed VPN solution that allows users to connect directly to the AWS environment. Combining these two steps means remote users connect via AWS Client VPN to access their files on Amazon FSx, completely bypassing the on-premises data center. This directly reduces data center bandwidth usage, supports growth, and minimizes operational overhead by using managed services.
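A minimal boto3 sketch of the two managed building blocks, assuming placeholder subnet, directory, and certificate identifiers:

```python
import boto3

fsx = boto3.client("fsx")
ec2 = boto3.client("ec2")

# Managed SMB file system for the home directories.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,
    SubnetIds=["subnet-0123456789abcdef0"],
    WindowsConfiguration={
        "ThroughputCapacity": 32,
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD (placeholder)
    },
)

# Managed VPN entry point so remote users terminate in AWS, not the data center.
ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",
    ServerCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/example",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-root"
        },
    }],
    ConnectionLogOptions={"Enabled": False},
)
```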
Why Incorrect Options are Wrong

A. A Volume Gateway provides block storage to an on-premises server; users would still connect to the on-premises data center, not solving the bandwidth issue.

C. Amazon FSx for Lustre is a high-performance file system for workloads like HPC and is not the appropriate service for general-purpose Windows home directories.

E. AWS Direct Connect establishes a dedicated connection between the on-premises data center and AWS but does not solve the ingress bandwidth bottleneck from remote users.

References

1. Amazon FSx for Windows File Server Documentation: "Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. ... Common use cases include home directories, user shares, and application file shares."

Source: AWS Documentation, "What is Amazon FSx for Windows File Server?", Introduction.

2. AWS Client VPN Documentation: "AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client."

Source: AWS Documentation, "What is AWS Client VPN?", Introduction.

3. Comparing Amazon FSx for Windows File Server and Amazon FSx for Lustre: "FSx for Windows File Server is designed for a broad set of Windows-based applications and workloads... FSx for Lustre is designed for speed and is ideal for high-performance computing (HPC), machine learning, and media data processing workflows."

Source: AWS Documentation, "Amazon FSx - FAQs", "When should I use Amazon FSx for Windows File Server vs. Amazon FSx for Lustre?".

4. AWS Storage Gateway (Volume Gateway) Documentation: "A volume gateway represents the family of gateways that support block-based volumes, previously referred to as gateway-cached and gateway-stored volumes. ... You can back up your local data to the volumes in AWS." This confirms it provides block storage to on-premises applications, not a direct file access solution for remote users.

Source: AWS Documentation, "How Volume Gateway works".

Question 29

A company has hundreds of AWS accounts. The company uses an organization in AWS Organizations to manage all the accounts. The company has turned on all features. A finance team has allocated a daily budget for AWS costs. The finance team must receive an email notification if the organization's AWS costs exceed 80% of the allocated budget. A solutions architect needs to implement a solution to track the costs and deliver the notifications. Which solution will meet these requirements?
Options
A: In the organization's management account, use AWS Budgets to create a budget that has a daily period. Add an alert threshold and set the value to 80%. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.
B: In the organization’s management account, set up the organizational view feature for AWS Trusted Advisor. Create an organizational view report for cost optimization. Set an alert threshold of 80%. Configure notification preferences. Add the email addresses of the finance team.
C: Register the organization with AWS Control Tower. Activate the optional cost control (guardrail). Set a control (guardrail) parameter of 80%. Configure control (guardrail) notification preferences. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.
D: Configure the member accounts to save a daily AWS Cost and Usage Report to an Amazon S3 bucket in the organization's management account. Use Amazon EventBridge to schedule a daily Amazon Athena query to calculate the organization’s costs. Configure Athena to send an Amazon CloudWatch alert if the total costs are more than 80% of the allocated budget. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.
Show Answer
Correct Answer:
In the organization's management account, use AWS Budgets to create a budget that has a daily period. Add an alert threshold and set the value to 80%. Use Amazon Simple Notification Service (Amazon SNS) to notify the finance team.
Explanation
AWS Budgets is the designated service for monitoring costs against a specified amount and triggering alerts. By creating a cost budget in the organization's management account, the company can track the aggregated costs of all member accounts. The service allows for setting a daily period, which aligns with the finance team's requirement. Configuring an alert for 80% of the budgeted amount and integrating it with Amazon SNS provides a direct, managed, and efficient mechanism to send the required email notifications to the finance team without building a custom data processing pipeline.
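A minimal boto3 sketch of such a budget, created in the management account; the account ID, daily amount, and SNS topic ARN are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

# Created in the management account so the budget covers all member accounts.
budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "org-daily-cost",
        "BudgetType": "COST",
        "TimeUnit": "DAILY",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111122223333:finance-alerts",
        }],
    }],
)
```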
Why Incorrect Options are Wrong

B. AWS Trusted Advisor provides cost optimization recommendations by identifying unused or idle resources; it does not track spending against a predefined budget threshold for alerting.

C. AWS Control Tower guardrails are for enforcing governance policies and detecting non-compliant resources, not for monitoring and alerting on spending against a specific budget amount.

D. This is an overly complex and expensive solution. It requires building and maintaining a custom data query and alerting pipeline, whereas AWS Budgets provides this functionality as a managed service.

References

1. AWS Budgets User Guide, "Managing your costs with AWS Budgets": This document states, "You can use AWS Budgets to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount." This directly supports the use of AWS Budgets for the required alerting.

2. AWS Budgets User Guide, "Creating a cost budget": This section details the process of setting up a budget, specifying the period (e.g., Daily), and setting the budgeted amount. It also describes how to "Configure alerts" based on actual costs reaching a percentage of the budget.

3. AWS Cost Management User Guide, "Monitoring your usage and costs": This guide explains that the management account in an organization can use AWS Cost Management features, including AWS Budgets, to view the combined costs of all accounts.

4. AWS Trusted Advisor Documentation, "AWS Trusted Advisor check reference": The "Cost Optimization" section lists checks like "Amazon RDS Idle DB Instances" and "Low Utilization Amazon EC2 Instances," confirming its role is to provide recommendations, not budget-based alerting.

5. AWS Control Tower User Guide, "How guardrails work": This document describes guardrails as "pre-packaged governance rules for security, operations, and cost management." Their function is policy enforcement, not dynamic budget tracking and alerting.

Question 30

A company is migrating mobile banking applications to run on Amazon EC2 instances in a VPC. Backend service applications run in an on-premises data center. The data center has an AWS Direct Connect connection into AWS. The applications that run in the VPC need to resolve DNS requests to an on-premises Active Directory domain that runs in the data center. Which solution will meet these requirements with the LEAST administrative overhead?
Options
A: Provision a set of EC2 instances across two Availability Zones in the VPC as caching DNS servers to resolve DNS queries from the application servers within the VPC.
B: Provision an Amazon Route 53 private hosted zone. Configure NS records that point to on- premises DNS servers.
C: Create DNS endpoints by using Amazon Route 53 Resolver. Add conditional forwarding rules to resolve DNS namespaces between the on-premises data center and the VPC.
D: Provision a new Active Directory domain controller in the VPC with a bidirectional trust between this new domain and the on-premises Active Directory domain.
Show Answer
Correct Answer:
Create DNS endpoints by using Amazon Route 53 Resolver. Add conditional forwarding rules to resolve DNS namespaces between the on-premises data center and the VPC.
Explanation
Amazon Route 53 Resolver is the AWS managed service designed specifically for hybrid cloud DNS resolution. By creating an outbound endpoint in the VPC, you establish a path for DNS queries to leave the VPC. A conditional forwarding rule is then configured to direct queries for the specific on-premises domain (e.g., corp.example.com) to the IP addresses of the on-premises Active Directory DNS servers via the Direct Connect connection. This solution is fully managed, highly available, and scalable by AWS, thereby meeting the requirement for the least administrative overhead.
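A minimal boto3 sketch of the outbound endpoint and forwarding rule, with placeholder subnet, security group, VPC, and on-premises DNS server values:

```python
import boto3

r53r = boto3.client("route53resolver")

# Outbound endpoint in two subnets of the VPC.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-1",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[
        {"SubnetId": "subnet-0123456789abcdef0"},
        {"SubnetId": "subnet-0fedcba9876543210"},
    ],
)

# Conditional forwarding rule for the on-premises Active Directory domain.
rule = r53r.create_resolver_rule(
    CreatorRequestId="forward-corp",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.0.0.10", "Port": 53}],
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC that hosts the application servers.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```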
Why Incorrect Options are Wrong

A. Provisioning and managing DNS servers on EC2 instances requires manual setup, patching, scaling, and high-availability configuration, which constitutes significant administrative overhead.

B. An Amazon Route 53 private hosted zone is used when Route 53 is the authoritative DNS service for a domain within a VPC. It is not used for forwarding queries to an external resolver.

D. Deploying new Active Directory domain controllers and configuring a trust relationship is a complex infrastructure task that is excessive for solving only a DNS resolution requirement.

References

1. AWS Documentation - What is Amazon Route 53 Resolver?: "With Resolver, you can set up rules to conditionally forward requests to DNS resolvers on your remote network... This functionality lets you resolve DNS names for resources in your on-premises data center." This directly supports the use of conditional forwarding for the described scenario.

Source: AWS Documentation, Amazon Route 53 Developer Guide, "What is Amazon Route 53 Resolver?".

2. AWS Documentation - Resolving DNS queries between VPCs and your network: "To forward DNS queries from your VPCs to your network... you create a Route 53 Resolver outbound endpoint and a forwarding rule." This outlines the exact components described in the correct answer.

Source: AWS Documentation, Amazon Route 53 Developer Guide, "Resolving DNS queries between VPCs and your network", Section: "Forwarding outbound DNS queries to your network".

3. AWS Documentation - Simplifying DNS management in a hybrid cloud with Amazon Route 53 Resolver: This whitepaper explains the architecture: "For outbound DNS queries (from VPC to on-premises), you create a Route 53 Resolver outbound endpoint... You then create a rule that specifies the domain name for the DNS queries that you want to forward... and the IP addresses of the DNS resolvers in your on-premises network."

Source: AWS Whitepaper, Simplifying DNS management in a hybrid cloud with Amazon Route 53 Resolver, Page 5.

4. AWS Documentation - Working with private hosted zones: "A private hosted zone is a container that holds information about how you want to route traffic for a domain and its subdomains within one or more Amazon Virtual Private Clouds (Amazon VPCs)." This confirms that its purpose is authoritative resolution within a VPC, not forwarding.

Source: AWS Documentation, Amazon Route 53 Developer Guide, "Working with private hosted zones".

Question 31

A company is deploying a distributed in-memory database on a fleet of Amazon EC2 instances. The fleet consists of a primary node and eight worker nodes. The primary node is responsible for monitoring cluster health, accepting user requests, distributing user requests to worker nodes, and sending an aggregate response back to a client. Worker nodes communicate with each other to replicate data partitions. The company requires the lowest possible networking latency to achieve maximum performance. Which solution will meet these requirements?
Options
A: Launch memory optimized EC2 instances in a partition placement group.
B: Launch compute optimized EC2 instances in a partition placement group.
C: Launch memory optimized EC2 instances in a cluster placement group.
D: Launch compute optimized EC2 instances in a spread placement group.
Show Answer
Correct Answer:
Launch memory optimized EC2 instances in a cluster placement group.
Explanation
The scenario describes a distributed in-memory database, a workload that processes large datasets in memory and requires frequent, low-latency communication between its nodes. The most suitable EC2 instance type for this is memory-optimized, as they are specifically designed for such workloads. To meet the core requirement of the "lowest possible networking latency," a cluster placement group is the correct choice. Cluster placement groups co-locate instances on the same high-bisection bandwidth rack within a single Availability Zone, which is the ideal strategy for tightly-coupled, node-to-node communication that demands minimal network delay and high throughput.
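A minimal boto3 sketch, assuming a placeholder AMI and using the r6i memory-optimized family purely for illustration:

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster strategy packs instances onto closely located hardware for lowest latency.
ec2.create_placement_group(GroupName="inmem-db", Strategy="cluster")

# Launch the primary node and eight workers into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="r6i.4xlarge",
    MinCount=9,
    MaxCount=9,
    Placement={"GroupName": "inmem-db"},
)
```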
Why Incorrect Options are Wrong

A. A partition placement group spreads instances across different racks to reduce correlated failures, which does not provide the lowest possible network latency required for this workload.

B. This option is incorrect because compute-optimized instances are not the best fit for an in-memory database, and a partition placement group prioritizes fault tolerance over lowest latency.

D. A spread placement group places each instance on distinct hardware, maximizing separation for high availability, which is the opposite of the low-latency, co-location requirement.

References

1. AWS Documentation, EC2 User Guide for Linux Instances, "Placement groups": It states, "A cluster placement group is a logical grouping of instances within a single Availability Zone... This strategy enables workloads to achieve the low-latency, high-throughput network performance required for tightly-coupled, node-to-node communication typical of HPC applications." This directly supports using a cluster placement group for the lowest latency.

2. AWS Documentation, "Amazon EC2 Instance Types": The documentation for Memory Optimized instances (e.g., R, X families) states, "Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory." This validates the choice of memory-optimized instances for an in-memory database.

3. AWS Well-Architected Framework, Performance Efficiency Pillar whitepaper (July 2023), Page 29, "PERF 5: How do you select your compute solution?": Under the "Networking characteristics" section, the whitepaper advises, "For workloads that require low network latency, high network throughput, or both, between nodes, you can use cluster placement groups." This confirms the best practice for the scenario's performance requirement.

Question 32

A company needs to aggregate Amazon CloudWatch logs from its AWS accounts into one central logging account. The collected logs must remain in the AWS Region of creation. The central logging account will then process the logs, normalize the logs into standard output format, and stream the output logs to a security tool for more processing. A solutions architect must design a solution that can handle a large volume of logging data that needs to be ingested. Less logging will occur outside normal business hours than during normal business hours. The logging solution must scale with the anticipated load. The solutions architect has decided to use an AWS Control Tower design to handle the multi-account logging process. Which combination of steps should the solutions architect take to meet the requirements? (Select THREE.)
Options
A: Create a destination Amazon Kinesis data stream in the central logging account.
B: Create a destination Amazon Simple Queue Service (Amazon SQS) queue in the central logging account.
C: Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to the Amazon Kinesis data stream. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a subscription filter for each log group to send data to the Kinesis data stream.
D: Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to the Amazon Simple Queue Service (Amazon SQS) queue. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a single subscription filter for all log groups to send data to the SQS queue.
E: Create an AWS Lambda function. Program the Lambda function to normalize the logs in the central logging account and to write the logs to the security tool.
F: Create an AWS Lambda function. Program the Lambda function to normalize the logs in the member accounts and to write the logs to the security tool.
Show Answer
Correct Answer:
Create a destination Amazon Kinesis data stream in the central logging account., Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to the Amazon Kinesis data stream. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a subscription filter for each log group to send data to the Kinesis data stream., Create an AWS Lambda function. Program the Lambda function to normalize the logs in the central logging account and to write the logs to the security tool.
Explanation
This solution outlines a standard and scalable pattern for centralizing CloudWatch logs. Amazon Kinesis Data Streams is the appropriate service for ingesting a large, variable volume of real-time streaming data, meeting the scalability requirement. The cross-account log delivery is achieved by creating a CloudWatch Logs subscription filter in each source account's log group. This requires an IAM role in the destination (central) account that the CloudWatch Logs service from the source accounts can assume to gain kinesis:PutRecord permissions. Finally, an AWS Lambda function in the central account serves as a scalable, serverless processor. It is triggered by the Kinesis stream to normalize the log data and forward it to the external security tool, fulfilling the processing requirement.
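One common way to wire this up is with a CloudWatch Logs destination in the central account that fronts the Kinesis stream; the sketch below assumes placeholder account IDs, ARNs, and log group names:

```python
import boto3, json

# In the central logging account: wrap the Kinesis stream in a CloudWatch Logs
# destination that the member accounts are allowed to target.
central_logs = boto3.client("logs")
destination = central_logs.put_destination(
    destinationName="org-log-destination",
    targetArn="arn:aws:kinesis:us-east-1:111122223333:stream/central-logs",
    roleArn="arn:aws:iam::111122223333:role/CWLtoKinesisRole",
)
central_logs.put_destination_policy(
    destinationName="org-log-destination",
    accessPolicy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "444455556666"},  # member account (placeholder)
            "Action": "logs:PutSubscriptionFilter",
            "Resource": destination["destination"]["arn"],
        }],
    }),
)

# In each member account: one subscription filter per log group pointing at
# the destination in the central account.
member_logs = boto3.client("logs")
member_logs.put_subscription_filter(
    logGroupName="/app/simulations",
    filterName="to-central",
    filterPattern="",  # forward every log event
    destinationArn=destination["destination"]["arn"],
)
```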
Why Incorrect Options are Wrong

B: Amazon SQS is a message queuing service, not a real-time data streaming service. Kinesis Data Streams is the purpose-built service for this high-throughput, ordered streaming use case.

D: This is incorrect because CloudWatch Logs subscription filters cannot send data directly to an SQS queue. Furthermore, a subscription filter must be created for each log group individually.

F: The requirement is to process and normalize logs in the central logging account. This option incorrectly proposes performing this function in the individual member accounts.

References

1. AWS Documentation, Amazon CloudWatch Logs User Guide: "Real-time processing of log data with subscriptions." This document explicitly states, "You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream... for custom processing, analysis, or loading to other systems." It details the use of Kinesis as a destination.

2. AWS Documentation, Amazon CloudWatch Logs User Guide: "Cross-account log data sharing with subscriptions." This section provides a step-by-step guide for the required setup. It specifies creating a destination Kinesis stream in the receiving account (Step 1) and creating an IAM role in the receiving account that grants the sending account permission to put data into the stream (Step 2).

3. AWS Documentation, AWS Lambda Developer Guide: "Using AWS Lambda with Amazon Kinesis." This guide describes the common event-driven pattern where "Lambda polls the stream periodically... and when it detects new records, it invokes your Lambda function by passing the new records as a payload." This confirms Lambda's role as a scalable processor for Kinesis streams.

4. AWS Documentation, Amazon Kinesis Data Streams Developer Guide: "What Is Amazon Kinesis Data Streams?" The introduction states, "Amazon Kinesis Data Streams is a scalable and durable real-time data streaming service... You can use Kinesis Data Streams for rapid and continuous data intake and aggregation." This validates its suitability for the high-volume, variable load requirement.

Question 33

A company has an organization in AWS Organizations that includes a separate AWS account for each of the company's departments. Application teams from different departments develop and deploy solutions independently. The company wants to reduce compute costs and manage costs appropriately across departments. The company also wants to improve visibility into billing for individual departments. The company does not want to lose operational flexibility when the company selects compute resources. Which solution will meet these requirements?
Options
A: Use AWS Budgets for each department. Use Tag Editor to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
B: Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use SCPs to apply tags to appropriate resources. Purchase EC2 Instance Savings Plans.
C: Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use Tag Editor to apply tags to appropriate resources. Purchase Compute Savings Plans.
D: Use AWS Budgets for each department. Use SCPs to apply tags to appropriate resources. Purchase Compute Savings Plans.
Show Answer
Correct Answer:
Configure AWS Organizations to use consolidated billing. Implement a tagging strategy that identifies departments. Use Tag Editor to apply tags to appropriate resources. Purchase Compute Savings Plans.
Explanation
This solution correctly addresses all requirements. AWS Organizations with consolidated billing is the standard for managing costs across multiple accounts. A tagging strategy, implemented using tools like Tag Editor, is essential for allocating costs and improving billing visibility for individual departments. The most critical requirement is reducing compute costs without losing operational flexibility. Compute Savings Plans are ideal for this, as they automatically apply to EC2, Fargate, and Lambda usage across any instance family, size, OS, or region. This provides significant savings while allowing application teams the freedom to choose the most appropriate compute resources for their needs.
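A minimal sketch of applying a department cost-allocation tag in bulk with the Resource Groups Tagging API, which is the programmatic equivalent of what Tag Editor does in the console; the resource ARNs and tag value are placeholders:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Bulk-apply a cost-allocation tag to existing resources so departmental
# costs show up in consolidated billing reports.
tagging.tag_resources(
    ResourceARNList=[
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0",
        "arn:aws:lambda:us-east-1:111122223333:function:report-generator",
    ],
    Tags={"department": "payroll"},
)
```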
Why Incorrect Options are Wrong

A. EC2 Instance Savings Plans are too restrictive; they commit to a specific instance family and region, which violates the requirement for operational flexibility.

B. This option is incorrect for two reasons: Service Control Policies (SCPs) enforce tagging policies but do not apply tags, and EC2 Instance Savings Plans lack the required flexibility.

D. Service Control Policies (SCPs) are used to enforce permissions and constraints (e.g., requiring a tag), not to apply tags to resources directly.


References

1. AWS Savings Plans User Guide, "Overview of Savings Plans": This document explicitly contrasts the two types of Savings Plans. It states, "Compute Savings Plans provide the most flexibility... These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy... EC2 Instance Savings Plans... provide the lowest prices... in exchange for commitment to a specific instance family in a chosen Region." This directly supports the choice of Compute Savings Plans for flexibility.

2. AWS Billing and Cost Management User Guide, "Using cost allocation tags": This guide explains, "After you activate cost allocation tags, AWS uses the tags to organize your resource costs on your cost allocation report, making it easier for you to categorize and track your AWS costs." This validates the use of a tagging strategy for departmental cost visibility.

3. AWS Resource Groups and Tag Editor User Guide, "What Is Tag Editor?": This documentation describes the tool's function: "With Tag Editor, you can add, edit, or delete tags for multiple AWS resources at once." This confirms Tag Editor is an appropriate tool for implementing the tagging strategy.

4. AWS Organizations User Guide, "Service control policies (SCPs)": The documentation clarifies the function of SCPs: "SCPs are a type of organization policy that you can use to manage permissions in your organization... SCPs don't grant permissions." This confirms that SCPs cannot be used to apply tags, only to enforce policies that might require them.

5. AWS Well-Architected Framework, "Cost Optimization Pillar" (Whitepaper, Page 21): The whitepaper discusses purchasing options, stating, "Compute Savings Plans provide the most flexibility and help to reduce your costs... This automatically applies to any EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy." This reinforces that Compute Savings Plans are the best practice for flexible cost reduction.

Question 34

A large payroll company recently merged with a small staffing company. The unified company now has multiple business units, each with its own existing AWS account. A solutions architect must ensure that the company can centrally manage the billing and access policies for all the AWS accounts. The solutions architect configures AWS Organizations by sending an invitation to all member accounts of the company from a centralized management account. What should the solutions architect do next to meet these requirements?
Options
A: Create the OrganizationAccountAccess IAM group in each member account. Include the necessary IAM roles for each administrator.
B: Create the OrganizationAccountAccessPolicy IAM policy in each member account. Connect the member accounts to the management account by using cross-account access.
C: Create the OrganizationAccountAccessRole IAM role in each member account. Grant permission to the management account to assume the IAM role.
D: Create the OrganizationAccountAccessRole IAM role in the management account. Attach the AdministratorAccess AWS managed policy to the IAM role. Assign the IAM role to the administrators in each member account.
Show Answer
Correct Answer:
Create the OrganizationAccountAccessRole IAM role in each member account. Grant permission to the management account to assume the IAM role.
Explanation
To enable centralized management from the AWS Organizations management account, an IAM role must exist in each member account that the management account can assume. When an existing AWS account accepts an invitation to join an organization, this role, named OrganizationAccountAccessRole by convention, is not created automatically. The solutions architect must manually create this IAM role in each invited member account. The role's trust policy must be configured to explicitly grant the management account's ID the permission to assume the role, thereby enabling cross-account administrative access.
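A minimal boto3 sketch of creating the role in a member account, assuming a placeholder management account ID:

```python
import boto3, json

iam = boto3.client("iam")  # run with credentials for the member account

MANAGEMENT_ACCOUNT_ID = "111122223333"  # placeholder

# Trust policy that lets principals in the management account assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MANAGEMENT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="OrganizationAccountAccessRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="OrganizationAccountAccessRole",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```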
Why Incorrect Options are Wrong

A. IAM groups are containers for IAM users, not roles. Creating a group in a member account does not establish a trust relationship for the management account to access it.

B. An IAM policy defines permissions but does not grant them on its own. It must be attached to an identity (like a role) to be effective for cross-account access.

D. The OrganizationAccountAccessRole must be created in the account that needs to be managed (the member account), not in the account that is performing the management (the management account).

References

1. AWS Organizations User Guide: In the section on "Accessing and administering the member accounts in your organization," the guide specifies the process for invited accounts. It states, "When you invite an existing account to join your organization, AWS does not automatically create the OrganizationAccountAccessRole IAM role in the account. You must manually create the role..." This document details the steps, which include creating the role in the member account and establishing a trust policy for the management account. (See: AWS Organizations User Guide, section "Creating the OrganizationAccountAccessRole in an invited member account").

2. AWS Identity and Access Management (IAM) User Guide: The guide explains the fundamental mechanism for cross-account access. "You can use roles to delegate access to users or services that normally don't have access to your AWS resources... In this scenario, the account that owns the resources is the trusting account [member account] and the account that contains the users is the trusted account [management account]." This confirms the role must be in the member account. (See: AWS IAM User Guide, section "How to use an IAM role to delegate access across AWS accounts").

3. AWS Whitepaper - AWS Multiple Account Security Strategy: This whitepaper discusses best practices for multi-account environments. It reinforces the use of cross-account roles for centralized access and administration, stating, "To enable cross-account access, you create roles in the accounts you want to access (member accounts) and grant IAM principals in the accounts you want to grant access from (management or delegated administrator accounts) permissions to assume those roles." (See: AWS Multiple Account Security Strategy, section "Centralized access management").

Question 35

A team of data scientists is using Amazon SageMaker instances and SageMaker APIs to train machine learning (ML) models. The SageMaker instances are deployed in a VPC that does not have access to or from the internet. Datasets for ML model training are stored in an Amazon S3 bucket. Interface VPC endpoints provide access to Amazon S3 and the SageMaker APIs. Occasionally, the data scientists require access to the Python Package Index (PyPI) repository to update Python packages that they use as part of their workflow. A solutions architect must provide access to the PyPI repository while ensuring that the SageMaker instances remain isolated from the internet. Which solution will meet these requirements?
Options
A: Create an AWS CodeCommit repository for each package that the data scientists need to access. Configure code synchronization between the PyPI repository and the CodeCommit repository. Create a VPC endpoint for CodeCommit.
B: Create a NAT gateway in the VPC. Configure VPC routes to allow access to the internet with a network ACL that allows access to only the PyPI repository endpoint.
C: Create a NAT instance in the VPC. Configure VPC routes to allow access to the internet. Configure SageMaker notebook instance firewall rules that allow access to only the PyPI repository endpoint.
D: Create an AWS CodeArtifact domain and repository. Add an external connection for public:pypi to the CodeArtifact repository. Configure the Python client to use the CodeArtifact repository. Create a VPC endpoint for CodeArtifact.
Show Answer
Correct Answer:
Create an AWS CodeArtifact domain and repository. Add an external connection for public:pypi to the CodeArtifact repository. Configure the Python client to use the CodeArtifact repository. Create a VPC endpoint for CodeArtifact.
Explanation
AWS CodeArtifact is a fully managed artifact repository service that makes it easy for organizations to securely store, publish, and share software packages. It can be configured with an external connection to public repositories like the Python Package Index (PyPI). By creating a CodeArtifact repository and connecting it to PyPI, you create a private, managed proxy for the required packages. Access to this CodeArtifact repository can be secured within the VPC using an interface VPC endpoint. This allows the SageMaker instances to pull packages from your CodeArtifact repository without requiring any route to the public internet, thus satisfying the strict requirement that the instances remain isolated. The Python client (pip) on the instances is then configured to use the CodeArtifact repository endpoint instead of the public PyPI URL.
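A minimal boto3 sketch of the CodeArtifact setup and the resulting pip index URL; the domain and repository names are placeholders, and the URL construction mirrors what `aws codeartifact login --tool pip` would configure:

```python
import boto3

ca = boto3.client("codeartifact")

# Private, managed proxy for PyPI.
ca.create_domain(domain="ml-team")
ca.create_repository(domain="ml-team", repository="python-packages")
ca.associate_external_connection(
    domain="ml-team",
    repository="python-packages",
    externalConnection="public:pypi",
)

# The SageMaker instances point pip at the repository's private endpoint,
# reached through the CodeArtifact interface VPC endpoint.
endpoint = ca.get_repository_endpoint(
    domain="ml-team", repository="python-packages", format="pypi"
)
token = ca.get_authorization_token(domain="ml-team")["authorizationToken"]
index_url = endpoint["repositoryEndpoint"].replace(
    "https://", f"https://aws:{token}@"
) + "simple/"
print("pip index URL:", index_url)
```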
Why Incorrect Options are Wrong

A: AWS CodeCommit is a source control service, not a package repository. Manually synchronizing packages would be operationally complex and is not the intended use of the service.

B: A NAT gateway provides general outbound internet access, which directly violates the requirement that the SageMaker instances remain isolated from the internet.

C: A NAT instance, like a NAT gateway, provides a route to the public internet, which contradicts the core security requirement of keeping the instances isolated.

References

1. AWS CodeArtifact User Guide, "Connect a CodeArtifact repository to a public repository": This document explains how to configure a CodeArtifact repository with an external connection to public repositories such as PyPI. It states, "When you connect a CodeArtifact repository to a public repository... CodeArtifact can fetch packages from the public repository on demand."

2. AWS CodeArtifact User Guide, "Using CodeArtifact with VPC endpoints": This section details how to use interface VPC endpoints to connect directly to CodeArtifact from within a VPC without traversing the internet. It notes, "You can improve the security of your build and deployment processes by configuring AWS PrivateLink for CodeArtifact. By creating interface VPC endpoints, you can connect to CodeArtifact from your VPC without sending traffic over the public internet."

3. AWS Documentation, "VPC endpoints": This documentation clarifies the purpose of interface VPC endpoints. "An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported AWS service or a VPC endpoint service." This confirms that traffic stays on the AWS network.

4. Amazon SageMaker Developer Guide, "Connect to SageMaker Through a VPC Interface Endpoint": This guide describes the pattern of using VPC endpoints to allow SageMaker resources in a private VPC to access AWS services without internet access, which is the same architectural pattern proposed in the correct answer for accessing CodeArtifact.
