Boost your SAA-C03 exam preparation with our free, accurate questions, updated for 2025.
Cert Empire is dedicated to offering the best, most current exam questions for those studying for the AWS SAA-C03 exam. To help students prepare, we’ve made some of our SAA-C03 exam prep resources free. You can get plenty of practice with our Free SAA-C03 Practice Test.
Options
A:
Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one
Availability Zone and one On-Demand Instance in a second Availability Zone.
B:
Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two On-Demand Instances in a second Availability Zone.
C:
Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one
Availability Zone.
D:
Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two Spot Instances in a second Availability Zone.
Correct Answer:
Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one
Availability Zone and one On-Demand Instance in a second Availability Zone.
Explanation
The primary requirements are to maintain a minimum of two running instances and to ensure high availability and fault tolerance. Setting the Auto Scaling group's minimum capacity to two directly satisfies the instance count requirement. To achieve high availability, the architecture must withstand the failure of a single component, such as an Availability Zone (AZ). By configuring the Auto Scaling group to launch instances across two separate AZs, the application is protected from an AZ-level outage. If one AZ fails, the instance in the other AZ continues to serve traffic, and the Auto Scaling group will automatically launch a replacement instance in a healthy AZ to meet the minimum of two.
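For reference, a minimal boto3 sketch of such a group (the group name, launch template, and subnet IDs are placeholders, not values from the question): MinSize of 2 enforces the instance count, and a VPCZoneIdentifier spanning subnets in two Availability Zones provides the AZ distribution.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical names and subnet IDs for illustration only.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,                      # never fewer than two running instances
    MaxSize=4,
    DesiredCapacity=2,
    # Subnets in two different Availability Zones for fault tolerance
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
)
```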
References
1. AWS Auto Scaling User Guide, Section: "Distribute instances across Availability Zones". The documentation states, "By launching your instances in separate Availability Zones, you can protect your applications from the failure of a single location... When one Availability Zone becomes unhealthy or unavailable, Amazon EC2 Auto Scaling launches new instances in an unaffected Availability Zone."
2. AWS Auto Scaling User Guide, Section: "Set scaling limits for your Auto Scaling group". This section explains the function of minimum capacity: "The minimum capacity is the minimum number of instances that you want in your Auto Scaling group." This directly supports setting the minimum to two to meet the requirement.
3. AWS Well-Architected Framework - Reliability Pillar (July 2023), Page 23, Section: "Deploy the workload to multiple locations". The framework advises, "For a regional service, you can increase availability by deploying the workload to multiple AZs within an AWS Region... If one AZ fails, the workload in other AZs can continue to operate."
4. Amazon EC2 User Guide, Section: "Instance purchasing options". This guide describes On-Demand Instances as suitable for "applications with short-term, spiky, or unpredictable workloads that cannot be interrupted," which aligns with the needs of a production application, unlike Spot Instances.
Options
A:
Create dedicated S3 access points and access point policies for each application.
B:
Create an S3 Batch Operations job to set the ACL permissions for each object in the S3 bucket
C:
Replicate the objects in the S3 bucket to new S3 buckets for each application. Create replication
rules by prefix
D:
Replicate the objects in the S3 bucket to new S3 buckets for each application. Create dedicated S3
access points for each application.
Correct Answer:
Create dedicated S3 access points and access point policies for each application.
Explanation
Amazon S3 Access Points are the ideal solution for this scenario. They are unique hostnames that you can create to enforce distinct permissions for any request made to a shared S3 bucket. By creating a dedicated access point for each application, you can attach a specific access point policy that restricts access to that application's unique prefix. This approach provides granular, prefix-level control within a single bucket, directly meeting the requirements. It significantly simplifies permissions management compared to a single, complex bucket policy and avoids the high cost and management complexity of data duplication, thus having the least operational overhead.
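A minimal boto3 sketch of the pattern, assuming a hypothetical account ID, bucket, application role, and prefix: one access point per application, each with a policy scoped to that application's prefix.

```python
import json
import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"          # placeholder account ID

# One dedicated access point per application (name and bucket are examples).
s3control.create_access_point(
    AccountId=account_id,
    Name="billing-app-ap",
    Bucket="shared-data-bucket",
)

# Access point policy that limits the application to its own prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/billing-app-role"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/billing-app-ap/object/billing/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id,
    Name="billing-app-ap",
    Policy=json.dumps(policy),
)
```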
References
1. Amazon S3 User Guide, "Managing data access with Amazon S3 access points": "Amazon S3 access points are named network endpoints that are attached to buckets that you can use to perform S3 object operations... Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point." This document explicitly details how access points simplify managing access for shared datasets.
2. Amazon S3 User Guide, "Should I use a bucket policy or an access point policy?": "For a shared dataset with hundreds of applications, creating and managing a single bucket policy can be challenging. With S3 Access Points, you can create and manage application-specific policies without having to change the bucket policy." This directly supports using access points for multiple applications accessing a single bucket.
3. Amazon S3 User Guide, "Access control list (ACL) overview": "We recommend using S3 bucket policies or IAM policies for access control. Amazon S3 ACLs is a legacy access control mechanism...". This confirms that using ACLs (as suggested in option B) is not the recommended best practice.
4. Amazon S3 User Guide, "Replication": While replication is a powerful feature for data redundancy and geographic distribution, using it for access control as suggested in options C and D is an anti-pattern that increases cost and operational complexity, contradicting the question's core requirement.
Options
A:
Purchase an EC2 Instance Savings Plan for Amazon EC2 and SageMaker.
B:
Purchase a Compute Savings Plan for Amazon EC2, Lambda, and SageMaker.
C:
Purchase a SageMaker Savings Plan
D:
Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2
E:
Purchase an EC2 Instance Savings Plan for Amazon EC2 and Fargate
Correct Answer:
Purchase a SageMaker Savings Plan. Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2.
Explanation
The goal is to cover four distinct compute services (EC2, Lambda, Fargate, SageMaker) with the fewest possible savings plans.
A Compute Savings Plan is the most flexible option, providing discounts on Amazon EC2, AWS Fargate, and AWS Lambda usage. This single plan efficiently covers three of the four required services.
Amazon SageMaker usage is not covered by Compute Savings Plans. To gain savings on SageMaker, a dedicated SageMaker Savings Plan must be purchased. This plan applies specifically to SageMaker ML instance usage.
Therefore, the combination of a Compute Savings Plan (for EC2, Fargate, Lambda) and a SageMaker Savings Plan (for SageMaker) covers all specified services with only two plans, meeting the requirements.
References
1. AWS Savings Plans User Guide, "What are Savings Plans?": This official guide provides a comparison table that explicitly states what each plan covers.
Compute Savings Plans apply to: "EC2 instances across regions, Fargate, and Lambda".
SageMaker Savings Plans apply to: "SageMaker instance usage".
EC2 Instance Savings Plans apply to: "EC2 instance family in a region".
This directly supports that a Compute SP covers EC2, Fargate, and Lambda, while a separate SageMaker SP is needed for SageMaker.
2. Amazon SageMaker Pricing, "SageMaker Savings Plans" section: "Amazon SageMaker Savings Plans offer a flexible, usage-based pricing model for Amazon SageMaker... These plans automatically apply to eligible SageMaker ML instance usage including SageMaker Studio notebooks, SageMaker On-Demand notebooks, SageMaker Processing, SageMaker Data Wrangler, SageMaker Training, SageMaker Real-Time Inference, and SageMaker Batch Transform." This confirms SageMaker requires its own dedicated savings plan.
3. AWS Compute Blog, "Introducing Compute Savings Plans": "Compute Savings Plans are a new and flexible pricing model that provide savings up to 66% on your AWS compute usage. These plans automatically apply to your Amazon EC2 instances, and your AWS Fargate and AWS Lambda usage." This source confirms the services covered by a Compute Savings Plan, notably excluding SageMaker.
Options
A:
Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a
new Lambda function to transform the order data to the format that each validation step requires
and to publish the messages to the appropriate SQS queues. Subscribe each validation step Lambda
function to its corresponding SQS queue.
B:
Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation step
Lambda functions to the SNS topic. Use message body filtering to send only the required data to each
subscribed Lambda function.
C:
Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure
the input transformer to send only the required data to each target validation step Lambda function.
D:
Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to
subscribe to the SQS queue and to transform the order data to the format that each validation step
requires. Use the new Lambda function to perform synchronous invocations of the validation step
Lambda functions in parallel on separate threads.
Correct Answer:
Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure
the input transformer to send only the required data to each target validation step Lambda function.
Explanation
Amazon EventBridge is designed for building loosely coupled, event-driven architectures. An EventBridge event bus can receive the initial order event. You can then create a separate rule for each validation step. Each rule can filter events (if necessary) and route them to the appropriate target Lambda function. The key feature that meets the requirement is the Input Transformer. This feature allows you to customize the event payload sent to the target, extracting and reshaping only the necessary fields from the original order event. This ensures each validation Lambda receives only the subset of data it requires, adhering to the principle of least privilege while maintaining a decoupled design.
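A minimal boto3 sketch of one such rule with an input transformer (the bus name, event pattern, field paths, and Lambda ARN are illustrative assumptions, not part of the question):

```python
import json
import boto3

events = boto3.client("events")

# Rule on a custom bus that matches order events.
events.put_rule(
    Name="address-validation-rule",
    EventBusName="orders-bus",
    EventPattern=json.dumps({"source": ["orders.service"], "detail-type": ["OrderPlaced"]}),
)

# Input transformer passes only the fields this validation step needs.
events.put_targets(
    Rule="address-validation-rule",
    EventBusName="orders-bus",
    Targets=[{
        "Id": "address-validation-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-address",
        "InputTransformer": {
            "InputPathsMap": {
                "orderId": "$.detail.orderId",
                "address": "$.detail.shipping.address",
            },
            "InputTemplate": '{"orderId": <orderId>, "address": <address>}',
        },
    }],
)
```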
References
1. Amazon EventBridge User Guide, "Transforming event content with input transformation": This document explicitly describes the Input Transformer feature. It states, "You can use input transformation to customize the text from an event before you send it to the target of a rule... you can create a custom payload that includes only the information you want to pass to the target." This directly supports the chosen answer (C).
2. AWS Lambda Developer Guide, "Invoking Lambda functions": This guide details invocation types. For synchronous invocation (RequestResponse), it states, "When you invoke a function synchronously, Lambda runs the function and waits for a response." This waiting process creates a dependency, or tight coupling, which is contrary to the question's requirements and makes option (D) incorrect.
3. Amazon Simple Notification Service Developer Guide, "Amazon SNS message filtering": This guide explains that filter policies are applied to message attributes. It states, "By default, a subscription receives every message published to the topic. To receive a subset of the messages, a subscriber must assign a filter policy to the subscription." The examples clearly show policies matching against key-value pairs in the attributes, not the message body, making option (B) incorrect.
4. AWS Well-Architected Framework, "Decouple components": This design principle, part of the Reliability Pillar, advocates for architectures where components are loosely coupled. It states, "The failure of a single component should not cascade to other components." The synchronous invocation in option (D) and the central transformer in option (A) create tighter coupling than the event-routing pattern of EventBridge.
Options
A:
Use third-party backup software with an AWS Storage Gateway tape gateway virtual tape library.
B:
Use AWS Backup to configure and monitor all backups for the services in use
C:
Use AWS Config to set lifecycle management to take snapshots of all data sources on a schedule.
D:
Use AWS Systems Manager State Manager to manage the configuration and monitoring of backup
tasks.
Correct Answer:
Use AWS Backup to configure and monitor all backups for the services in use
Explanation
AWS Backup is a fully managed, policy-based service that centralizes and automates data protection across AWS services. It is the ideal native solution for this scenario as it supports Amazon EC2 instances, Amazon RDS databases, and Amazon DynamoDB tables. By creating backup plans, the university can define backup schedules, retention policies, and lifecycle rules from a single console. This eliminates the need for custom scripts and provides a centralized, automated way to manage and monitor backups, directly fulfilling the university's requirements.
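A minimal boto3 sketch of a backup plan plus a tag-based resource assignment (the plan name, schedule, vault, role ARN, and tag are illustrative assumptions):

```python
import boto3

backup = boto3.client("backup")

# Policy-based plan: schedule and retention defined once, centrally.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "university-nightly-backups",
        "Rules": [{
            "RuleName": "nightly",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",   # 05:00 UTC daily
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }
)

# Assign EC2, RDS, and DynamoDB resources to the plan by tag.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }],
    },
)
```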
References
1. AWS Backup Developer Guide: "What is AWS Backup?" - This section introduces AWS Backup as a "fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud and on premises." It explicitly lists Amazon EC2, Amazon RDS, and Amazon DynamoDB as supported services.
Source: AWS Backup Developer Guide, "What is AWS Backup?".
2. AWS Backup Product Page: "AWS Backup Features" - The documentation highlights "Centralized backup management" and "Policy-based backup" as key features, allowing users to "configure backup policies and monitor backup activity for your AWS resources in one place."
Source: AWS Backup official product page, "Features" section.
3. AWS Config Developer Guide: "What Is AWS Config?" - This guide states, "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources." This confirms its purpose is configuration monitoring, not backup execution.
Source: AWS Config Developer Guide, "What Is AWS Config?".
4. AWS Systems Manager User Guide: "What is AWS Systems Manager?" - The documentation describes Systems Manager as the "operations hub for your AWS applications and resources," focusing on tasks like patch management and configuration management, not as a centralized data backup service.
Source: AWS Systems Manager User Guide, "What is AWS Systems Manager?".
Options
A:
Copy the required data to a common account. Create an IAM access role in that account. Grant
access by specifying a permission policy that includes users from the engineering team accounts as
trusted entities.
B:
Use the Lake Formation permissions Grant command in each account where the data is stored to
allow the required engineering team users to access the data.
C:
Use AWS Data Exchange to privately publish the required data to the required engineering team
accounts
D:
Use Lake Formation tag-based access control to authorize and grant cross-account permissions for
the required data to the engineering team accounts
Correct Answer:
Use Lake Formation tag-based access control to authorize and grant cross-account permissions for
the required data to the engineering team accounts
Explanation
AWS Lake Formation is designed to simplify building and managing data lakes, including secure, cross-account data sharing. The most efficient method to meet the requirements is using Lake Formation's tag-based access control (TBAC). This allows the data science team to assign tags (e.g., access-level:engineering) to specific databases, tables, or columns. They can then create a single grant policy that gives the engineering team's AWS account access to any resource with that specific tag. This approach is highly scalable, avoids data duplication, and significantly reduces the operational overhead of managing individual resource permissions, especially as the data lake grows.
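A minimal boto3 sketch of the TBAC grant, assuming a hypothetical access-level tag and a placeholder engineering account ID:

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Define the LF-tag once (key and values are examples).
lakeformation.create_lf_tag(TagKey="access-level", TagValues=["engineering"])

# Single cross-account grant: SELECT on any table carrying the tag.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "222233334444"},  # engineering team account (placeholder)
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "access-level", "TagValues": ["engineering"]}],
        }
    },
    Permissions=["SELECT", "DESCRIBE"],
)
```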
References
1. AWS Lake Formation Developer Guide - Lake Formation tag-based access control: This document states, "Lake Formation tag-based access control (TBAC) is an authorization strategy that defines permissions based on attributes, which are called tags... This helps when you have a large number of data catalog resources and principals to manage." It also details how to grant cross-account permissions using tags. (See section: "Granting permissions on Data Catalog resources").
2. AWS Lake Formation Developer Guide - Sharing data across AWS accounts: This guide explains the two main methods for cross-account sharing: the Named Resource method and the Tag-Based Access Control (TBAC) method. It highlights TBAC as a scalable approach. (See section: "Cross-account data sharing in Lake Formation").
3. AWS Big Data Blog - Simplify and scale your data governance with AWS Lake Formation tag-based access control: This article provides a detailed walkthrough and states, "TBAC is a scalable way to manage permissions in AWS Lake Formation... With TBAC, you can grant permissions on Lake Formation resources to principals in the same account or other accounts..." (See section: "Cross-account sharing with TBAC").
Options
A:
Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the
sensitive data
B:
Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new
key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C:
Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key
to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D:
Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed
keys. Upload the encrypted objects back into Amazon S3.
Correct Answer:
Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new
key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
Explanation
The requirement is for full control over the encryption key lifecycle (creation, rotation, disabling) with minimal effort. AWS Key Management Service (AWS KMS) with a customer managed key (CMK) is the only option that meets these criteria. Customer managed keys are created, owned, and managed directly by the customer, providing granular control over their policies, rotation schedules, and enabled/disabled status. Using server-side encryption with AWS KMS keys (SSE-KMS) integrates this control directly with Amazon S3, fulfilling the "minimal effort" requirement by offloading the encryption and decryption processes to AWS servers without needing a custom client-side solution.
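A minimal boto3 sketch, assuming a hypothetical bucket name: create the customer managed key, enable rotation, and set SSE-KMS with that key as the bucket default.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key: the company controls its policy, rotation, and status.
key = kms.create_key(Description="S3 encryption key for sensitive data")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)          # automatic annual rotation

# Make SSE-KMS with this key the bucket default (bucket name is a placeholder).
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,          # reduces KMS request costs
        }]
    },
)
```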
References
1. AWS Key Management Service Developer Guide: Under "AWS KMS concepts," the guide distinguishes between key types. For customer managed keys, it states, "You have full control over these KMS keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material..." In contrast, for AWS managed keys, it notes, "...you cannot change the properties of these KMS keys, rotate them, or change their key policies."
Source: AWS KMS Developer Guide, "AWS KMS concepts," Section: "KMS keys."
2. Amazon S3 User Guide: This guide details the different server-side encryption options. For SSE-KMS, it explains, "Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) is similar to SSE-S3, but with some additional benefits and charges for using this service. There are separate permissions for the use of a KMS key that provides an additional layer of control as well as an audit trail." This highlights the control and minimal effort of the integrated service.
Source: Amazon S3 User Guide, "Protecting data using server-side encryption," Section: "Using server-side encryption with AWS KMS keys (SSE-KMS)."
3. Amazon S3 User Guide: When describing SSE-S3, the documentation clarifies the lack of customer control: "When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a root key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data." The management and rotation are handled entirely by AWS.
Source: Amazon S3 User Guide, "Protecting data using server-side encryption," Section: "Using server-side encryption with Amazon S3-managed keys (SSE-S3)."
Options
A:
Use the Instance Scheduler on AWS to configure start and stop schedules.
B:
Turn off automatic backups. Create weekly manual snapshots of the database.
C:
Create a custom AWS Lambda function to start and stop the database based on minimum CPU
utilization.
D:
Purchase All Upfront reserved DB instances
Correct Answer:
Use the Instance Scheduler on AWS to configure start and stop schedules.
Explanation
The most effective way to optimize costs for a resource with a predictable usage schedule, such as an Amazon RDS instance used only during business hours, is to stop it when it is not needed. When an RDS DB instance is stopped, you are not billed for instance hours, only for provisioned storage. The Instance Scheduler on AWS is an AWS-provided solution that automates the starting and stopping of Amazon EC2 and RDS instances on a defined schedule. This directly addresses the requirements to reduce costs based on the usage pattern and minimizes operational overhead by using a pre-built, managed solution.
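The solution is typically driven by a resource tag; as a rough sketch (assuming the solution's default Schedule tag key and a schedule named office-hours already defined in its configuration), the development database would simply be tagged:

```python
import boto3

rds = boto3.client("rds")

# The Instance Scheduler on AWS acts on a tag (default key "Schedule") whose
# value names a schedule in the solution's configuration. The ARN and schedule
# name here are placeholders.
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:111122223333:db:dev-database",
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)
```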
References
1. Stopping an Amazon RDS DB instance temporarily: "While your DB instance is stopped, you are charged for provisioned storage... but not for DB instance hours."
Source: AWS Documentation, Amazon RDS User Guide, "Stopping an Amazon RDS DB instance temporarily", Section: "Billing for a stopped DB instance".
2. Instance Scheduler on AWS: "The Instance Scheduler on AWS is a solution that automates the starting and stopping of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances."
Source: AWS Solutions Library, Instance Scheduler on AWS, "Solution overview" section.
3. Amazon RDS Reserved Instances: "Amazon RDS Reserved Instances (RIs) give you the option to reserve a DB instance for a one- or three-year term and in turn receive a significant discount compared to the On-Demand Instance pricing for the DB instance." (This is ideal for steady-state usage).
Source: AWS Documentation, Amazon RDS User Guide, "Working with reserved DB instances", "Overview of reserved DB instances" section.
Options
A:
Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic
Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.
B:
Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway
REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to
respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C:
Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that
has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend
resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D:
Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is
configured to host the static content. Configure an Application Load Balancer that targets an Amazon
Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application.
Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple
Availability Zones
Correct Answer:
Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is
configured to host the static content. Configure an Application Load Balancer that targets an Amazon
Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application.
Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple
Availability Zones
Explanation
This solution correctly decouples the application into three tiers (static content, application logic, session state) and uses the most appropriate AWS managed services for high availability. Amazon S3 with an Amazon CloudFront distribution is the best practice for hosting and accelerating static content globally. An Application Load Balancer distributing traffic to an Amazon ECS service using AWS Fargate tasks provides a scalable and highly available serverless compute layer for the PHP application across multiple Availability Zones. Finally, an Amazon ElastiCache for Redis cluster with Multi-AZ enabled provides a resilient, managed, and externalized session store, which is critical for a stateful, horizontally-scaled application.
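As a small illustration of the session-store piece, a boto3 sketch of a Multi-AZ Redis replication group with automatic failover (the IDs, node type, and subnet group name are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

# Multi-AZ Redis replication group for externalized PHP session state.
elasticache.create_replication_group(
    ReplicationGroupId="php-sessions",
    ReplicationGroupDescription="Session store for the PHP application",
    Engine="redis",
    CacheNodeType="cache.t3.small",
    NumCacheClusters=2,                  # primary plus one replica in another AZ
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    CacheSubnetGroupName="app-private-subnets",
)
```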
References
1. Static Content Hosting: AWS Documentation, Amazon S3 User Guide, "Hosting a static website using Amazon S3". This guide details the standard practice of using S3 for static assets. The addition of CloudFront is a best practice for performance and availability, as described in the Amazon CloudFront Developer Guide, "Using CloudFront with an Amazon S3 origin".
2. Application Hosting: AWS Documentation, Amazon ECS Developer Guide, "What is Amazon Elastic Container Service?". This section explains how ECS with Fargate allows you to run containers without managing servers, and the Elastic Load Balancing User Guide details how an Application Load Balancer distributes traffic across targets (like Fargate tasks) in multiple Availability Zones for high availability.
3. Session Management: AWS Documentation, Amazon ElastiCache for Redis User Guide, "Minimizing downtime in ElastiCache with Multi-AZ". This document explains how enabling Multi-AZ provides enhanced high availability and automatic failover for the Redis cluster, making it suitable for critical session data.
4. Architectural Best Practices: AWS Whitepaper, AWS Well-Architected Framework, "Reliability Pillar". This whitepaper emphasizes designing for high availability by removing single points of failure and using managed services that offer built-in reliability, which aligns with the architecture proposed in option D.
Options
A:
Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue
Service (Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the
queue and to process the images
B:
Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue
Service (Amazon SQS) queue. Configure an EC2 Reserved Instance to read the messages from the
queue and to process the images.
C:
Use S3 Event Notifications to publish a message with image details to an Amazon Simple
Notification Service (Amazon SNS) topic. Configure a container instance in Amazon Elastic Container
Service (Amazon ECS) to subscribe to the topic and to process the images.
D:
Use S3 Event Notifications to publish a message with image details to an Amazon Simple
Notification Service (Amazon SNS) topic. to subscribe to the topic and to process the images.
Correct Answer:
Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue
Service (Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the
queue and to process the images
Explanation
This solution is the most cost-effective because it employs a fully serverless, event-driven architecture. AWS Lambda's pricing model is based on the number of requests and the duration of execution, measured in milliseconds. Since the application only incurs costs when an image is actually being processed, there is no charge for idle time. This "pay-for-what-you-use" model is ideal for sporadic workloads like image uploads. Using Amazon SQS to queue the events from S3 provides a durable and reliable buffer, ensuring that events are not lost if the processing function fails, and allows for controlled, asynchronous processing by the Lambda function. The memory (512 MB) and time (2 minutes) requirements are well within Lambda's default limits.
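A minimal sketch of the wiring, assuming a hypothetical bucket, queue, and processing function: S3 sends ObjectCreated events to the SQS queue, and the Lambda handler unwraps the S3 notification from each SQS record.

```python
import json
import boto3

s3 = boto3.client("s3")

# Route object-created events to an SQS queue (bucket and queue ARN are placeholders).
s3.put_bucket_notification_configuration(
    Bucket="image-uploads-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:image-processing-queue",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

def handler(event, context):
    """Lambda handler invoked by the SQS event source mapping."""
    for record in event["Records"]:
        s3_event = json.loads(record["body"])          # SQS body wraps the S3 notification
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            process_image(bucket, key)                 # hypothetical image-processing function
```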
References
1. AWS Lambda Developer Guide, "Using AWS Lambda with Amazon S3": This guide details the event-driven pattern. It states, "Amazon S3 can publish events... when an object is created... You can have S3 invoke a Lambda function when an event occurs." It also recommends a more robust architecture: "To ensure that events are processed successfully, you can configure S3 to send events to an Amazon Simple Queue Service (Amazon SQS) queue."
2. AWS Well-Architected Framework, Cost Optimization Pillar (July 2023): This whitepaper outlines the principle of "Adopt a consumption model" (p. 7). It advises, "Pay only for the computing resources that you require... For example, AWS Lambda is an event-driven compute service that you can use to run code for virtually any type of application or backend service, with zero administration and paying only for what you use." This directly supports choosing Lambda over a provisioned EC2 instance.
3. Amazon S3 User Guide, "Configuring Amazon S3 Event Notifications": This document confirms that S3 can be configured to send event notifications to destinations like an Amazon SQS queue, an Amazon SNS topic, or an AWS Lambda function. This validates the trigger mechanism proposed in the correct answer.
4. AWS Lambda Pricing: The official pricing page confirms the pay-per-use model. It states, "With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration... it takes for your code to execute." This is the core reason for its cost-effectiveness in this scenario.
Options
A:
Create a gateway endpoint for Amazon S3 in the VPC. In the route tables for the private subnets,
add an entry for the gateway endpoint
B:
Create a single NAT gateway in a public subnet. In the route tables for the private subnets, add a
default route that points to the NAT gateway
C:
Create an AWS PrivateLink interface endpoint for Amazon S3 in the VPC. In the route tables for the
private subnets, add an entry for the interface endpoint.
D:
Create one NAT gateway for each Availability Zone in public subnets. In each of the route tables for
the private subnets, add a default route that points to the NAT gateway in the same Availability Zone.
Correct Answer:
Create a gateway endpoint for Amazon S3 in the VPC. In the route tables for the private subnets,
add an entry for the gateway endpoint
Explanation
The most cost-effective and secure method for EC2 instances in private subnets to access Amazon S3 is by using a VPC gateway endpoint. A gateway endpoint for S3 creates a private connection between the VPC and S3, ensuring that traffic does not traverse the public internet. This enhances security for the confidential data. Critically, AWS does not charge for data transfer between EC2 and S3 in the same region, and there are no additional hourly or data processing charges for using a gateway endpoint. This directly addresses the requirement to minimize data transfer costs, making it the optimal solution.
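A minimal boto3 sketch (the VPC, Region, and route table IDs are placeholders): the endpoint is created with type Gateway and associated with the private subnets' route tables.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3; an S3 route is added to the listed route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222"],   # route tables of the private subnets
)
```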
References
1. AWS Documentation, VPC User Guide, "Gateway endpoints for Amazon S3": This document states, "A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to Amazon S3... There is no additional charge for using gateway endpoints." This confirms that option A is the most cost-effective solution.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html (See the introductory paragraphs and the "Pricing for gateway endpoints" section).
2. AWS Documentation, "Amazon VPC pricing": This official pricing page details the costs associated with different VPC components. It shows that NAT Gateways and AWS PrivateLink (Interface Endpoints) have both "per hour" and "per GB" data processing charges, while Gateway Endpoints do not have these charges.
Source: https://aws.amazon.com/vpc/pricing/ (See sections "NAT Gateway" and "AWS PrivateLink").
3. AWS Documentation, VPC User Guide, "Compare NAT gateways and NAT instances": This guide explains that a NAT gateway is used to "enable instances in a private subnet to connect to the internet or other AWS services". This implies traffic leaves the VPC, incurring data processing costs, unlike a gateway endpoint which keeps traffic on the AWS private network.
Source: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html (See the "NAT gateway" description).
Options
A:
Configure an Application Load Balancer to distribute traffic properly to the instances.
B:
Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on
memory utilization
C:
Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on
CPU utilization.
D:
Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before
peak hours.
Correct Answer:
Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before
peak hours.
Explanation
The core issue is slow application performance at the start of predictable, daily peak hours. This indicates that the current scaling mechanism is reactive and cannot provision new instances fast enough to meet the sudden demand. Scheduled scaling is the appropriate solution for predictable traffic patterns. It allows you to proactively increase the number of instances in the Auto Scaling group at a specific time, before the anticipated load increase. This ensures that sufficient capacity is available precisely when peak hours begin, preventing the initial performance degradation.
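A minimal boto3 sketch of a pair of scheduled actions (the group name, sizes, and recurrence are illustrative assumptions):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the daily peak (cron in UTC: 07:45 on weekdays).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-out-before-peak",
    Recurrence="45 7 * * 1-5",
    MinSize=4,
    MaxSize=8,
    DesiredCapacity=6,
)

# Scale back in after the peak window ends.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-in-after-peak",
    Recurrence="0 18 * * 1-5",
    MinSize=2,
    MaxSize=8,
    DesiredCapacity=2,
)
```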
References
1. AWS Documentation on Scheduled Scaling: "Scheduled scaling allows you to set your own scaling schedule for predictable load changes. For example, let's say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application."
Source: AWS Documentation, Scheduled scaling for Amazon EC2 Auto Scaling.
2. AWS Documentation on Dynamic Scaling: "With dynamic scaling, you define how to scale the capacity of your Auto Scaling group in response to changing demand. For example, you have a web application that currently runs on two instances and you want the CPU utilization of the Auto Scaling group to stay at around 50 percent when the load on the application changes." This highlights its reactive nature based on metrics.
Source: AWS Documentation, Dynamic scaling for Amazon EC2 Auto Scaling.
3. AWS Documentation on Application Load Balancer: "An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply, and then selects a target from the target group for the rule action." This confirms its role is traffic routing, not capacity scaling.
Source: AWS Documentation, What is an Application Load Balancer?.
Options
A:
Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure an AWS Lambda
function with an event source mapping for the FIFO queue to process the data.
B:
Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Use an AWS Batch job to
remove duplicate data from the queue. Configure an AWS Lambda function to process the data.
C:
Use Amazon Kinesis Data Streams to send the incoming transaction data to an AWS Batch job that
removes duplicate data. Launch an Amazon EC2 instance that runs a custom script to process the
data.
D:
Set up an AWS Step Functions state machine to send incoming transaction data to an AWS Lambda
function to remove duplicate data. Launch an Amazon EC2 instance that runs a custom script to
process the data.
Correct Answer:
Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure an AWS Lambda
function with an event source mapping for the FIFO queue to process the data.
Explanation
This solution meets all the specified requirements. Amazon SQS FIFO (First-In, First-Out) queues are designed specifically to prevent duplicate messages from being sent by a producer or processed by a consumer. This is achieved through content-based deduplication, directly addressing the "prevent data duplication" requirement. AWS Lambda is a serverless compute service, which fulfills the "does not want to manage infrastructure" requirement. By configuring an event source mapping, the Lambda function is automatically invoked to process messages as they arrive in the SQS FIFO queue, satisfying the need for real-time collection and subsequent processing in a fully managed, event-driven architecture.
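A minimal boto3 sketch, assuming hypothetical queue and function names: the .fifo queue with content-based deduplication, and the event source mapping that lets Lambda poll it.

```python
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# FIFO queue with content-based deduplication (names and ARNs are placeholders).
sqs.create_queue(
    QueueName="transactions.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",   # drops duplicates within the 5-minute window
    },
)

# Event source mapping: Lambda polls the queue and processes batches of messages.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:transactions.fifo",
    FunctionName="process-transactions",
    BatchSize=10,
)
```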
References
1. Amazon SQS Developer Guide: "With FIFO queues, you don't have to worry about receiving duplicate messages. FIFO queues prevent duplicates from being sent by a producer or processed by a consumer... The queue makes a best effort to preserve the order of messages, and it delivers each message exactly once."
Source: AWS Documentation, Amazon SQS Developer Guide, Section: "Exactly-once processing".
2. AWS Lambda Developer Guide: "You can use an AWS Lambda function to process messages in an Amazon Simple Queue Service (Amazon SQS) queue... Lambda polls the queue and invokes your Lambda function synchronously with an event that contains queue messages. Lambda reads messages in batches and invokes your function once for each batch."
Source: AWS Documentation, AWS Lambda Developer Guide, Section: "Using AWS Lambda with Amazon SQS".
3. Amazon SQS Developer Guide: "Amazon SQS provides FIFO (First-In-First-Out) queues and standard queues. FIFO queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can't be tolerated."
Source: AWS Documentation, Amazon SQS Developer Guide, Section: "Amazon SQS message queues".
Options
A:
Configure an AWS WAF web ACL for the Global Accelerator accelerator to block traffic by using
rate-based rules.
B:
Configure an AWS Lambda function to read the ALB metrics to block attacks by updating a VPC
network ACL.
C:
Configure an AWS WAF web ACL on the ALB to block traffic by using rate-based rules.
D:
Configure an Amazon CloudFront distribution in front of the Global Accelerator accelerator.
Correct Answer:
Configure an AWS WAF web ACL for the Global Accelerator accelerator to block traffic by using
rate-based rules.
Explanation
The most effective and straightforward solution is to leverage the native integration between AWS Global Accelerator and AWS WAF. Global Accelerator is the entry point for all traffic from the internet in the described architecture. By associating an AWS WAF web ACL directly with the accelerator, you can inspect and filter traffic at the AWS network edge, before it reaches your application resources. A rate-based rule is a specific WAF feature designed to mitigate application-layer DDoS attacks by automatically blocking source IPs that exceed a defined request threshold, meeting the requirement with the least implementation effort.
References
1. AWS Documentation: AWS WAF with Global Accelerator: "You can use AWS WAF to protect your applications by configuring an AWS WAF web ACL and associating it with your accelerator. A web ACL contains rules that inspect web requests and that specify what to do when a request matches the criteria in a rule: block the request, allow it, or count it." (AWS Global Accelerator Developer Guide, "AWS WAF with Global Accelerator" section).
2. AWS Documentation: Rate-based rule statement: "A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action on IPs with rates that go over a limit. You can use this to put a temporary block on requests from an IP address that's sending a flood of requests." (AWS WAF Developer Guide, "Rule statements list", "Rate-based rule statement" section).
3. AWS Documentation: How AWS Shield works: "For protection against application layer attacks, you can use AWS WAF to define rules that provide you with fine-grained control over your web traffic... For higher levels of protection against DDoS attacks, AWS offers AWS Shield Advanced." (AWS Shield Developer Guide, "How AWS Shield works" section). This reference clarifies that WAF is the appropriate tool for application-layer (Layer 7) DDoS mitigation, which is what a website would face.
Options
A:
Deploy a NAT gateway to access the S3 buckets.
B:
Deploy AWS Storage Gateway to access the S3 buckets.
C:
Deploy an S3 interface endpoint to access the S3 buckets.
D:
Deploy an S3 gateway endpoint to access the S3 buckets.
Correct Answer:
Deploy an S3 gateway endpoint to access the S3 buckets.
Explanation
The most cost-effective way to meet the requirement is by using a VPC gateway endpoint for Amazon S3. A gateway endpoint creates a private connection between your VPC and S3, ensuring that traffic does not traverse the public internet. It functions by adding an entry to the route table of the private subnet, directing S3-bound traffic to the endpoint over the AWS private network. Critically, AWS does not charge for data transfer or hourly usage for gateway endpoints, making it the most economical solution that satisfies the security constraint.
References
1. Amazon VPC User Guide, "Gateway endpoints": This document states, "A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service... We do not charge for gateway endpoints." It explicitly lists Amazon S3 as a supported service.
Source: AWS Documentation, Amazon Virtual Private Cloud User Guide, Section: "VPC endpoints", Subsection: "Gateway endpoints".
2. Amazon VPC Pricing: The official pricing page confirms the cost difference. Under the "VPC Endpoints" section, it states, "There are no additional charges for using gateway endpoints." In contrast, it lists both hourly and data processing charges for "Interface Endpoints".
Source: AWS Documentation, Amazon VPC Pricing, Section: "AWS PrivateLink" (which covers Interface Endpoints) and the note on Gateway Endpoints.
3. Amazon VPC User Guide, "Compare endpoint types": This section provides a direct comparison, highlighting that gateway endpoints are used by specifying the service in a route table, while interface endpoints use an Elastic Network Interface (ENI). This architectural difference underpins the pricing model.
Source: AWS Documentation, Amazon Virtual Private Cloud User Guide, Section: "VPC endpoints", Subsection: "Compare endpoint types".
Options
A:
Update the IAM policies to deny the launch of large EC2 instances. Apply the policies to all users.
B:
Define a resource in AWS Resource Access Manager that prevents the launch of large EC2
instances.
C:
Create an IAM role in each account that denies the launch of large EC2 instances. Grant the
developers IAM group access to the role.
D:
Create an organization in AWS Organizations in the management account with the default policy.
Create a service control policy (SCP) that denies the launch of large EC2 instances, and apply it to the
AWS accounts.
Correct Answer:
Create an organization in AWS Organizations in the management account with the default policy.
Create a service control policy (SCP) that denies the launch of large EC2 instances, and apply it to the
AWS accounts.
Explanation
AWS Organizations allows for the central governance and management of multiple AWS accounts. By creating a Service Control Policy (SCP) in the management account, a solutions architect can define permission guardrails that apply to all accounts within the organization or specific Organizational Units (OUs). An SCP can be configured to explicitly deny the ec2:RunInstances action if the requested instance type is on a list of prohibited large types. This single policy, applied from the top down, enforces the rule across all desired accounts with minimal administrative effort, directly meeting the requirement for the least operational overhead.
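A minimal sketch of such an SCP and its attachment, loosely based on the documented example policy (the approved instance types and target OU ID are assumptions):

```python
import json
import boto3

organizations = boto3.client("organizations")

# SCP that denies launching instance types outside an approved list.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyLargeInstances",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}},
    }],
}

policy = organizations.create_policy(
    Content=json.dumps(scp),
    Description="Block launch of large EC2 instance types",
    Name="deny-large-ec2-instances",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach once to an OU (or account); the guardrail then applies to everything beneath it.
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-devaccounts",   # placeholder OU or account ID
)
```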
References
1. AWS Organizations User Guide, "Service control policies (SCPs)": "SCPs are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization." This establishes SCPs as the tool for centralized permission management.
2. AWS Organizations User Guide, "Example service control policies": The section "Prevent Amazon EC2 instances from being launched with unapproved instance types" provides a direct example of an SCP that uses a Deny statement for the ec2:RunInstances action with a condition based on the ec2:InstanceType key. This confirms the exact mechanism described in the correct answer.
3. AWS Identity and Access Management User Guide, "Actions, resources, and condition keys for Amazon EC2": This document lists ec2:InstanceType as a valid condition key that can be used in IAM policies (and by extension, SCPs) to control the ec2:RunInstances action, confirming the technical viability of the policy logic.
4. AWS Resource Access Manager User Guide, "What is AWS Resource Access Manager?": The documentation states, "AWS Resource Access Manager (AWS RAM) helps you securely share your resources across AWS accounts...". This confirms that RAM's purpose is resource sharing, not policy enforcement.
Options
A:
Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-
Attach.
B:
Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach
C:
Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-
Attach.
D:
Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-
Attach.
Correct Answer:
Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-
Attach.
Explanation
The scenario requires multiple EC2 instances to simultaneously write to a shared block storage volume. This capability is provided by Amazon EBS Multi-Attach. According to official AWS documentation, the Multi-Attach feature is exclusively available for Provisioned IOPS SSD volumes, specifically io1 and io2 types. The feature allows a single volume to be attached to up to 16 Nitro-based instances within the same Availability Zone, which directly aligns with the requirements stated in the question. Therefore, using Provisioned IOPS SSD (io2) volumes with EBS Multi-Attach is the only solution that meets these specific needs for high availability and concurrent write access.
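A minimal boto3 sketch (size, IOPS, AZ, and instance IDs are placeholders): create the io2 volume with Multi-Attach enabled, then attach it to each Nitro-based instance. Note that concurrent writers still need a cluster-aware file system.

```python
import boto3

ec2 = boto3.client("ec2")

# io2 volume with Multi-Attach enabled.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,
    VolumeType="io2",
    Iops=16000,
    MultiAttachEnabled=True,
)

# Attach the same volume to multiple Nitro-based instances in that AZ.
for instance_id in ["i-0aaa1111", "i-0bbb2222"]:
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device="/dev/sdf",
    )
```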
References
1. AWS Documentation. (2023). Amazon EBS User Guide. "Attach a volume to multiple instances with Amazon EBS Multi-Attach". In the "Considerations and limitations" section, it explicitly states, "You can enable Multi-Attach for Provisioned IOPS SSD (io1 and io2) volumes." It also notes the requirement for "Nitro-based instances in the same Availability Zone."
2. AWS Documentation. (2023). Amazon EBS User Guide. "Amazon EBS volume types". The table detailing volume type features confirms that only io1 and io2/io2 Block Express are listed with "Multi-Attach" as a supported feature. The gp2, gp3, and st1 volume types are not listed as supporting Multi-Attach.
Options
A:
Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the
existing S3 bucket to the second S3 bucket.
B:
Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-
east-1 in the CORS rule's AllowedOrigin element.
C:
Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle
rule to save photos into the second S3 bucket.
D:
Create a second S3 bucket in us-east-1. Configure S3 event notifications on object creation and
update events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the
second S3 bucket.
Correct Answer:
Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the
existing S3 bucket to the second S3 bucket.
Explanation
Amazon S3 Cross-Region Replication (CRR) is a fully managed feature designed to automatically and asynchronously copy objects from a source S3 bucket in one AWS Region to a destination bucket in a different Region. This solution directly addresses the requirement to create a copy of all new photos in a different region. Since CRR is a native, configurable feature of S3, it requires only initial setup and no ongoing code maintenance or infrastructure management, thereby representing the solution with the least operational effort.
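A minimal boto3 sketch of the replication setup, assuming hypothetical bucket names and an existing replication IAM role; versioning must be enabled on both buckets before the rule is applied.

```python
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite for replication on both source and destination.
for bucket in ["photos-eu-west-1", "photos-us-east-1"]:
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replication rule copying new objects to the us-east-1 bucket.
s3.put_bucket_replication(
    Bucket="photos-eu-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [{
            "ID": "replicate-all-photos",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
        }],
    },
)
```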
References
1. Amazon S3 User Guide, Replicating objects: "Amazon S3 Replication is an elastic, fully managed, low-cost feature that replicates objects between buckets... Cross-Region Replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions." This document establishes CRR as the primary, managed solution for this use case.
2. Amazon S3 User Guide, Using AWS Lambda with Amazon S3: This guide describes how to build event-driven architectures. While it shows the possibility of using Lambda for object manipulation, it is a more hands-on approach compared to the managed replication service. The existence of a dedicated feature (CRR) makes the Lambda option higher in operational effort.
3. Amazon S3 User Guide, Managing your storage lifecycle: "A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: Transition actions... [and] Expiration actions..." This confirms Lifecycle rules are for state changes and deletion, not replication.
4. Amazon S3 User Guide, Cross-origin resource sharing (CORS): "Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain." This source clarifies that CORS is for client-side web access control, not server-side data replication.
Options
A:
Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration
action that directs Amazon S3 to delete objects after 90 days.
B:
Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after
creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an
expiration action that directs Amazon S3 to delete objects after 90 days.
C:
Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an
expiration action that directs Amazon S3 to delete objects after 90 days.
D:
Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days
after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write
an expiration action that directs Amazon S3 to delete objects after 90 days.
Correct Answer:
Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an
expiration action that directs Amazon S3 to delete objects after 90 days.
Explanation
The requirements are to store logs for 30 days for frequent analysis, retain them for another 60 days for backup, and then delete them. This maps directly to an S3 Lifecycle policy.
1. Days 0-30: Objects are uploaded to S3 Standard by default, which is highly available and designed for frequent access, meeting the initial requirement.
2. Days 31-90: The logs are needed for backup. Transitioning to S3 Glacier Flexible Retrieval after 30 days is the most cost-effective storage option for data that is infrequently accessed and can tolerate retrieval times of minutes to hours, which is typical for backups.
3. Day 90: An expiration action deletes the objects after a total of 90 days.
This lifecycle configuration (Standard for 30 days -> Glacier Flexible Retrieval for 60 days -> Delete) meets all stated requirements at the lowest possible storage cost.
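A minimal boto3 sketch of this lifecycle rule (the bucket name and prefix are placeholders; GLACIER is the API value for S3 Glacier Flexible Retrieval):

```python
import boto3

s3 = boto3.client("s3")

# Standard for 30 days, Glacier Flexible Retrieval until day 90, then delete.
s3.put_bucket_lifecycle_configuration(
    Bucket="application-logs-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 90},
        }]
    },
)
```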
References
1. Amazon S3 User Guide, Storage classes: This document outlines the use cases for different S3 storage classes. S3 Standard is for "frequently accessed data," while S3 Glacier Flexible Retrieval is for "archive data that is accessed 1–2 times per year and is retrieved asynchronously," which aligns with the "backup purposes" requirement. (See: "Amazon S3 storage classes" section).
2. Amazon S3 User Guide, Managing your storage lifecycle: This guide explains how to create lifecycle policies. It states, "You define the rules for Amazon S3 to apply to a group of objects... The actions that you can define are transition actions and expiration actions." Option C correctly uses a transition action at 30 days and an expiration action at 90 days. (See: "Lifecycle configuration elements" section).
3. Amazon S3 User Guide, Lifecycle transition general considerations: This document details the transition paths. It confirms that transitioning from S3 Standard to S3 Glacier Flexible Retrieval is a valid and supported lifecycle action. (See: "Supported transitions and related constraints" table).
Options
A:
Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private
subnets. Add to the endpoint a security group that has an inbound access rule that allows traffic from
the EC2 instances that are in the private subnets.
B:
Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public
subnets. Attach to the interface endpoint a VPC endpoint policy that allows access from the EC2
instances that are in the private subnets.
C:
Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public
subnets. Attach an Amazon SQS access policy to the interface VPC endpoint that allows requests from
only a specified VPC endpoint.
D:
Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets. Attach
an IAM role to the EC2 instances that allows access to the SQS queue.
Correct Answer:
Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private
subnets. Add to the endpoint a security group that has an inbound access rule that allows traffic from
the EC2 instances that are in the private subnets.
Explanation
The most secure method for EC2 instances in a private subnet to communicate with Amazon SQS is by using an Interface VPC Endpoint (powered by AWS PrivateLink). This creates an Elastic Network Interface (ENI) with a private IP address directly within the specified private subnets. This ENI acts as a private entry point to the SQS service, ensuring that traffic never leaves the Amazon network. Attaching a security group to this endpoint ENI and allowing inbound traffic only from the EC2 instances' security group provides a robust, network-level security control, fulfilling the requirement for a secure connection.
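A minimal boto3 sketch (the VPC, subnet, and security group IDs are placeholders): the interface endpoint is placed in the private subnets, and its security group should allow inbound HTTPS (443) from the instances' security group.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for SQS with ENIs in the private subnets.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0priv1111", "subnet-0priv2222"],
    SecurityGroupIds=["sg-0endpoint123"],       # allows inbound 443 from the EC2 instances' security group
    PrivateDnsEnabled=True,                     # SQS SDK calls resolve to the endpoint automatically
)
```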
References
1. Amazon SQS Developer Guide, "Amazon SQS and interface VPC endpoints": This document explicitly states, "To allow your Amazon EC2 instances in your VPC to access Amazon SQS, you can create an interface VPC endpoint... With an interface endpoint, communication between your VPC and Amazon SQS is conducted entirely and securely within the AWS network." It also mentions associating security groups with the endpoint.
Source: AWS Documentation, docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-vpc-endpoints.html
2. AWS PrivateLink Guide, "Interface VPC endpoints": This guide details the functionality of interface endpoints, explaining that they create ENIs in the specified subnets. It clarifies, "For each subnet that you specify... we create an endpoint network interface... You can associate security groups with an endpoint network interface."
Source: AWS Documentation, docs.aws.amazon.com/vpc/latest/privatelink/interface-endpoints.html (See sections "Endpoint network interfaces" and "Security groups").
3. AWS PrivateLink Guide, "Control access to services using VPC endpoints": This document confirms the use of security groups for interface endpoints: "When you create an interface endpoint, you can associate security groups with the endpoint network interface. This security group controls the traffic to the endpoint from resources in your VPC."
Source: AWS Documentation, docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html (See section "Security groups").
Options
A:
Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection
between the company and the VPC.
B:
Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-premises network and the VPC.
C:
Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN connection between the company and the VPC.
D:
Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to fetch S3 data and feed the application instances.
Show Answer
Correct Answer:
Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-premises network and the VPC.
Explanation
This solution correctly addresses the two primary requirements. A gateway VPC endpoint for Amazon S3 ensures that traffic from the EC2 instances to the S3 buckets is routed through the AWS private network, not the public internet. This satisfies the compliance mandate. An AWS Direct Connect connection establishes a dedicated, private network link between the on-premises data center and the VPC, providing secure and reliable access for the on-premises servers to consume the application output. This combination creates a fully private and compliant architecture.
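As a minimal sketch of the in-VPC piece, the boto3 call below creates a gateway endpoint for S3 and associates it with the route table used by the application subnets; the VPC and route table IDs are placeholders. The Direct Connect side is provisioned separately and is not shown.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints are attached to route tables, not subnets (placeholder IDs).
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)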
References
1. AWS Documentation - VPC User Guide, "Gateway endpoints for Amazon S3": This document states, "A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service... Traffic between your VPC and the service does not leave the Amazon network." This supports the use of a gateway endpoint for private S3 access.
2. AWS Documentation - Direct Connect User Guide, "What is AWS Direct Connect?": This guide explains, "AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS." This confirms its role in creating a private link from the on-premises data center.
3. AWS Documentation - VPC User Guide, "What is a VPC endpoint?": In the section comparing endpoint types, it clarifies that Gateway Endpoints support Amazon S3 and DynamoDB, while Interface Endpoints use an elastic network interface for private access to other AWS services. This distinguishes the correct endpoint type for this scenario.
4. AWS Documentation - VPC User Guide, "NAT gateways": This source states, "You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet..." This confirms that using a NAT gateway would violate the requirement to avoid the public internet.
Options
A:
Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the secondary Region.
B:
Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the
secondary Region. Create a new FSx for ONTAP instance from the backup.
C:
Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate
data from the primary Region to the secondary Region.
D:
Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the
volume. Replicate the volume to the secondary Region.
Show Answer
Correct Answer:
Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate
data from the primary Region to the secondary Region.
Explanation
The most efficient solution with the least operational overhead is to use the native replication feature of the underlying ONTAP software, which is NetApp SnapMirror. This feature is specifically designed for disaster recovery (DR) scenarios, allowing for efficient, block-level replication of data between FSx for ONTAP file systems, including across different AWS Regions. Once the SnapMirror relationship is established, replication is automated, minimizing ongoing management. The destination volume in the secondary region is kept up-to-date and can be quickly made available for read/write access, preserving all file system protocols (CIFS/NFS) and metadata.
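For illustration, a hedged boto3 sketch of creating the destination FSx for ONTAP file system in the secondary Region is shown below; the Region, capacity, throughput, and subnet IDs are assumptions. The SnapMirror peering and relationship itself is configured through ONTAP's own management interfaces (CLI or REST), not the AWS API, and is not shown here.

import boto3

fsx = boto3.client("fsx", region_name="us-west-2")  # assumed secondary Region

# Minimal sketch with placeholder values.
fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-0aaa111", "subnet-0bbb222"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 128,        # MBps
        "PreferredSubnetId": "subnet-0aaa111",
    },
)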
References
1. Amazon FSx for NetApp ONTAP User Guide: "You can use NetApp SnapMirror to replicate data between two Amazon FSx for NetApp ONTAP file systems. SnapMirror is a feature of the ONTAP software that you can use to replicate data at the volume level. A common use case for SnapMirror is for disaster recovery, by replicating data from your primary file system in one AWS Region to a secondary file system in another AWS Region." (AWS Documentation, Working with NetApp SnapMirror, Section: Replicating data with SnapMirror).
2. Amazon FSx for NetApp ONTAP User Guide: "For disaster recovery, you can use NetApp SnapMirror to replicate your data to another FSx for ONTAP file system in any AWS Region." (AWS Documentation, Disaster recovery, Section: Disaster recovery options for FSx for ONTAP).
3. AWS Backup Developer Guide: AWS Backup supports FSx for ONTAP, allowing you to "copy backups to other AWS Regions for disaster recovery." However, this is a backup-and-restore method, distinct from the native replication provided by SnapMirror. (AWS Documentation, Working with Amazon FSx, Section: Amazon FSx backup).
Options
A:
Use Amazon GuardDuty to perform threat detection. Configure Amazon EventBridge to filter for GuardDuty findings and to invoke an AWS Lambda function to adjust the AWS WAF rules.
B:
Use AWS Firewall Manager to perform threat detection. Configure Amazon EventBridge to filter
for Firewall Manager findings and to invoke an AWS Lambda function to adjust the AWS WAF web
ACL
C:
Use Amazon Inspector to perform threat detection and to update the AWS WAF rules. Create a VPC network ACL to limit access to the web application.
D:
Use Amazon Macie to perform threat detection and to update the AWS WAF rules. Create a VPC
network ACL to limit access to the web application.
Show Answer
Correct Answer:
Use Amazon GuardDuty to perform threat detection. Configure Amazon EventBridge to filter for GuardDuty findings and to invoke an AWS Lambda function to adjust the AWS WAF rules.
Explanation
The requirement is to automatically detect and respond to suspicious behavior. Amazon GuardDuty is the appropriate AWS service for intelligent threat detection, as it continuously monitors AWS accounts and workloads by analyzing data sources like VPC Flow Logs and AWS CloudTrail logs. When GuardDuty identifies a potential threat, it generates a finding. These findings can be sent to Amazon EventBridge, which can then filter for specific findings (e.g., a malicious IP address scanning ports). An EventBridge rule can trigger an AWS Lambda function, which contains the logic to automatically update an AWS WAF IP set, effectively blocking the malicious IP address. This creates a complete, automated detection and response workflow.
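A sketch of the Lambda remediation step is shown below: it adds the offending IP address from a GuardDuty finding to a WAFv2 IP set that a block rule references. The IP set name and ID are placeholders, and the finding field path is an assumption that depends on which GuardDuty finding types the EventBridge rule filters on.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope must use us-east-1

IP_SET_NAME = "blocked-ips"  # hypothetical IP set referenced by a WAF block rule
IP_SET_ID = "11111111-2222-3333-4444-555555555555"

def handler(event, context):
    # Field path is an assumption; adjust to the finding types your rule matches.
    attacker_ip = event["detail"]["service"]["action"]["networkConnectionAction"]["remoteIpDetails"]["ipAddressV4"]

    current = wafv2.get_ip_set(Name=IP_SET_NAME, Scope="CLOUDFRONT", Id=IP_SET_ID)
    addresses = set(current["IPSet"]["Addresses"])
    addresses.add(f"{attacker_ip}/32")

    # update_ip_set replaces the whole address list and requires the lock token for optimistic locking.
    wafv2.update_ip_set(
        Name=IP_SET_NAME,
        Scope="CLOUDFRONT",
        Id=IP_SET_ID,
        Addresses=sorted(addresses),
        LockToken=current["LockToken"],
    )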
References
1. Amazon GuardDuty Documentation: "Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation."
Source: AWS Documentation, What Is Amazon GuardDuty?, Introduction.
2. AWS Security Blog: A common security pattern is described: "This solution uses Amazon GuardDuty to detect suspicious activity... Amazon CloudWatch Events [now EventBridge] to detect the GuardDuty finding, and an AWS Lambda function to perform the remediation by updating the AWS WAF IP block list."
Source: AWS Security Blog, How to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hosts, Solution overview section.
3. Amazon EventBridge Documentation: "An event bus receives events from a source and routes them to targets based on rules. A rule matches incoming events and sends them to targets for processing... When an event matches a rule, EventBridge sends the event to the specified targets. For example, you can create a rule that invokes a Lambda function..."
Source: AWS Documentation, Amazon EventBridge User Guide, "Amazon EventBridge event buses".
4. AWS Well-Architected Framework - Security Pillar: This framework emphasizes implementing detective controls and automating responses. "Implement detective controls to identify potential security threats or incidents... Automate response to events to reduce the time to react." The architecture in option A directly implements this principle.
Source: AWS Well-Architected Framework, Security Pillar, "SEC 07: How do you detect and investigate security events?", and "SEC 08: How do you protect your compute resources?".
Options
A:
Use Amazon Kinesis Data Firehose to ingest the data.
B:
Use AWS Lambda with AWS Step Functions to process the data.
C:
Use AWS Database Migration Service (AWS DMS) to ingest the data.
D:
Use Amazon EC2 instances in an Auto Scaling group to process the data.
E:
Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
Show Answer
Correct Answer:
Use Amazon Kinesis Data Firehose to ingest the data., Use AWS Lambda with AWS Step Functions to process the data.
Explanation
The goal is to create a scalable, serverless solution to improve the performance of a near-real-time streaming application with a 30-minute processing job.
Amazon Kinesis Data Firehose is a fully managed, serverless service designed for scalable, near-real-time ingestion of streaming data, making it an ideal choice for the ingestion part of the solution.
For processing, a single AWS Lambda function cannot be used as its maximum execution timeout is 15 minutes. However, AWS Step Functions is a serverless orchestrator that can manage long-running workflows. It can break the 30-minute job into smaller, parallelizable tasks executed by multiple Lambda functions. This serverless pattern directly addresses the performance bottleneck by parallelizing the workload and meets the long-running requirement.
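For illustration, the ingestion side can be as simple as producers writing records to a Firehose delivery stream, as in the boto3 sketch below; the stream name and payload are placeholders. The Step Functions workflow that fans the 30-minute job out to parallel Lambda tasks is defined separately as a state machine and is not shown.

import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# "telemetry-stream" is a placeholder delivery stream name.
firehose.put_record(
    DeliveryStreamName="telemetry-stream",
    Record={"Data": (json.dumps({"device_id": "abc123", "value": 42}) + "\n").encode("utf-8")},
)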
References
1. Amazon Kinesis Data Firehose: "Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration."
Source: AWS Documentation, Amazon Kinesis Data Firehose Developer Guide, "What Is Amazon Kinesis Data Firehose?".
2. AWS Lambda Limits: "Timeout - The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum value is 900 seconds (15 minutes)."
Source: AWS Documentation, AWS Lambda Developer Guide, "Quotas", Lambda function configuration quotas table.
3. AWS Step Functions for Long-Running Workflows: "AWS Step Functions is a serverless orchestration service that lets you combine AWS Lambda functions and other AWS services to build business-critical applications... You can create long-running, automated workflows for applications that require human interaction, or workflows that can last for up to a year."
Source: AWS Documentation, AWS Step Functions Developer Guide, "What is AWS Step Functions?".
4. AWS Database Migration Service (DMS) Purpose: "AWS Database Migration Service (AWS DMS) is a managed migration and replication service that helps you move your database and analytics workloads to AWS quickly, securely, and with minimal downtime..."
Source: AWS Documentation, AWS DMS User Guide, "What is AWS Database Migration Service?".
Options
A:
Use a CloudFront security policy to create a certificate.
B:
Use a CloudFront origin access control (OAC) to create a certificate.
C:
Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
D:
Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
Show Answer
Correct Answer:
Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
Explanation
AWS Certificate Manager (ACM) is the designated service for provisioning, managing, and deploying public SSL/TLS certificates for use with AWS services, including Amazon CloudFront. ACM provides managed renewal, which automatically renews certificates before they expire, meeting the automation requirement.
For domain validation, DNS validation is more operationally efficient than email validation. When using Amazon Route 53 as the DNS provider, ACM can automatically add the required CNAME record to the DNS zone to validate domain ownership. This process, along with managed renewal, creates a fully automated, end-to-end lifecycle for the certificate with no manual intervention required, thus offering the highest operational efficiency.
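A minimal boto3 sketch of the request is shown below; the domain name is a placeholder. The CNAME that proves ownership is returned by ACM and must exist in the hosted zone (ACM can add it for you when you use Route 53 through the console, or you can create it yourself from the DescribeCertificate output).

import boto3

# CloudFront requires ACM certificates from the us-east-1 Region.
acm = boto3.client("acm", region_name="us-east-1")

cert = acm.request_certificate(
    DomainName="www.example.com",  # placeholder domain
    ValidationMethod="DNS",
)

# The validation CNAME name/value appears here once ACM processes the request.
detail = acm.describe_certificate(CertificateArn=cert["CertificateArn"])
print(detail["Certificate"]["DomainValidationOptions"])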
References
1. AWS Certificate Manager User Guide, "Domain validation": This document compares DNS and email validation. It states, "We recommend that you use DNS validation instead of email validation. DNS validation has two main advantages over email validation...ACM can renew certificates automatically that you validated by using DNS...If you use Route 53 to manage your public DNS records, you can allow ACM to write the CNAME for you."
2. AWS Certificate Manager User Guide, "Managed renewal for ACM certificates": This guide explains the automated renewal process. For DNS-validated certificates, "ACM renews the certificate automatically as long as the certificate is in use and your CNAME record remains in place in your DNS configuration." This confirms the automation and efficiency of the DNS method.
3. AWS CloudFront Developer Guide, "Requirements for using SSL/TLS certificates with CloudFront": This document specifies the use of ACM for custom domains. It states, "To use an SSL/TLS certificate from AWS Certificate Manager (ACM), you must request or import the certificate in the US East (N. Virginia) Region (us-east-1)." This confirms ACM is the correct service to integrate with CloudFront.
4. AWS CloudFront Developer Guide, "Security policies": This section details the function of security policies: "A security policy determines...the SSL/TLS protocol that CloudFront uses to encrypt the content that it returns to viewers...[and] the ciphers that CloudFront uses." This confirms it is unrelated to certificate creation.
Options
A:
Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify the changes to the OU hierarchy.
B:
Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to
identify the changes to the OU hierarchy.
C:
Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization
trail to identify the changes to the OU hierarchy.
D:
Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection operation on a stack to identify the changes to the OU hierarchy.
Show Answer
Correct Answer:
Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify the changes to the OU hierarchy.
Explanation
AWS Control Tower is a managed service designed to set up and govern a secure, multi-account AWS environment based on best practices. It establishes a "landing zone" which includes a predefined Organizational Unit (OU) structure. Control Tower has a built-in drift detection capability that continuously monitors the landing zone for any changes that deviate from its governance policies. This includes changes to the OU hierarchy. When drift is detected, Control Tower can automatically send notifications via Amazon EventBridge and Amazon SNS to the operations team. This provides a fully managed solution that directly meets both requirements with the least possible operational overhead.
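As a hedged sketch of the notification plumbing, the boto3 calls below route Control Tower events to an SNS topic through EventBridge. The "aws.controltower" source is documented for Control Tower lifecycle events, but the exact drift detail-types should be verified before narrowing the pattern; the rule name, Region, and topic ARN are placeholders.

import json
import boto3

events = boto3.client("events", region_name="us-east-1")  # assumed Control Tower home Region

events.put_rule(
    Name="control-tower-drift",
    EventPattern=json.dumps({"source": ["aws.controltower"]}),  # broad filter; narrow after confirming detail-types
    State="ENABLED",
)

events.put_targets(
    Rule="control-tower-drift",
    Targets=[{"Id": "ops-sns", "Arn": "arn:aws:sns:us-east-1:111122223333:ops-notifications"}],  # placeholder topic
)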
References
1. AWS Control Tower Documentation - Detect and resolve drift in AWS Control Tower: This document states, "Drift is a state in which the resources in your landing zone are not in conformance with the governance policies that AWS Control Tower has established for them... AWS Control Tower scans your landing zone OUs and accounts to detect drift." It also explains that notifications for drift events are sent to Amazon EventBridge. (Source: AWS Control Tower User Guide, "Detect and resolve drift in AWS Control Tower").
2. AWS Control Tower Documentation - How AWS Control Tower works: This guide explains that Control Tower creates a foundational OU structure as part of the landing zone setup. Any unauthorized modifications to this structure are considered drift. (Source: AWS Control Tower User Guide, "How AWS Control Tower works", Section: "Landing zone").
3. AWS Config Developer Guide - Supported AWS Resource Types: The official list of resource types supported by AWS Config does not include AWS::Organizations::OrganizationalUnit. This confirms that AWS Config is not the appropriate tool for directly monitoring the OU structure itself. (Source: AWS Config Developer Guide, Appendix: "Supported Resource Types").
Options
A:
Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from
the EC2 instance that was launched most recently. Configure the ALB to make changes to the website
assets only in the newest EC2 instance.
B:
Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
C:
Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
D:
Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.
Show Answer
Correct Answer:
Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
Explanation
Amazon Elastic File System (Amazon EFS) is a fully managed, scalable, and elastic file storage service for use with AWS Cloud services and on-premises resources. It is designed to be mounted by multiple Amazon EC2 instances concurrently, even across different Availability Zones. By storing the CMS assets on an EFS file system and mounting it on each EC2 instance, any update made from one instance is immediately and consistently available to all other instances. This provides a centralized, low-latency solution for sharing up-to-date content, directly meeting the core requirement of the scenario.
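For illustration, the boto3 sketch below creates the shared file system and one mount target per Availability Zone used by the Auto Scaling group; subnet and security group IDs are placeholders. Each instance then mounts the file system (for example from user data) using NFS or the EFS mount helper.

import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs = efs.create_file_system(
    CreationToken="cms-assets",  # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per AZ used by the web tier (placeholder IDs).
for subnet in ["subnet-0aaa111", "subnet-0bbb222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-0ccc333"],  # allows NFS (TCP 2049) from the web servers
    )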
References
1. Amazon EFS User Guide: "Amazon EFS is a file storage service... It's built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, so you don't need to manage storage. It's designed to provide massively parallel shared access for thousands of Amazon EC2 instances..."
Source: AWS Documentation, Amazon EFS User Guide, "What is Amazon Elastic File System?", Section: "How Amazon EFS works".
2. AWS Whitepaper - "Storage for Your Web Applications": This paper explicitly discusses the use case of EFS for content management systems. "For web serving and content management, you need a durable, highly available storage solution that can be shared across a fleet of web servers... Amazon EFS is a file storage service for Amazon EC2 instances. Amazon EFS is easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily."
Source: AWS Whitepapers & Guides, Storage for Your Web Applications, Page 4, Section: "Web Serving and Content Management".
3. Amazon EBS User Guide: "An EBS volume is an off-instance storage device that can be attached to your instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive... An Amazon EBS volume can only be attached to a single instance at a time."
Source: AWS Documentation, Amazon EBS User Guide, "Amazon EBS volumes", Section: "Volume attachment and detachment".
Options
A:
Purchase Partial Upfront Reserved Instances for a 3-year term.
B:
Purchase a No Upfront Compute Savings Plan for a 1-year term.
C:
Purchase All Upfront Reserved Instances for a 1-year term.
D:
Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term.
Show Answer
Correct Answer:
Purchase a No Upfront Compute Savings Plan for a 1-year term.
Explanation
The core requirement is to optimize costs while maintaining the flexibility to change EC2 instance families and types every few months. A Compute Savings Plan is the ideal solution as it provides the most flexibility. It automatically applies discounts to any EC2 instance usage globally, regardless of the instance family, size, operating system, tenancy, or AWS Region. This directly supports the company's need to frequently alter its instance configurations without losing the cost-saving benefits. A 1-year term offers a significant discount with a moderate commitment period.
References
1. AWS Savings Plans User Guide, "What are Savings Plans?": This document states, "Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66%... These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, Region, OS or tenancy." This directly supports the choice of a Compute Savings Plan for maximum flexibility.
2. AWS Savings Plans User Guide, "Choosing between Savings Plans and RIs": The guide explains, "If you want the flexibility to change instance families, Regions, operating systems, or tenancies, you should purchase a Compute Savings Plan." This confirms that for the scenario described, a Compute Savings Plan is superior to RIs.
3. AWS Savings Plans User Guide, "EC2 Instance Savings Plans": This section clarifies the limitation of this plan type: "EC2 Instance Savings Plans... provide the lowest prices, offering savings up to 72%... in exchange for commitment to usage of individual instance families in a chosen Region (for example, M5 usage in N. Virginia)." This highlights why it is the incorrect choice when family changes are required.
Options
A:
Use the AWS Schema Conversion Tool (AWS SCT) to rewrite the SQL queries in the applications.
B:
Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the applications.
C:
Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and
AWS Database Migration Service (AWS DMS).
D:
Use Amazon RDS Proxy to connect the applications to Aurora PostgreSQL.
E:
Use AWS Database Migration Service (AWS DMS) to rewrite the SQL queries in the applications.
Show Answer
Correct Answer:
Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the applications., Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and
AWS Database Migration Service (AWS DMS).
Explanation
This scenario requires a two-part solution: migrating the database itself and ensuring the existing application can communicate with the new database engine with minimal changes.
Babelfish for Aurora PostgreSQL is a capability that enables an Aurora PostgreSQL cluster to understand the Tabular Data Stream (TDS) protocol and T-SQL, the dialect used by Microsoft SQL Server. This allows the application, originally built for SQL Server, to connect to the new Aurora database with few to no code changes.
For the migration of the database schema and data, the standard AWS toolset for heterogeneous migrations (from one database engine to another) is the AWS Schema Conversion Tool (SCT) and AWS Database Migration Service (DMS). SCT converts the source schema and code objects, and DMS performs the data migration.
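As a hedged sketch, Babelfish is turned on through a custom cluster parameter group in which rds.babelfish_status is set to on; the parameter group family and names below are assumptions and should be matched to your Aurora PostgreSQL version. The schema and data migration with SCT and DMS is performed separately.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Parameter group family is an assumption; align it with the cluster's engine version.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    DBParameterGroupFamily="aurora-postgresql15",
    Description="Aurora PostgreSQL cluster with Babelfish turned on",
)

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    Parameters=[{
        "ParameterName": "rds.babelfish_status",
        "ParameterValue": "on",
        "ApplyMethod": "pending-reboot",
    }],
)
# Reference this parameter group when creating the Aurora PostgreSQL cluster.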
References
1. Babelfish for Aurora PostgreSQL: AWS Documentation, Working with Babelfish for Aurora PostgreSQL. "Babelfish for Aurora PostgreSQL is a capability of Amazon Aurora PostgreSQL-Compatible Edition that enables your Aurora cluster to understand database requests from applications written for Microsoft SQL Server... With Babelfish, applications that were originally written for SQL Server can work with Aurora with fewer code changes."
Source: AWS Documentation, Amazon Aurora User Guide, "Working with Babelfish for Aurora PostgreSQL".
2. AWS Schema Conversion Tool (SCT): AWS Documentation, What is the AWS Schema Conversion Tool?. "Use the AWS Schema Conversion Tool (AWS SCT) to convert your existing database schema from one database engine to another... You can convert relational OLTP schema or a data warehouse schema."
Source: AWS Documentation, AWS Schema Conversion Tool User Guide, "What is the AWS Schema Conversion Tool?".
3. AWS Database Migration Service (DMS): AWS Documentation, What is AWS Database Migration Service?. "AWS Database Migration Service (AWS DMS) is a web service that you can use to migrate data from a source data store to a target data store... For heterogeneous migrations, the source and target databases are of different types, such as an Oracle database to an Amazon Aurora database."
Source: AWS Documentation, AWS Database Migration Service User Guide, "What is AWS Database Migration Service?".
Options
A:
Write an AWS Lambda function to create an RDS snapshot every day.
B:
Modify the RDS database to have a retention period of 30 days for automated backups.
C:
Use AWS Systems Manager Maintenance Windows to modify the RDS backup retention period.
D:
Create a manual snapshot every day by using the AWS CLI. Modify the RDS backup retention
period.
Show Answer
Correct Answer:
Modify the RDS database to have a retention period of 30 days for automated backups.
Explanation
Amazon RDS provides a built-in automated backup feature that takes a daily snapshot of the database instance during a configurable backup window. The default retention period for these backups is 7 days. To meet the requirements, the most efficient solution is to modify the DB instance's configuration to change the backup retention period from the default to 30 days. This is a one-time setting that leverages the existing, managed RDS functionality, thus incurring the least operational overhead. No scripting, manual intervention, or additional services are required.
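A minimal boto3 sketch of that one-time change is shown below; the DB instance identifier is a placeholder.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Automated daily backups are then retained for 30 days.
rds.modify_db_instance(
    DBInstanceIdentifier="production-db",  # placeholder identifier
    BackupRetentionPeriod=30,
    ApplyImmediately=True,
)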
References
1. Amazon RDS User Guide - Backing up and restoring an Amazon RDS DB instance: "Amazon RDS creates and saves automated backups of your DB instance... during the backup window... You can set the backup retention period to a value from 0 to 35 days... To modify the backup retention period, you can use the AWS Management Console, the AWS CLI, or the RDS API." This confirms that modifying the retention period is the standard, built-in method.
2. Amazon RDS User Guide - Modifying an Amazon RDS DB instance: This section details the process of changing DB instance settings, including the BackupRetentionPeriod. The procedure is a simple modification of the instance properties, highlighting its low operational overhead. (See the "Modifying a DB instance" section in the console or the modify-db-instance command in the CLI documentation).
Options
A:
Deploy an EC2 instance with enhanced networking as a shared NFS storage system. Export the NFS share. Mount the NFS share on the EC2 instances in the Auto Scaling group.
B:
Create an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Mount the S3 bucket on the EC2 instances in the Auto Scaling group.
C:
Deploy an SFTP server endpoint by using AWS Transfer for SFTP and an Amazon S3 bucket. Configure the EC2 instances in the Auto Scaling group to connect to the SFTP server.
D:
Create an Amazon Elastic File System (Amazon EFS) file system with mount points in multiple Availability Zones. Use the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the NFS share on the EC2 instances in the Auto Scaling group.
Show Answer
Correct Answer:
Create an Amazon Elastic File System (Amazon EFS) file system with mount points in multiple Availability Zones. Use the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the NFS share on the EC2 instances in the Auto Scaling group.
Explanation
The core requirements are a highly available, scalable, shared file system for a Linux application that cannot be modified, with a cost-effective storage tier for infrequently accessed data. Amazon EFS is a fully managed, scalable, and highly available NFS file system. It can be mounted concurrently by EC2 instances across multiple Availability Zones, satisfying the high availability and shared access needs. The application can mount the EFS volume using the standard NFS protocol without any code changes. By enabling EFS Lifecycle Management, files that are not accessed frequently can be automatically moved to the EFS Standard-Infrequent Access (EFS IA) storage class, which significantly reduces storage costs and meets the "MOST cost-effectively" requirement for the described access pattern.
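The cost-optimization piece can be expressed as an EFS lifecycle policy, as in the boto3 sketch below; the file system ID is a placeholder.

import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Move files to the Infrequent Access storage class after 30 days without access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)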
References
1. Amazon EFS Features: "Amazon EFS is built to scale on demand to petabytes without disrupting applications... It supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol... With Amazon EFS, you can mount your file systems on your on-premises datacenter servers when connected to your Amazon VPC with AWS Direct Connect or AWS VPN."
Source: AWS Documentation, "What is Amazon Elastic File System?", Features section.
2. EFS High Availability and Durability: "Amazon EFS is designed to be highly available and durable. Amazon EFS Standard and One Zone storage classes are designed for 99.99% (4 nines) of availability... EFS Standard redundantly stores data and metadata across multiple geographically separated Availability Zones (AZs) within a Region."
Source: AWS Documentation, "Amazon EFS: How it works", Availability and durability section.
3. EFS Storage Classes and Cost Optimization: "Amazon EFS offers two storage classes: Amazon EFS Standard and Amazon EFS Infrequent Access (EFS IA)... EFS IA provides price/performance that is cost-optimized for files that are not accessed every day... You can use EFS Lifecycle Management to automatically move files from EFS Standard to EFS IA."
Source: AWS Documentation, "Amazon EFS storage classes".
Options
A:
Set up an Amazon CloudWatch alarm to monitor database utilization. Scale up or scale down the
database capacity based on the amount of traffic.
B:
Migrate the database to Amazon EC2 instances in an Auto Scaling group. Increase or decrease the number of instances based on the amount of traffic.
C:
Migrate the database to an Amazon Aurora Serverless DB cluster to scale up or scale down the
capacity based on the amount of traffic.
D:
Schedule an AWS Lambda function to provision the required database capacity at the start of each
day. Schedule another Lambda function to reduce the capacity at the end of each day.
Show Answer
Correct Answer:
Migrate the database to an Amazon Aurora Serverless DB cluster to scale up or scale down the
capacity based on the amount of traffic.
Explanation
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora. It is specifically designed for applications with infrequent, intermittent, or unpredictable workloads. Aurora Serverless automatically starts up, shuts down, and scales compute capacity based on the application's needs, eliminating the need to provision for peak capacity. This directly addresses the problem of wasted capacity during non-peak hours. Migrating to Aurora Serverless is the most efficient solution as AWS manages the scaling automatically, thus meeting the requirement for the least operational effort.
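As a hedged sketch, an Aurora Serverless v2 cluster defines its capacity range with ServerlessV2ScalingConfiguration and attaches instances that use the db.serverless instance class; the identifiers, engine, credentials, and capacity range below are assumptions.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="orders-serverless",            # placeholder
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    MasterUserPassword="REPLACE_ME",                     # use Secrets Manager in practice
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Serverless v2 capacity applies to instances created with the db.serverless class.
rds.create_db_instance(
    DBInstanceIdentifier="orders-serverless-writer",
    DBClusterIdentifier="orders-serverless",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)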
References
1. Amazon Aurora User Guide, "Using Amazon Aurora Serverless v2": "Aurora Serverless v2 is an on-demand, automatic scaling configuration for Amazon Aurora... It's suitable for a broad set of applications. For example, it can be a good choice for workloads that have infrequent or intermittent activity and also for workloads that have regular cycles of high and low usage."
2. Amazon Aurora User Guide, "How Aurora Serverless v2 works": "With Aurora Serverless v2, you don't have to provision, scale, and manage any database servers. Instead, you create an Aurora Serverless v2 DB cluster... The database capacity automatically scales up and down based on your application's needs."
3. AWS Documentation, "Amazon Aurora FAQs", Section: Amazon Aurora Serverless: "Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora... It enables you to run your database in the cloud without managing any database capacity. You can specify the desired database capacity range, and Aurora Serverless automatically scales to meet your applicationโs needs."
Options
A:
Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS Key Management Service (SSE-KMS) for encryption.
Add the kms:Decrypt permission for the Lambda execution role.
B:
Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS) FIFO
queues as the event source. Use SQS managed encryption keys (SSE-SQS) for encryption. Add the
encryption key invocation permission for the Lambda function.
C:
Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
FIFO queues as the event source. Use AWS KMS keys (SSE-KMS). Add the kms:Decrypt permission for
the Lambda execution role.
D:
Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS KMS keys (SSE-KMS) for encryption. Add the
encryption key invocation permission for the Lambda function.
Show Answer
Correct Answer:
Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS) FIFO
queues as the event source. Use SQS managed encryption keys (SSE-SQS) for encryption. Add the
encryption key invocation permission for the Lambda function.
Explanation
All four options rely on Lambda's native SQS event source mapping, which provides at-least-once processing of messages.
For encryption, SQS offers two choices:
• SSE-SQS (SQS-managed, AWS-owned keys) – no additional cost.
• SSE-KMS (customer-managed keys) – each Encrypt/Decrypt call incurs KMS API charges.
Assuming roughly one KMS request per message, 1 million messages is about 100 blocks of 10,000 requests; at $0.03 per 10,000 requests, that is about 100 × $0.03 = $3 in KMS fees.
The FIFO queue surcharge over a Standard queue is only $0.10 per million requests ($0.50 vs $0.40).
Therefore "FIFO + SSE-SQS" (Option B) is markedly less expensive than any variant that uses SSE-KMS, while still providing at-least-once delivery and strong, compliant encryption.
Thus Option B meets the security and durability requirements at the lowest total cost.
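For illustration, the boto3 sketch below creates a FIFO queue with SQS-managed encryption (SSE-SQS) and wires it to a Lambda function through an event source mapping; the queue name and function name are placeholders.

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
lam = boto3.client("lambda", region_name="us-east-1")

queue = sqs.create_queue(
    QueueName="orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "SqsManagedSseEnabled": "true",  # SSE-SQS: no KMS API charges
    },
)

queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Placeholder function name; Lambda polls the queue and deletes messages after successful processing.
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-orders",
    BatchSize=10,
)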
References
1. Amazon SQS Pricing – "Standard Queue: $0.40 per 1M requests; FIFO Queue: $0.50 per 1M requests." (https://aws.amazon.com/sqs/pricing, section "Pricing")
2. AWS Key Management Service Pricing – "$0.03 per 10,000 requests" for Encrypt/Decrypt. (https://aws.amazon.com/kms/pricing, "Request Charges")
3. Amazon SQS Developer Guide – "SSE-SQS uses an AWS-owned key at no additional cost; SSE-KMS uses a customer master key and incurs KMS charges." (Server-Side Encryption, 2023-10-25, para 4)
4. Amazon SQS Developer Guide – "Standard queues provide at-least-once delivery... FIFO queues provide exactly-once processing." (Introduction, "Queue Types", para 2-3)
5. AWS Lambda Developer Guide – "When you configure an SQS queue as an event source, Lambda processes messages at least once." (Using AWS Lambda with Amazon SQS, 2023-09-14, para 1)
Options
A:
Create a new AWS Cost and Usage Report. Search the report for cost recommendations for the
EC2 instances, the Auto Scaling group, and the EBS volumes.
B:
Create new Amazon CloudWatch billing alerts. Check the alert statuses for cost recommendations
for the EC2 instances, the Auto Scaling group, and the EBS volumes.
C:
Configure AWS Compute Optimizer for cost recommendations for the EC2 instances, the Auto
Scaling group, and the EBS volumes.
D:
Configure AWS Compute Optimizer for cost recommendations for the EC2 instances. Create a new
AWS Cost and Usage Report. Search the report for cost recommendations for the Auto Scaling group
and the EBS volumes.
Show Answer
Correct Answer:
Configure AWS Compute Optimizer for cost recommendations for the EC2 instances, the Auto
Scaling group, and the EBS volumes.
Explanation
AWS Compute Optimizer is a service specifically designed to analyze the configuration and utilization metrics of AWS resources to provide cost-saving recommendations. It uses machine learning to analyze historical data from Amazon CloudWatch and provides actionable recommendations for Amazon EC2 instances, Auto Scaling groups, and Amazon EBS volumes. This single service directly addresses all the requirements of the question (identifying cost optimizations across all three specified resource types) with the highest operational efficiency by automating the analysis and recommendation process.
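After opting in to Compute Optimizer, the recommendations for all three resource types can be retrieved programmatically, as in the sketch below; the response field names shown in the loop are illustrative of the EC2 recommendation structure.

import boto3

co = boto3.client("compute-optimizer", region_name="us-east-1")

ec2_recs = co.get_ec2_instance_recommendations()
asg_recs = co.get_auto_scaling_group_recommendations()
ebs_recs = co.get_ebs_volume_recommendations()

# Print a summary of the EC2 findings (e.g., Optimized, Overprovisioned, Underprovisioned).
for rec in ec2_recs["instanceRecommendations"]:
    print(rec["instanceArn"], rec["finding"])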
References
1. AWS Compute Optimizer User Guide, "What is AWS Compute Optimizer?": "AWS Compute Optimizer is a service that analyzes the configuration and utilization metrics of your AWS resources. It reports whether your resources are optimal, and generates optimization recommendations to reduce the cost and improve the performance of your workloads."
2. AWS Compute Optimizer User Guide, "Supported resources and requirements": This section explicitly lists "EC2 instances," "Auto Scaling groups," and "Amazon EBS volumes" as supported resource types for which Compute Optimizer provides recommendations.
3. AWS Cost and Usage Reports User Guide, "What are AWS Cost and Usage Reports?": "The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of AWS cost and usage data available... The report lists AWS usage for each service category... in hourly, daily, or monthly line items." This highlights that CUR is a data source, not a recommendation engine.
4. Amazon CloudWatch User Guide, "Using Amazon CloudWatch alarms," "Creating a billing alarm to monitor your estimated AWS charges": This documentation explains that billing alarms are used to "send you an email message when the estimated charges on your AWS bill exceed a certain level (threshold) that you define." This confirms their purpose is budget monitoring, not resource optimization.
Options
A:
Create an Amazon S3 bucket to store the data. Configure the application to scan for new data in
the bucket for processing.
B:
Create an Amazon API Gateway endpoint to handle transmitted location coordinates. Use an AWS
Lambda function to process each item concurrently.
C:
Create an Amazon Simple Queue Service (Amazon SQS) queue to store the incoming data.
Configure the application to poll for new messages for processing.
D:
Create an Amazon DynamoDB table to store transmitted location coordinates. Configure the
application to query the table for new data for processing. Use TTL to remove data that has been
processed.
Show Answer
Correct Answer:
Create an Amazon Simple Queue Service (Amazon SQS) queue to store the incoming data.
Configure the application to poll for new messages for processing.
Explanation
The core problem is the tight coupling between the data producers (GPS trackers) and the consumer (EC2 web application). During a traffic spike, the consumer is overwhelmed, leading to data loss. The best solution is to introduce a durable, scalable buffer between the two components.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service designed for this exact purpose. By placing an SQS queue in front of the EC2 application, incoming data from the trackers is stored durably in the queue. The EC2 instances can then pull messages from the queue and process them at a sustainable rate, preventing them from being overwhelmed. This decouples the components, prevents data loss, and as a fully managed service, it has minimal operational overhead.
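As a minimal sketch of the decoupled flow, producers enqueue each coordinate update and the EC2 application long-polls the queue at its own pace, deleting messages only after they are processed; the queue URL and payload fields are placeholders.

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/gps-coordinates"  # placeholder

# Producer side: enqueue each coordinate update.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"device": "tracker-1", "lat": 47.6, "lon": -122.3}),
)

# Consumer side: the EC2 application polls with long polling and processes at a sustainable rate.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    coordinates = json.loads(msg["Body"])  # application-specific processing goes here
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])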
References
1. AWS Documentation - Amazon SQS Developer Guide: "Amazon SQS offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components... SQS provides a generic web services API that you can access using any programming language that the AWS SDK supports."
Source: AWS Documentation, What is Amazon SQS?, Retrieved from https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
2. AWS Well-Architected Framework - Reliability Pillar: The framework recommends decoupling components to improve reliability. "A common pattern for this is to use a queue. The service that creates the work puts a message on a queue. A separate service can then read the message from the queue, do the work, and then delete the message." This directly describes the solution in option C.
Source: AWS Well-Architected Framework, Reliability Pillar, Page 26, "Decouple components". Retrieved from https://d1.awsstatic.com/whitepapers/architecture/AWSWell-ArchitectedFramework.pdf
3. AWS Documentation - Decoupling applications for scalability and resilience: "By decoupling your application's components, you can build more resilient and scalable applications... Amazon SQS provides a message queue that can be used to buffer requests and decouple different components of your application."
Source: AWS Prescriptive Guidance, Decoupling applications for scalability and resilience with Amazon SQS and Amazon SNS. Retrieved from https://aws.amazon.com/prescriptive-guidance/patterns/decouple-applications-for-scalability-and-resilience-with-amazon-sqs-and-amazon-sns/