AWS SAA-C03 Exam Questions - Solutions Architect Questions 2025

Updated: October 02, 2025

Get authentic, updated questions for the AWS Certified Solutions Architect – Associate (SAA-C03) exam, all reviewed by certified AWS cloud experts. Each question includes accurate answers with detailed explanations and references, plus full access to our interactive exam simulator. Try the free sample and see why IT professionals rely on Cert Empire for confident, first-time success.

Exam Questions

Question 1

A company runs a stateful production application on Amazon EC2 instances. The application requires at least two EC2 instances to always be running. A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The solutions architect creates an Auto Scaling group of EC2 instances. Which set of additional steps should the solutions architect take to meet these requirements?
Options
A: Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand Instance in a second Availability Zone.
B: Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two On-Demand Instances in a second Availability Zone.
C: Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability Zone.
D: Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two Spot Instances in a second Availability Zone.
Correct Answer:
Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand Instance in a second Availability Zone.
Explanation
The primary requirements are to maintain a minimum of two running instances and to ensure high availability and fault tolerance. Setting the Auto Scaling group's minimum capacity to two directly satisfies the instance count requirement. To achieve high availability, the architecture must withstand the failure of a single component, such as an Availability Zone (AZ). By configuring the Auto Scaling group to launch instances across two separate AZs, the application is protected from an AZ-level outage. If one AZ fails, the instance in the other AZ continues to serve traffic, and the Auto Scaling group will automatically launch a replacement instance in a healthy AZ to meet the minimum of two.
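As a hedged illustration, the following boto3 sketch creates such a group with a minimum capacity of two spread across two Availability Zones. The launch template name and subnet IDs are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical launch template and subnet IDs; the two subnets
# must live in different Availability Zones.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="stateful-app-asg",
    LaunchTemplate={
        "LaunchTemplateName": "stateful-app-template",
        "Version": "$Latest",
    },
    MinSize=2,            # never fall below two running instances
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # one subnet per AZ
    HealthCheckType="EC2",
)
```

If one Availability Zone becomes unavailable, the group launches a replacement in the healthy zone to maintain the minimum of two.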
References

1. AWS Auto Scaling User Guide, Section: "Distribute instances across Availability Zones". The documentation states, "By launching your instances in separate Availability Zones, you can protect your applications from the failure of a single location... When one Availability Zone becomes unhealthy or unavailable, Amazon EC2 Auto Scaling launches new instances in an unaffected Availability Zone."

2. AWS Auto Scaling User Guide, Section: "Set scaling limits for your Auto Scaling group". This section explains the function of minimum capacity: "The minimum capacity is the minimum number of instances that you want in your Auto Scaling group." This directly supports setting the minimum to two to meet the requirement.

3. AWS Well-Architected Framework - Reliability Pillar (July 2023), Page 23, Section: "Deploy the workload to multiple locations". The framework advises, "For a regional service, you can increase availability by deploying the workload to multiple AZs within an AWS Region... If one AZ fails, the workload in other AZs can continue to operate."

4. Amazon EC2 User Guide, Section: "Instance purchasing options". This guide describes On-Demand Instances as suitable for "applications with short-term, spiky, or unpredictable workloads that cannot be interrupted," which aligns with the needs of a production application, unlike Spot Instances.

Question 2

A company manages a data lake in an Amazon S3 bucket that numerous applications access. The S3 bucket contains a unique prefix for each application. The company wants to restrict each application to its specific prefix and to have granular control of the objects under each prefix. Which solution will meet these requirements with the LEAST operational overhead?
Options
A: Create dedicated S3 access points and access point policies for each application.
B: Create an S3 Batch Operations job to set the ACL permissions for each object in the S3 bucket.
C: Replicate the objects in the S3 bucket to new S3 buckets for each application. Create replication rules by prefix.
D: Replicate the objects in the S3 bucket to new S3 buckets for each application. Create dedicated S3 access points for each application.
Correct Answer:
Create dedicated S3 access points and access point policies for each application.
Explanation
Amazon S3 Access Points are the ideal solution for this scenario. They are unique hostnames that you can create to enforce distinct permissions for any request made to a shared S3 bucket. By creating a dedicated access point for each application, you can attach a specific access point policy that restricts access to that application's unique prefix. This approach provides granular, prefix-level control within a single bucket, directly meeting the requirements. It significantly simplifies permissions management compared to a single, complex bucket policy and avoids the high cost and management complexity of data duplication, thus having the least operational overhead.
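The sketch below shows, in boto3, roughly how a dedicated access point and a prefix-scoped access point policy might be created. The account ID, bucket, Region, role, and prefix names are hypothetical.

```python
import json
import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"          # hypothetical account ID
bucket = "shared-data-lake-bucket"   # hypothetical bucket name
app_name = "app-a"                   # hypothetical application prefix

# Create a dedicated access point for the application.
s3control.create_access_point(
    AccountId=account_id,
    Name=f"{app_name}-ap",
    Bucket=bucket,
)

# Restrict the access point to the application's own prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/{app_name}-role"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/{app_name}-ap/object/{app_name}/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id,
    Name=f"{app_name}-ap",
    Policy=json.dumps(policy),
)
```

Each application would then address its data through its access point ARN or alias rather than the bucket name.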
References

1. Amazon S3 User Guide, "Managing data access with Amazon S3 access points": "Amazon S3 access points are named network endpoints that are attached to buckets that you can use to perform S3 object operations... Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point." This document explicitly details how access points simplify managing access for shared datasets.

2. Amazon S3 User Guide, "Should I use a bucket policy or an access point policy?": "For a shared dataset with hundreds of applications, creating and managing a single bucket policy can be challenging. With S3 Access Points, you can create and manage application-specific policies without having to change the bucket policy." This directly supports using access points for multiple applications accessing a single bucket.

3. Amazon S3 User Guide, "Access control list (ACL) overview": "We recommend using S3 bucket policies or IAM policies for access control. Amazon S3 ACLs is a legacy access control mechanism...". This confirms that using ACLs (as suggested in option B) is not the recommended best practice.

4. Amazon S3 User Guide, "Replication": While replication is a powerful feature for data redundancy and geographic distribution, using it for access control as suggested in options C and D is an anti-pattern that increases cost and operational complexity, contradicting the question's core requirement.

Question 3

A company has released a new version of its production application. The company's workload uses Amazon EC2, AWS Lambda, AWS Fargate, and Amazon SageMaker. The company wants to cost optimize the workload now that usage is at a steady state. The company wants to cover the most services with the fewest savings plans. Which combination of savings plans will meet these requirements? (Select TWO.)
Options
A: Purchase an EC2 Instance Savings Plan for Amazon EC2 and SageMaker.
B: Purchase a Compute Savings Plan for Amazon EC2, Lambda, and SageMaker.
C: Purchase a SageMaker Savings Plan.
D: Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2.
E: Purchase an EC2 Instance Savings Plan for Amazon EC2 and Fargate.
Correct Answer:
Purchase a SageMaker Savings Plan. Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2.
Explanation
The goal is to cover four distinct compute services (EC2, Lambda, Fargate, SageMaker) with the fewest possible savings plans. A Compute Savings Plan is the most flexible option, providing discounts on Amazon EC2, AWS Fargate, and AWS Lambda usage. This single plan efficiently covers three of the four required services. Amazon SageMaker usage is not covered by Compute Savings Plans. To gain savings on SageMaker, a dedicated SageMaker Savings Plan must be purchased. This plan applies specifically to SageMaker ML instance usage. Therefore, the combination of a Compute Savings Plan (for EC2, Fargate, Lambda) and a SageMaker Savings Plan (for SageMaker) covers all specified services with only two plans, meeting the requirements.
References

1. AWS Savings Plans User Guide, "What are Savings Plans?": This official guide provides a comparison table that explicitly states what each plan covers.

Compute Savings Plans apply to: "EC2 instances across regions, Fargate, and Lambda".

SageMaker Savings Plans apply to: "SageMaker instance usage".

EC2 Instance Savings Plans apply to: "EC2 instance family in a region".

This directly supports that a Compute SP covers EC2, Fargate, and Lambda, while a separate SageMaker SP is needed for SageMaker.

2. Amazon SageMaker Pricing, "SageMaker Savings Plans" section: "Amazon SageMaker Savings Plans offer a flexible, usage-based pricing model for Amazon SageMaker... These plans automatically apply to eligible SageMaker ML instance usage including SageMaker Studio notebooks, SageMaker On-Demand notebooks, SageMaker Processing, SageMaker Data Wrangler, SageMaker Training, SageMaker Real-Time Inference, and SageMaker Batch Transform." This confirms SageMaker requires its own dedicated savings plan.

3. AWS Compute Blog, "Introducing Compute Savings Plans": "Compute Savings Plans are a new and flexible pricing model that provide savings up to 66% on your AWS compute usage. These plans automatically apply to your Amazon EC2 instances, and your AWS Fargate and AWS Lambda usage." This source confirms the services covered by a Compute Savings Plan, notably excluding SageMaker.

Question 4

A company is designing an event-driven order processing system. Each order requires multiple validation steps after the order is created. An independent AWS Lambda function performs each validation step. Each validation step is independent from the other validation steps. Individual validation steps need only a subset of the order event information. The company wants to ensure that each validation step Lambda function has access to only the information from the order event that the function requires. The components of the order processing system should be loosely coupled to accommodate future business changes. Which solution will meet these requirements?
Options
A: Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a new Lambda function to transform the order data to the format that each validation step requires and to publish the messages to the appropriate SQS queues. Subscribe each validation step Lambda function to its corresponding SQS queue.
B: Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation step Lambda functions to the SNS topic. Use message body filtering to send only the required data to each subscribed Lambda function.
C: Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.
D: Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to subscribe to the SQS queue and to transform the order data to the format that each validation step requires. Use the new Lambda function to perform synchronous invocations of the validation step Lambda functions in parallel on separate threads.
Correct Answer:
Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.
Explanation
Amazon EventBridge is designed for building loosely coupled, event-driven architectures. An EventBridge event bus can receive the initial order event. You can then create a separate rule for each validation step. Each rule can filter events (if necessary) and route them to the appropriate target Lambda function. The key feature that meets the requirement is the Input Transformer. This feature allows you to customize the event payload sent to the target, extracting and reshaping only the necessary fields from the original order event. This ensures each validation Lambda receives only the subset of data it requires, adhering to the principle of least privilege while maintaining a decoupled design.
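A rough boto3 sketch of one such rule and target follows. The bus name, event pattern, function ARN, and field names are hypothetical, and the template assumes orderId is a string and shippingAddress is a JSON object in the source event.

```python
import json
import boto3

events = boto3.client("events")

# Route order-created events to the address-validation Lambda function,
# passing only the fields that validation step needs.
events.put_rule(
    Name="order-address-validation",
    EventBusName="orders-bus",
    EventPattern=json.dumps({
        "source": ["com.example.orders"],
        "detail-type": ["OrderCreated"],
    }),
)

events.put_targets(
    Rule="order-address-validation",
    EventBusName="orders-bus",
    Targets=[{
        "Id": "address-validation-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-address",
        "InputTransformer": {
            # Extract only the order ID and shipping address from the event.
            "InputPathsMap": {
                "orderId": "$.detail.orderId",
                "address": "$.detail.shippingAddress",
            },
            # orderId is substituted as a string, so it is quoted in the template;
            # the address placeholder is assumed to resolve to a JSON object.
            "InputTemplate": '{"orderId": "<orderId>", "shippingAddress": <address>}',
        },
    }],
)
```

The target function would also need a resource-based permission (for example, via the Lambda AddPermission API) so that the rule can invoke it.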
References

1. Amazon EventBridge User Guide, "Transforming event content with input transformation": This document explicitly describes the Input Transformer feature. It states, "You can use input transformation to customize the text from an event before you send it to the target of a rule... you can create a custom payload that includes only the information you want to pass to the target." This directly supports the chosen answer (C).

2. AWS Lambda Developer Guide, "Invoking Lambda functions": This guide details invocation types. For synchronous invocation (RequestResponse), it states, "When you invoke a function synchronously, Lambda runs the function and waits for a response." This waiting process creates a dependency, or tight coupling, which is contrary to the question's requirements and makes option (D) incorrect.

3. Amazon Simple Notification Service Developer Guide, "Amazon SNS message filtering": This guide explains that filter policies are applied to message attributes. It states, "By default, a subscription receives every message published to the topic. To receive a subset of the messages, a subscriber must assign a filter policy to the subscription." The examples clearly show policies matching against key-value pairs in the attributes, not the message body, making option (B) incorrect.

4. AWS Well-Architected Framework, "Decouple components": This design principle, part of the Reliability Pillar, advocates for architectures where components are loosely coupled. It states, "The failure of a single component should not cascade to other components." The synchronous invocation in option (D) and the central transformer in option (A) create tighter coupling than the event-routing pattern of EventBridge.

Question 5

A large international university has deployed all of its compute services in the AWS Cloud. These services include Amazon EC2, Amazon RDS, and Amazon DynamoDB. The university currently relies on many custom scripts to back up its infrastructure. However, the university wants to centralize management and automate data backups as much as possible by using AWS native options. Which solution will meet these requirements?
Options
A: Use third-party backup software with an AWS Storage Gateway tape gateway virtual tape library.
B: Use AWS Backup to configure and monitor all backups for the services in use.
C: Use AWS Config to set lifecycle management to take snapshots of all data sources on a schedule.
D: Use AWS Systems Manager State Manager to manage the configuration and monitoring of backup tasks.
Correct Answer:
Use AWS Backup to configure and monitor all backups for the services in use.
Explanation
AWS Backup is a fully managed, policy-based service that centralizes and automates data protection across AWS services. It is the ideal native solution for this scenario as it supports Amazon EC2 instances, Amazon RDS databases, and Amazon DynamoDB tables. By creating backup plans, the university can define backup schedules, retention policies, and lifecycle rules from a single console. This eliminates the need for custom scripts and provides a centralized, automated way to manage and monitor backups, directly fulfilling the university's requirements.
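For illustration, a boto3 sketch of a backup plan plus a tag-based resource selection might look like the following. The plan name, schedule, vault, role ARN, and tag are hypothetical.

```python
import boto3

backup = boto3.client("backup")

# Daily backup plan with 35-day retention (names and schedule are illustrative).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "university-daily-backups",
        "Rules": [{
            "RuleName": "daily-2am-utc",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 2 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }
)

# Assign resources by tag so EC2 instances, RDS databases, and DynamoDB
# tables tagged backup=daily are all covered by the same plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "daily",
        }],
    },
)
```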
References

1. AWS Backup Developer Guide: "What is AWS Backup?" - This section introduces AWS Backup as a "fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud and on premises." It explicitly lists Amazon EC2, Amazon RDS, and Amazon DynamoDB as supported services.

Source: AWS Backup Developer Guide, "What is AWS Backup?".

2. AWS Backup Product Page: "AWS Backup Features" - The documentation highlights "Centralized backup management" and "Policy-based backup" as key features, allowing users to "configure backup policies and monitor backup activity for your AWS resources in one place."

Source: AWS Backup official product page, "Features" section.

3. AWS Config Developer Guide: "What Is AWS Config?" - This guide states, "AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources." This confirms its purpose is configuration monitoring, not backup execution.

Source: AWS Config Developer Guide, "What Is AWS Config?".

4. AWS Systems Manager User Guide: "What is AWS Systems Manager?" - The documentation describes Systems Manager as the "operations hub for your AWS applications and resources," focusing on tasks like patch management and configuration management, not as a centralized data backup service.

Source: AWS Systems Manager User Guide, "What is AWS Systems Manager?".

Question 6

A company stores several petabytes of data across multiple AWS accounts. The company uses AWS Lake Formation to manage its data lake. The company's data science team wants to securely share selective data from its accounts with the company's engineering team for analytical purposes. Which solution will meet these requirements with the LEAST operational overhead?
Options
A: Copy the required data to a common account. Create an IAM access role in that account. Grant access by specifying a permission policy that includes users from the engineering team accounts as trusted entities.
B: Use the Lake Formation permissions Grant command in each account where the data is stored to allow the required engineering team users to access the data.
C: Use AWS Data Exchange to privately publish the required data to the required engineering team accounts.
D: Use Lake Formation tag-based access control to authorize and grant cross-account permissions for the required data to the engineering team accounts.
Correct Answer:
Use Lake Formation tag-based access control to authorize and grant cross-account permissions for the required data to the engineering team accounts.
Explanation
AWS Lake Formation is designed to simplify building and managing data lakes, including secure, cross-account data sharing. The most efficient method to meet the requirements is using Lake Formation's tag-based access control (TBAC). This allows the data science team to assign tags (e.g., access-level:engineering) to specific databases, tables, or columns. They can then create a single grant policy that gives the engineering team's AWS account access to any resource with that specific tag. This approach is highly scalable, avoids data duplication, and significantly reduces the operational overhead of managing individual resource permissions, especially as the data lake grows.
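A rough boto3 sketch of the tag-based, cross-account grant is shown below. The tag key, tag value, and engineering account ID are hypothetical, and the exact permissions granted would depend on the team's governance policy.

```python
import boto3

lf = boto3.client("lakeformation")

# Define an LF-Tag in the data-owning account.
lf.create_lf_tag(TagKey="access-level", TagValues=["engineering"])

# Grant the engineering account access to any table carrying the tag.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "444455556666"},  # engineering account
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [{"TagKey": "access-level", "TagValues": ["engineering"]}],
        }
    },
    Permissions=["SELECT", "DESCRIBE"],
    PermissionsWithGrantOption=["SELECT", "DESCRIBE"],
)
```

The relevant databases or tables would also be tagged (for example with add_lf_tags_to_resource) so that the single grant applies to them now and to any resources tagged in the future.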
References

1. AWS Lake Formation Developer Guide - Lake Formation tag-based access control: This document states, "Lake Formation tag-based access control (TBAC) is an authorization strategy that defines permissions based on attributes, which are called tags... This helps when you have a large number of data catalog resources and principals to manage." It also details how to grant cross-account permissions using tags. (See section: "Granting permissions on Data Catalog resources").

2. AWS Lake Formation Developer Guide - Sharing data across AWS accounts: This guide explains the two main methods for cross-account sharing: the Named Resource method and the Tag-Based Access Control (TBAC) method. It highlights TBAC as a scalable approach. (See section: "Cross-account data sharing in Lake Formation").

3. AWS Big Data Blog - Simplify and scale your data governance with AWS Lake Formation tag-based access control: This article provides a detailed walkthrough and states, "TBAC is a scalable way to manage permissions in AWS Lake Formation... With TBAC, you can grant permissions on Lake Formation resources to principals in the same account or other accounts..." (See section: "Cross-account sharing with TBAC").

Question 7

A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted. Which solution will meet these requirements?
Options
A: Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive data.
B: Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C: Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D: Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys. Upload the encrypted objects back into Amazon S3.
Correct Answer:
Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
Explanation
The requirement is for full control over the encryption key lifecycle (creation, rotation, disabling) with minimal effort. AWS Key Management Service (AWS KMS) with a customer managed key (CMK) is the only option that meets these criteria. Customer managed keys are created, owned, and managed directly by the customer, providing granular control over their policies, rotation schedules, and enabled/disabled status. Using server-side encryption with AWS KMS keys (SSE-KMS) integrates this control directly with Amazon S3, fulfilling the "minimal effort" requirement by offloading the encryption and decryption processes to AWS servers without needing a custom client-side solution.
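A minimal boto3 sketch of this setup, with a hypothetical bucket name, follows. Creating the key, enabling rotation, and setting the bucket's default encryption are each single API calls.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key; rotation and disabling stay under
# the company's control.
key = kms.create_key(Description="CMK for sensitive S3 data")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)   # automatic annual rotation
# kms.disable_key(KeyId=key_id)         # the company can disable the key at any time

# Make SSE-KMS with this key the bucket's default encryption.
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,  # reduces KMS request costs
        }]
    },
)
```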
References

1. AWS Key Management Service Developer Guide: Under "AWS KMS concepts," the guide distinguishes between key types. For customer managed keys, it states, "You have full control over these KMS keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material..." In contrast, for AWS managed keys, it notes, "...you cannot change the properties of these KMS keys, rotate them, or change their key policies."

Source: AWS KMS Developer Guide, "AWS KMS concepts," Section: "KMS keys."

2. Amazon S3 User Guide: This guide details the different server-side encryption options. For SSE-KMS, it explains, "Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) is similar to SSE-S3, but with some additional benefits and charges for using this service. There are separate permissions for the use of a KMS key that provides an additional layer of control as well as an audit trail." This highlights the control and minimal effort of the integrated service.

Source: Amazon S3 User Guide, "Protecting data using server-side encryption," Section: "Using server-side encryption with AWS KMS keys (SSE-KMS)."

3. Amazon S3 User Guide: When describing SSE-S3, the documentation clarifies the lack of customer control: "When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a root key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data." The management and rotation are handled entirely by AWS.

Source: Amazon S3 User Guide, "Protecting data using server-side encryption," Section: "Using server-side encryption with Amazon S3-managed keys (SSE-S3)."

Question 8

A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on this usage. Which solution will meet these requirements?
Options
A: Use the Instance Scheduler on AWS to configure start and stop schedules.
B: Turn off automatic backups. Create weekly manual snapshots of the database.
C: Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.
D: Purchase All Upfront reserved DB instances.
Correct Answer:
Use the Instance Scheduler on AWS to configure start and stop schedules.
Explanation
The most effective way to optimize costs for a resource with a predictable usage schedule, such as an Amazon RDS instance used only during business hours, is to stop it when it is not needed. When an RDS DB instance is stopped, you are not billed for instance hours, only for provisioned storage. The Instance Scheduler on AWS is an AWS-provided solution that automates the starting and stopping of Amazon EC2 and RDS instances on a defined schedule. This directly addresses the requirements to reduce costs based on the usage pattern and minimizes operational overhead by using a pre-built, managed solution.
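As an illustrative sketch, opting an RDS instance into the Instance Scheduler is largely a matter of tagging it with the name of a schedule defined in the solution's configuration (the solution's default tag key is Schedule). The DB instance ARN and schedule name below are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Instance Scheduler identifies instances by tag; the tag value names a
# weekday business-hours period defined in the solution's configuration.
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:111122223333:db:app-postgres",
    Tags=[{"Key": "Schedule", "Value": "business-hours"}],
)
```

The start/stop schedule itself lives in the Instance Scheduler's configuration, so no custom Lambda code or cron logic has to be maintained.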
References

1. Stopping an Amazon RDS DB instance temporarily: "While your DB instance is stopped, you are charged for provisioned storage... but not for DB instance hours."

Source: AWS Documentation, Amazon RDS User Guide, "Stopping an Amazon RDS DB instance temporarily", Section: "Billing for a stopped DB instance".

2. Instance Scheduler on AWS: "The Instance Scheduler on AWS is a solution that automates the starting and stopping of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances."

Source: AWS Solutions Library, Instance Scheduler on AWS, "Solution overview" section.

3. Amazon RDS Reserved Instances: "Amazon RDS Reserved Instances (RIs) give you the option to reserve a DB instance for a one- or three-year term and in turn receive a significant discount compared to the On-Demand Instance pricing for the DB instance." (This is ideal for steady-state usage).

Source: AWS Documentation, Amazon RDS User Guide, "Working with reserved DB instances", "Overview of reserved DB instances" section.

Question 9

A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions. The company wants to redesign the architecture to be highly available and to use AWS managed solutions. Which solution will meet these requirements?
Options
A: Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.
B: Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C: Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D: Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
Correct Answer:
Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
Explanation
This solution correctly decouples the application into three tiers (static content, application logic, session state) and uses the most appropriate AWS managed services for high availability. Amazon S3 with an Amazon CloudFront distribution is the best practice for hosting and accelerating static content globally. An Application Load Balancer distributing traffic to an Amazon ECS service using AWS Fargate tasks provides a scalable and highly available serverless compute layer for the PHP application across multiple Availability Zones. Finally, an Amazon ElastiCache for Redis cluster with Multi-AZ enabled provides a resilient, managed, and externalized session store, which is critical for a stateful, horizontally-scaled application.
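For the session tier specifically, a boto3 sketch of a Multi-AZ ElastiCache for Redis replication group might look like this. The group ID, node type, and subnet group name are hypothetical.

```python
import boto3

elasticache = boto3.client("elasticache")

# Managed Redis session store spanning multiple AZs with automatic failover.
elasticache.create_replication_group(
    ReplicationGroupId="php-sessions",
    ReplicationGroupDescription="Session store for the PHP application",
    Engine="redis",
    CacheNodeType="cache.t4g.small",
    NumNodeGroups=1,
    ReplicasPerNodeGroup=1,            # primary + replica in different AZs
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
    CacheSubnetGroupName="private-cache-subnets",
)
```

The PHP application would then point its session handler at the replication group's primary endpoint instead of the local Redis process, so any Fargate task can serve any user.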
References

1. Static Content Hosting: AWS Documentation, Amazon S3 User Guide, "Hosting a static website using Amazon S3". This guide details the standard practice of using S3 for static assets. The addition of CloudFront is a best practice for performance and availability, as described in the Amazon CloudFront Developer Guide, "Using CloudFront with an Amazon S3 origin".

2. Application Hosting: AWS Documentation, Amazon ECS Developer Guide, "What is Amazon Elastic Container Service?". This section explains how ECS with Fargate allows you to run containers without managing servers, and the Elastic Load Balancing User Guide details how an Application Load Balancer distributes traffic across targets (like Fargate tasks) in multiple Availability Zones for high availability.

3. Session Management: AWS Documentation, Amazon ElastiCache for Redis User Guide, "Minimizing downtime in ElastiCache with Multi-AZ". This document explains how enabling Multi-AZ provides enhanced high availability and automatic failover for the Redis cluster, making it suitable for critical session data.

4. Architectural Best Practices: AWS Whitepaper, AWS Well-Architected Framework, "Reliability Pillar". This whitepaper emphasizes designing for high availability by removing single points of failure and using managed services that offer built-in reliability, which aligns with the architecture proposed in option D.

Question 10

A company has an application that customers use to upload images to an Amazon S3 bucket. Each night, the company launches an Amazon EC2 Spot Fleet that processes all the images that the company received that day. The processing for each image takes 2 minutes and requires 512 MB of memory. A solutions architect needs to change the application to process the images when the images are uploaded. Which change will meet these requirements MOST cost-effectively?
Options
A: Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the queue and to process the images.
B: Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an EC2 Reserved Instance to read the messages from the queue and to process the images.
C: Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure a container instance in Amazon Elastic Container Service (Amazon ECS) to subscribe to the topic and to process the images.
D: Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification Service (Amazon SNS) topic. to subscribe to the topic and to process the images.
Correct Answer:
Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the queue and to process the images.
Explanation
This solution is the most cost-effective because it employs a fully serverless, event-driven architecture. AWS Lambda's pricing model is based on the number of requests and the duration of execution, measured in milliseconds. Since the application only incurs costs when an image is actually being processed, there is no charge for idle time. This "pay-for-what-you-use" model is ideal for sporadic workloads like image uploads. Using Amazon SQS to queue the events from S3 provides a durable and reliable buffer, ensuring that events are not lost if the processing function fails, and allows for controlled, asynchronous processing by the Lambda function. The memory (512 MB) and time (2 minutes) requirements are well within Lambda's default limits.
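A hedged sketch of the two pieces, the S3 notification configuration and the Lambda handler that drains the SQS queue, is shown below. The bucket name, queue ARN, and process_image helper are hypothetical.

```python
import json
import boto3

s3 = boto3.client("s3")

# Send every new-object event to the SQS queue. The queue's access policy
# must also allow the S3 service principal to send messages.
s3.put_bucket_notification_configuration(
    Bucket="customer-image-uploads",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:image-processing-queue",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

def handler(event, context):
    """Lambda handler invoked through an SQS event source mapping."""
    for record in event["Records"]:
        s3_event = json.loads(record["body"])            # the original S3 notification
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            process_image(bucket, key)                   # hypothetical processing step

def process_image(bucket, key):
    # Placeholder for the 2-minute, 512 MB image-processing logic.
    pass
```

With an SQS event source mapping between the queue and the function, Lambda scales with the upload rate and nothing is billed while the queue is empty.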
References

1. AWS Lambda Developer Guide, "Using AWS Lambda with Amazon S3": This guide details the event-driven pattern. It states, "Amazon S3 can publish events... when an object is created... You can have S3 invoke a Lambda function when an event occurs." It also recommends a more robust architecture: "To ensure that events are processed successfully, you can configure S3 to send events to an Amazon Simple Queue Service (Amazon SQS) queue."

2. AWS Well-Architected Framework, Cost Optimization Pillar (July 2023): This whitepaper outlines the principle of "Adopt a consumption model" (p. 7). It advises, "Pay only for the computing resources that you require... For example, AWS Lambda is an event-driven compute service that you can use to run code for virtually any type of application or backend service, with zero administration and paying only for what you use." This directly supports choosing Lambda over a provisioned EC2 instance.

3. Amazon S3 User Guide, "Configuring Amazon S3 Event Notifications": This document confirms that S3 can be configured to send event notifications to destinations like an Amazon SQS queue, an Amazon SNS topic, or an AWS Lambda function. This validates the trigger mechanism proposed in the correct answer.

4. AWS Lambda Pricing: The official pricing page confirms the pay-per-use model. It states, "With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration... it takes for your code to execute." This is the core reason for its cost-effectiveness in this scenario.
