I don’t think it’s A, even though StreamViewType matters for what data you get in the stream, it won’t set up the trigger by itself. You still need to configure event source mapping so Lambda will actually poll the DynamoDB stream and invoke your function. B fixes that missing link. Pretty sure that’s what AWS expects here, but open to corrections if I’m missing something subtle.
Q: 11
A developer has been asked to create an AWS Lambda function that is invoked any time updates are
made to items in an Amazon DynamoDB table. The function has been created and appropriate
permissions have been added to the Lambda execution role. Amazon DynamoDB Streams have been
enabled for the table, but the function is still not being invoked.
Which option would enable DynamoDB table updates to invoke the Lambda function?
Options
Discussion
Definitely B. You need to set up an event source mapping so Lambda knows to poll the DynamoDB stream, otherwise it won't trigger. The official AWS developer guide is pretty clear about this step if you want more details.
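For anyone who wants the concrete missing step: the event source mapping is one API call. A rough sketch of the parameters you'd pass to boto3's `create_event_source_mapping` (the stream ARN and function name below are made up):

```python
# Parameters for lambda_client.create_event_source_mapping(**params) in boto3.
# The stream ARN and function name are placeholders.
params = {
    "EventSourceArn": (
        "arn:aws:dynamodb:us-east-1:123456789012:"
        "table/Items/stream/2024-01-01T00:00:00.000"
    ),
    "FunctionName": "process-item-updates",
    "StartingPosition": "LATEST",  # DynamoDB streams allow LATEST or TRIM_HORIZON
    "BatchSize": 100,              # stream records per invocation
}
```

Once the mapping exists, Lambda polls the stream itself; no other trigger wiring is needed.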
A is the best choice, Secrets Manager is built for secure secret storage and you can grab the API key at runtime with minimal perf impact. S3 or DynamoDB aren't meant for secrets. Pretty sure official guide and AWS labs cover this.
Be respectful. No spam.
Q: 12
A company is offering APIs as a service over the internet to provide unauthenticated read access to
statistical information that is updated daily. The company uses Amazon API Gateway and AWS
Lambda to develop the APIs. The service has become popular, and the company wants to enhance
the responsiveness of the APIs.
Which action can help the company achieve this goal?
Options
Discussion
Makes sense to pick A. API caching in API Gateway is great for read-heavy, rarely-changing data since it cuts down on backend calls and speeds up responses. Not 100% but that's how I'd tackle this type of scenario.
C or A? Had something like this in a mock and pretty sure A is the way to go.
It's A, pretty sure. BatchGetItem lets you fetch from both tables in one call, so that cuts down network chattiness. The others either require extra queries or scans, which aren't efficient. Bit confused if there's a smarter way but this matches what I've seen in practice exams. Anyone else get stuck on this?
A is right here, not D. Usage plans and API keys (D) are more about controlling access and throttling, not about speeding up response time. Only caching in API Gateway (A) actually improves responsiveness for read-heavy unauthenticated APIs. Unless I'm missing some edge case, it's A.
API caching gives you faster responses since data updates are only daily. A
A imo. People might pick C by mistake because CORS is about access, but responsiveness will improve way more with caching (A).
A tbh
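Since everyone lands on A: turning on the stage cache is a single `update_stage` call. A sketch of the patch operations (API id, stage name, and TTL choice are my assumptions, not from the question):

```python
# Patch operations for apigateway_client.update_stage(
#     restApiId="abc123", stageName="prod", patchOperations=patch_ops)
# in boto3. The API id and stage name are placeholders.
patch_ops = [
    {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
    {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # cache size in GB
    # Data only updates daily, so the maximum TTL (3600 s) is a reasonable pick:
    {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "3600"},
]
```

The `/*/*/...` path applies the caching setting to all methods on the stage.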
Q: 13
A company has an application that consists of different microservices that run inside an AWS
account. The microservices are running in containers inside a single VPC. The number of
microservices is constantly increasing. A developer must create a central logging solution for
application logs.
Options
Discussion
C doesn’t actually log app-level data, just VPC network flows. Has to be A for application logs specifically.
A is the way to go here. Only CloudWatch Logs gives you proper app-level log centralization for microservices, especially when containers are involved. C just tracks network flows, not actual logs, and D’s for service discovery, not logging. Pretty sure about this but happy to hear other logic if anyone disagrees.
D imo, since AWS Cloud Map handles service discovery and maps microservices. Maybe a trap here with A but feels like D fits centralizing their interactions.
A
C vs D? Not sure which one is right, kind of looks similar to me. Can someone confirm?
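To make the A argument concrete: for containers, app logs usually reach CloudWatch Logs via the awslogs log driver in the task definition. A minimal fragment (the group name, region, and prefix are assumptions):

```python
# logConfiguration fragment for an ECS container definition; the group name,
# region, and prefix are placeholders.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/microservices",   # one central log group for all services
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "svc",          # separate log stream per container
    },
}
```

Every new microservice writes into the same log group, which is the "central logging" part of the answer.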
Q: 14
A company caches session information for a web application in an Amazon DynamoDB table. The
company wants an automated way to delete old items from the table.
What is the simplest way to do this?
Options
Discussion
B. That's what the official AWS docs and exam guides both suggest for this scenario. TTL attribute plus enabling the feature means no scripts needed. I saw similar on practice tests, pretty sure that's what they're after.
Why not just use TTL? Option B mentions enabling it with an expiration attribute, which sounds like the built-in way.
B. Had something like this in a mock and TTL is the easiest way to handle it.
B tbh, DynamoDB TTL makes this super easy without extra scripts or weird table tricks.
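It really is one call plus an epoch-seconds attribute on each item. A sketch of the `update_time_to_live` parameters (table and attribute names are made up):

```python
import time

# Parameters for dynamodb_client.update_time_to_live(**params) in boto3;
# the table and attribute names are placeholders.
params = {
    "TableName": "Sessions",
    "TimeToLiveSpecification": {
        "Enabled": True,
        "AttributeName": "expires_at",  # must hold a Unix epoch time in seconds
    },
}

# Each session item then just carries the attribute, e.g. expire in 24 hours:
expires_at = int(time.time()) + 24 * 60 * 60
```

DynamoDB deletes expired items in the background at no cost, which is why no scripts or scans are needed.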
D imo, matches what I've seen in similar practice sets. Question is really clear about rollback needs.
Q: 15
A company is running Amazon EC2 instances in multiple AWS accounts. A developer needs to
implement an application that collects all the lifecycle events of the EC2 instances. The application
needs to store the lifecycle events in a single Amazon Simple Queue Service (Amazon SQS) queue in
the company's main AWS account for further processing.
Which solution will meet these requirements?
Options
Discussion
Seriously, AWS makes this more complicated than it should be. Probably D.
D imo, B is tempting but event bus aggregation is standard for cross-account collection. Open to being wrong though.
It's D because that's the standard for cross-account event collection. You set permissions on the main account's EventBridge, then rules in each account forward EC2 lifecycle events there, and from there to SQS. Pretty sure AWS docs back this up, unless I missed some new feature.
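To spell out the three pieces of the D setup, here's a rough sketch (all account IDs, the bus name, and ARNs are placeholders I made up):

```python
# Sketch of cross-account EC2 lifecycle event collection; account IDs, bus
# name, and ARNs are placeholders.

# 1) Main account: allow a member account to put events on the central bus.
#    -> events_client.put_permission(**permission)
permission = {
    "EventBusName": "central-bus",
    "Action": "events:PutEvents",
    "Principal": "222233334444",                 # member account ID
    "StatementId": "allow-member-222233334444",
}

# 2) Member account: a rule matching EC2 lifecycle events, with the main
#    account's event bus as the rule's target.
rule_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
}
target = {
    "Id": "forward-to-central",
    "Arn": "arn:aws:events:us-east-1:111122223333:event-bus/central-bus",
}

# 3) Main account: a rule on central-bus whose target is the SQS queue.
```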
C or B. Aliases let you control weights but I thought you could also do weighted routing in API Gateway stages, so B might work depending on setup. Not 100% though, anyone see API Gateway use that way?
Option A and C make sense here. You need to publish a new version first (A), then set up weighted routing using an alias (C) to send 10% of traffic to the updated code. Pretty sure that's the needed combo, unless I missed something.
Q: 16
An IAM role is attached to an Amazon EC2 instance that explicitly denies access to all Amazon S3 API
actions. The EC2 instance credentials file specifies the IAM access key and secret access key, which
allow full administrative access.
Given that multiple modes of IAM access are present for this EC2 instance, which of the following is
correct?
Options
Discussion
Option C. The trap here is thinking the instance profile deny overrides everything, but if there are admin keys in the credentials file they'll get used first (provider chain order). So full S3 access still works unless you remove those keys. Seen this come up on other practice exams, too.
Makes sense to go with B. Lambda@Edge requires the function to be created in us-east-1, no matter where the CloudFront or other resources are. Pretty sure that’s what causes the stack failure here. Agree?
B imo. Lambda@Edge deploys have to be in us-east-1 even if your stack is in another region. Seen this issue mentioned before.
C tbh, trap is thinking the instance role's deny wins but creds file takes priority. Seen similar in exam reports.
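The precedence point is just the SDK's credential provider chain: environment variables, then the shared credentials file, then the instance profile. A toy model of that lookup order (this is not the real SDK code, just an illustration):

```python
# Toy model of the AWS SDK credential provider chain order; not the actual SDK.
def resolve_credentials(env_vars, credentials_file, instance_profile):
    """Return the name of the first credential source that has keys, in chain order."""
    for name, source in [
        ("environment", env_vars),
        ("shared credentials file", credentials_file),
        ("instance profile", instance_profile),
    ]:
        if source:
            return name
    return None

# Admin keys in the credentials file win over the instance role's S3 deny,
# because the chain never reaches the instance profile:
picked = resolve_credentials(
    env_vars=None,
    credentials_file={"aws_access_key_id": "AKIA-PLACEHOLDER"},  # made-up key
    instance_profile={"role": "deny-s3-role"},
)
```

So the deny in the role is never evaluated for those calls; the static admin keys are what get signed with.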
Q: 17
A developer previously deployed an AWS Lambda function as a .zip package. The developer needs to
deploy the Lambda function as a container.
Options
Discussion
A, not D. Had something like this in a mock and it's definitely A for the Lambda container deployment steps.
Not quite D, I think A is correct for AWS Lambda container deployment. You have to update the existing function's config with both the ECR repo URI and image tag. D forgets the tag, which will cause issues in practice. Similar question came up in my practice set.
Pretty sure it’s A. You need to build the image, push to Amazon ECR, then update the Lambda config to reference the ECR repo URI and tag. That matches real AWS container-based Lambda deployment steps. Let me know if I'm missing something.
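After the image is pushed to ECR, the Lambda-side step is one `update_function_code` call referencing the image URI with its tag. A sketch (the function name, registry URI, and tag are placeholders):

```python
# Parameters for lambda_client.update_function_code(**params) in boto3;
# the function name, ECR URI, and tag are placeholders.
params = {
    "FunctionName": "my-function",
    # Full ECR image URI, including the tag the comments above mention:
    "ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-function:v2",
}
```

Leaving the tag off is exactly the gap in option D that the comment above points out.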
Q: 18
A developer is working on an ecommerce application that stores data in an Amazon RDS for MySQL
cluster. The developer needs to implement a caching layer for the application to retrieve information
about the most viewed products.
Which solution will meet these requirements?
Options
Discussion
B. ElastiCache (Redis) is meant for a caching layer, especially for read-heavy data like most viewed products. D just adds a standby for failover, doesn't cache anything. The DynamoDB DAX option is a trap since it's not for MySQL/RDS. Pretty sure B fits the scenario best.
B or C? Had something like this in a mock, stuck between those two.
C vs B? Always see AWS pushing DAX in practice tests for these caching layers, even though it's really for DynamoDB not MySQL. Wouldn't rule C work here too?
Seen similar on official practice, pretty sure it's B here.
I’d say B since ElastiCache Redis is built exactly for this use case, not DAX and not a standby replica.
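The usual shape with ElastiCache here is cache-aside: check Redis first, fall back to MySQL on a miss, then populate the cache with a TTL. A toy sketch with a plain dict standing in for the Redis client (the function and key names are made up):

```python
# Cache-aside sketch; `cache` stands in for a Redis client and
# `query_most_viewed` for the RDS query. Both are placeholders.
cache = {}

def query_most_viewed():
    # Stand-in for: SELECT name FROM products ORDER BY views DESC LIMIT 10
    return ["product-42", "product-7"]

def get_most_viewed():
    key = "most_viewed_products"
    if key in cache:              # cache hit: skip the database entirely
        return cache[key]
    result = query_most_viewed()  # cache miss: query MySQL once
    cache[key] = result           # with real Redis: r.set(key, value, ex=60)
    return result
```

With real clients the `ex=60` TTL keeps "most viewed" reasonably fresh while absorbing the read traffic.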
B or D? D feels right if they want high availability, but I think it's a common trap here.
D imo. Provisioned concurrency is made for this, since it keeps Lambda environments pre-warmed so you avoid unpredictable cold start delays. Pretty sure that's the only way to guarantee setup happens before invocation. Saw this asked in some exam reports too.
Q: 19
A developer is creating an application that will give users the ability to store photos from their
cellphones in the cloud. The application needs to support tens of thousands of users. The application
uses an Amazon API Gateway REST API that is integrated with AWS Lambda functions to process the
photos. The application stores details about the photos in Amazon DynamoDB.
Users need to create an account to access the application. In the application, users must be able to
upload photos and retrieve previously uploaded photos. The photos will range in size from 300 KB to
5 MB.
Which solution will meet these requirements with the LEAST operational overhead?
Options
Discussion
B, not A. Had something like this in a mock before, and DynamoDB can't handle files that big (the item limit is 400 KB), so S3 is the way to go for actual photo storage. Cognito plus API Gateway keeps it serverless and low maintenance. Anyone disagree?
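Since the photos themselves land in S3, the low-overhead upload path is usually a presigned URL handed out by the Lambda, with only metadata going to DynamoDB. A sketch of the presigned-URL call arguments (bucket and key names are assumptions):

```python
# Arguments for s3_client.generate_presigned_url(**presign_args) in boto3;
# the bucket and key are placeholders.
presign_args = {
    "ClientMethod": "put_object",
    "Params": {"Bucket": "photo-uploads", "Key": "users/alice/photo1.jpg"},
    "ExpiresIn": 900,  # URL valid for 15 minutes
}
```

The phone then PUTs the 300 KB to 5 MB photo straight to S3, so the payload never has to squeeze through API Gateway or Lambda.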
Q: 20
A development team has an Amazon API Gateway REST API that is backed by an AWS Lambda
function.
Users have reported performance issues for the Lambda function. The development team identified
the source of the issues as a cold start of the Lambda function. The development team needs to
reduce the time needed for the Lambda function to initialize.
Which solution will meet this requirement?
Options
Discussion
D. C is tempting but doesn't really prevent cold starts, just speeds them up a tiny bit. Provisioned concurrency (D) actually handles the cold start directly. Seen similar on practice tests.
C doesn't really solve cold starts, just makes the function run faster after it starts. D is the way to go because provisioned concurrency pre-warms Lambda instances. Pretty sure that's what AWS recommends for this scenario, but open to other views.
D is the right call. Provisioned concurrency actually keeps Lambda warm, so cold starts get eliminated. C (upping memory) helps a bit but doesn’t guarantee a warm start like D. Pretty sure that’s what AWS exams expect.
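For anyone wanting the concrete call behind D: provisioned concurrency is configured per version or alias, never `$LATEST`. A sketch of the parameters (the function name, alias, and count are my own placeholders):

```python
# Parameters for lambda_client.put_provisioned_concurrency_config(**params)
# in boto3; the function name and alias are placeholders. Note the Qualifier
# must be a published version or an alias, not $LATEST.
params = {
    "FunctionName": "api-backend",
    "Qualifier": "live",                    # alias pointing at a published version
    "ProvisionedConcurrentExecutions": 10,  # environments kept initialized and warm
}
```

Those ten environments run init ahead of time, which is why requests hitting them skip the cold start entirely.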
D imo, storing creds in an encrypted .txt on S3 should work since S3 can encrypt files and you can control access. I don't see why you'd need Parameter Store here. Anyone else thinks D could fit?