Q: 1
A development team is using AWS CodeCommit to version control application code and AWS
CodePipeline to orchestrate software deployments. The team has decided to use a remote main
branch as the trigger for the pipeline to integrate code changes. A developer has pushed code
changes to the CodeCommit repository, but noticed that the pipeline did not react, even after 10
minutes.
Which of the following actions should be taken to troubleshoot this issue?
Options
Discussion
Probably A for this one. No EventBridge rule on the main branch and CodePipeline just stays idle, seen that before.
A makes more sense: The pipeline needs an EventBridge rule (used to be CloudWatch Events) to trigger on changes to the main branch. If it's missing or misconfigured, pushes won't start the pipeline. I've seen similar issues on practice tests. Pretty sure A is the right troubleshooting step here, but happy if anyone thinks otherwise.
A imo. Saw a similar question in exam reports, EventBridge rule missing for main branch always blocks automatic pipeline triggers.
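For anyone who wants to verify A against their own setup, here's a rough boto3 sketch of the EventBridge rule that has to exist for pushes to main to start the pipeline. The repo, pipeline, account, and role names are placeholders, not from the question:

import boto3

REGION = "us-east-1"
ACCOUNT = "111122223333"
RULE_NAME = "codecommit-main-to-pipeline"

events = boto3.client("events", region_name=REGION)

# Fire only when a commit updates the main branch of the repo.
events.put_rule(
    Name=RULE_NAME,
    EventPattern="""{
      "source": ["aws.codecommit"],
      "detail-type": ["CodeCommit Repository State Change"],
      "resources": ["arn:aws:codecommit:us-east-1:111122223333:my-repo"],
      "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["main"]
      }
    }""",
)

# Target the pipeline; the role must allow codepipeline:StartPipelineExecution.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{
        "Id": "start-pipeline",
        "Arn": f"arn:aws:codepipeline:{REGION}:{ACCOUNT}:my-pipeline",
        "RoleArn": f"arn:aws:iam::{ACCOUNT}:role/start-pipeline-role",
    }],
)

If the rule or its target role is missing, pushes land in the repo but the pipeline never starts, which matches the symptom in the question.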
Q: 2
A company uses an AWS CodeArtifact repository to store Python packages that the company
developed internally. A DevOps engineer needs to use AWS CodeDeploy to deploy an application to
an Amazon EC2 instance. The application uses a Python package that is stored in the CodeArtifact
repository. A BeforeInstall lifecycle event hook will install the package.
The DevOps engineer needs to grant the EC2 instance access to the CodeArtifact repository.
Which solution will meet this requirement?
Options
Discussion
Option D again with more AWS hoops to jump through just for package access, but that's their typical stance.
D, that's the usual AWS playbook: instance profile with the needed IAM perms, then run aws codeartifact login on the EC2 instance. Not seeing any scenario where C or B would actually apply here.
Probably D since you need an instance profile with the right IAM role for CodeArtifact. Resource-based policies (like B) don't grant an EC2 instance access on their own. Not 100% but that's AWS's usual pattern here.
B tbh, I remember seeing similar logic in some practice exams and the official guide. Official whitepapers are also worth reviewing.
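If D is right, the BeforeInstall hook ends up doing roughly the following. The usual shortcut is aws codeartifact login --tool pip, but this is a boto3 equivalent; the domain, repo, and package names are invented for the example:

import subprocess
import boto3

# Hypothetical CodeArtifact setup -- replace with your own.
DOMAIN, OWNER, REPO, REGION = "my-domain", "111122223333", "my-repo", "us-east-1"

# The instance profile's role must allow codeartifact:GetAuthorizationToken,
# codeartifact:ReadFromRepository, and sts:GetServiceBearerToken.
ca = boto3.client("codeartifact", region_name=REGION)
token = ca.get_authorization_token(domain=DOMAIN, domainOwner=OWNER)["authorizationToken"]

index_url = (
    f"https://aws:{token}@{DOMAIN}-{OWNER}.d.codeartifact."
    f"{REGION}.amazonaws.com/pypi/{REPO}/simple/"
)

# Install the internal package from the private index.
subprocess.run(["pip", "install", "--index-url", index_url, "my-internal-package"], check=True)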
Q: 3
A DevOps team uses AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy to deploy an
application. The application is a REST API that uses AWS Lambda functions and Amazon API Gateway.
Recent deployments have introduced errors that have affected many customers.
The DevOps team needs a solution that reverts to the most recent stable version of the application
when an error is detected. The solution must affect the fewest customers possible.
Which solution will meet these requirements with the MOST operational efficiency?
Options
Discussion
Ugh these AWS options get so wordy. B, canary + automatic rollback is what the exam wants here.
Probably B, since canary deployments with automatic rollback make rollbacks smoother and reduce impact. Pretty sure that's how the official guide recommends handling Lambda updates too. Anyone have other sources or labs saying different?
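A rough boto3 sketch of what B describes: a Lambda deployment group using a canary config with alarm-based automatic rollback. The app, role, and alarm names are assumptions, and the CodeDeploy application would need to have been created with the Lambda compute platform:

import boto3

cd = boto3.client("codedeploy", region_name="us-east-1")

cd.create_deployment_group(
    applicationName="my-lambda-app",
    deploymentGroupName="api-canary",
    serviceRoleArn="arn:aws:iam::111122223333:role/codedeploy-lambda-role",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    # Shift 10% of traffic, wait 5 minutes, then shift the rest.
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    # Roll back to the previous version if the error alarm fires.
    alarmConfiguration={
        "enabled": True,
        "alarms": [{"name": "api-5xx-errors"}],
    },
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
)

The canary keeps the blast radius at 10% of traffic during the bake window, which is what "affect the fewest customers possible" is pointing at.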
Q: 4
A company is building a new pipeline by using AWS CodePipeline and AWS CodeBuild in a build
account. The pipeline consists of two stages. The first stage is a CodeBuild job to build and package
an AWS Lambda function. The second stage consists of deployment actions that operate on two
different AWS accounts: a development environment account and a production environment account.
The deployment stages use the AWS CloudFormation action that CodePipeline invokes to deploy
the infrastructure that the Lambda function requires.
A DevOps engineer creates the CodePipeline pipeline and configures the pipeline to encrypt build
artifacts by using the AWS Key Management Service (AWS KMS) AWS managed key for Amazon S3
(the aws/s3 key). The artifacts are stored in an S3 bucket. When the pipeline runs, the
CloudFormation actions fail with an access denied error.
Which combination of actions must the DevOps engineer perform to resolve this error? (Select
TWO.)
Options
Discussion
B/E? Using a customer managed KMS key (B) makes sense for controlling decrypt permissions, and E nails the IAM role plus S3 bucket policy part for cross-account CloudFormation. That's usually what AWS recommends, I think. Let me know if you see it differently!
Maybe B and E. Letting CloudFormation decrypt artifacts with a customer managed KMS key (B) is key here, and E covers the bucket policy plus cross-account IAM setup for the CloudFormation actions. Saw a similar scenario in practice exams; this lines up with best-practice permissions.
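If B/E is right, the core of the fix is a customer managed key whose policy lets the other accounts decrypt the artifacts, since the policy on the aws/s3 AWS managed key can't be edited for cross-account access. A hedged sketch with placeholder account IDs (111... is the build account, 222.../333... are dev and prod):

import json
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Keep the build account as the key administrator.
            "Sid": "AdminAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Let the dev/prod deployment roles decrypt the build artifacts.
            "Sid": "CrossAccountDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": [
                "arn:aws:iam::222222222222:root",
                "arn:aws:iam::333333333333:root",
            ]},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

key = kms.create_key(Policy=json.dumps(key_policy), Description="CodePipeline artifact key")
print(key["KeyMetadata"]["Arn"])  # reference this ARN in the pipeline's artifact store

The S3 bucket policy on the artifact bucket then has to grant the same cross-account roles read access, which is the other half that E covers.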
Q: 5
A DevOps engineer has automated a web service deployment by using AWS CodePipeline with the
following steps:
1) An AWS CodeBuild project compiles the deployment artifact and runs unit tests.
2) An AWS CodeDeploy deployment group deploys the web service to Amazon EC2 instances in the
staging environment.
3) A CodeDeploy deployment group deploys the web service to EC2 instances in the production
environment.
The quality assurance (QA) team requests permission to inspect the build artifact before the
deployment to the production environment occurs. The QA team wants to run an internal
penetration testing tool to conduct manual tests. The tool will be invoked by a REST API call.
Which combination of actions should the DevOps engineer take to fulfill this request? (Choose two.)
Options
Discussion
A is wrong, D. Had something like this in a mock; the pipeline can call the REST API directly, so Lambda isn't really needed. Pretty sure D covers the automation part, but not 100% on A.
A and E imo, saw a similar question in an exam report. Manual approval for QA and Lambda to trigger their tool fits.
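Assuming the Lambda-invoke route, the handler would look roughly like this. The pen-test tool's endpoint is obviously made up; the part that matters is reporting the result back to CodePipeline so the stage passes or fails:

import json
import urllib.request
import boto3

cp = boto3.client("codepipeline")

def handler(event, context):
    # CodePipeline passes the job ID in the invoke event.
    job_id = event["CodePipeline.job"]["id"]
    try:
        # Hypothetical endpoint for the QA team's penetration testing tool.
        req = urllib.request.Request(
            "https://pentest.example.internal/api/scans",
            data=json.dumps({"target": "staging"}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            if resp.status >= 300:
                raise RuntimeError(f"scan API returned {resp.status}")
        cp.put_job_success_result(jobId=job_id)
    except Exception as exc:
        cp.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )

A manual approval action before the production stage then gives QA the pause they asked for to inspect the artifact.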
Q: 6
A company has deployed an application in a production VPC in a single AWS account. The application
is popular and is experiencing heavy usage. The company’s security team wants to add additional
security, such as AWS WAF, to the application deployment. However, the application's product
manager is concerned about cost and does not want to approve the change unless the security team
can prove that additional security is necessary.
The security team believes that some of the application's demand might come from users that have
IP addresses that are on a deny list. The security team provides the deny list to a DevOps engineer. If
any of the IP addresses on the deny list access the application, the security team wants to receive
automated notification in near real time so that the security team can document that the application
needs additional security. The DevOps engineer creates a VPC flow log for the production VPC.
Which set of additional steps should the DevOps engineer take to meet these requirements MOST
cost-effectively?
Options
Discussion
Option A. Similar practice questions point to CloudWatch Logs with a metric filter as the fastest and cheapest alerting option for this scenario.
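For the metric-filter approach in A, the wiring is roughly the following. The log group name, IP, and SNS topic are placeholders; a real deny list would be expanded into the pattern or split across several filters:

import boto3

logs = boto3.client("logs", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

LOG_GROUP = "/vpc/flow-logs/production"

# The default flow log format is space-delimited; match on the srcaddr field.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="deny-list-hits",
    filterPattern='[version, account, eni, source="198.51.100.1", dest, srcport, '
                  'dstport, protocol, packets, bytes, start, end, action, status]',
    metricTransformations=[{
        "metricName": "DenyListHits",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Notify the security team in near real time on any hit.
cw.put_metric_alarm(
    AlarmName="deny-list-access",
    Namespace="Security",
    MetricName="DenyListHits",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)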
Q: 7
A company that runs many workloads on AWS has an Amazon EBS spend that has increased over
time. The DevOps team notices there are many unattached
EBS volumes. Although some workloads legitimately detach volumes, any volume that has been
unattached for more than 14 days is stale and no longer needed. A DevOps engineer has been tasked
with creating automation that deletes EBS volumes that have been unattached for 14 days.
Which solution will accomplish this?
Options
Discussion
Pretty sure I ran into a similar one mentioned in recent exam reports; most picks went with C for the Lambda tagging-and-delete approach.
Probably C. Data Lifecycle Manager (B) doesn't support deleting unattached EBS volumes, that's a common trap option. The Lambda with tagging logic in C is the way most real-world automation handles this, pretty sure. Happy to be corrected though.
It's C. Data Lifecycle Manager (B) doesn't actually handle unattached EBS volumes, so that's misleading.
B tbh, saw a similar question in exam reports with Data Lifecycle Manager as the answer.
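A minimal sketch of the tag-then-delete Lambda that C describes, assuming it runs on a daily schedule; the UnattachedSince tag key is an invented convention:

import datetime
import boto3

ec2 = boto3.client("ec2")
STALE_AFTER = datetime.timedelta(days=14)

def handler(event, context):
    today = datetime.datetime.now(datetime.timezone.utc)
    paginator = ec2.get_paginator("describe_volumes")
    # "available" means the volume is not attached to any instance.
    for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
        for vol in page["Volumes"]:
            tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
            seen = tags.get("UnattachedSince")
            if seen is None:
                # First time we see it unattached: stamp it and move on.
                ec2.create_tags(
                    Resources=[vol["VolumeId"]],
                    Tags=[{"Key": "UnattachedSince", "Value": today.date().isoformat()}],
                )
            elif today - datetime.datetime.fromisoformat(seen).replace(
                    tzinfo=datetime.timezone.utc) >= STALE_AFTER:
                ec2.delete_volume(VolumeId=vol["VolumeId"])

The two-pass tagging matters: a volume that was just detached gets a grace period instead of being deleted on first sight.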
Q: 8
A company has multiple development teams in different business units that work in a shared single
AWS account. All Amazon EC2 resources that are created in the account must include tags that specify
who created the resources. The tagging must occur within the first hour of resource creation.
A DevOps engineer needs to add tags to the created resources that include the user ID that created
the resource and the cost center ID. The DevOps engineer configures an AWS Lambda function with
the cost center mappings to tag the resources. The DevOps engineer also sets up AWS CloudTrail in
the AWS account. An Amazon S3 bucket stores the CloudTrail event logs.
Which solution will meet the tagging requirements?
Options
Discussion
D. EventBridge lets you catch those EC2 API calls from CloudTrail in near real time, so tagging happens quickly and automatically. The other options don't hook directly into resource creation events the way D does. Pretty sure this is the most efficient solution, but open to seeing if someone had luck with C.
I think this is the same as a common question in AWS practice sets. D matches the tagging and automation requirements best.
It's D since EventBridge can take CloudTrail EC2 events right as they happen and send them to Lambda, so you get tagging almost immediately. C is tempting but introduces delay (since it's only hourly) and might miss the 1-hour window. A and B are just S3 events, not relevant for EC2 resource creation. Pretty sure D meets the timing and automation requirement best, unless AWS changed something.
Maybe D here. EventBridge can directly catch EC2 events from CloudTrail and trigger the Lambda fast, so tags get added within the hour. C looks possible but probably slower since it scans logs every hour. Correct me if I'm missing something.
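Rough sketch of the D wiring: an EventBridge rule on the CloudTrail RunInstances event invoking a tagging Lambda. The cost-center mapping and tag keys are invented for the example:

import boto3

ec2 = boto3.client("ec2")

# Assumed EventBridge rule pattern (CloudTrail must be logging management events):
# {"source": ["aws.ec2"],
#  "detail-type": ["AWS API Call via CloudTrail"],
#  "detail": {"eventName": ["RunInstances"]}}

# Hypothetical mapping maintained by the DevOps engineer.
COST_CENTERS = {"alice": "CC-1001", "bob": "CC-2002"}

def handler(event, context):
    detail = event["detail"]
    # The CloudTrail record carries who made the call ...
    user = detail["userIdentity"].get("userName", detail["userIdentity"]["arn"])
    # ... and the IDs of the instances that were launched.
    ids = [i["instanceId"] for i in detail["responseElements"]["instancesSet"]["items"]]
    ec2.create_tags(
        Resources=ids,
        Tags=[
            {"Key": "CreatedBy", "Value": user},
            {"Key": "CostCenter", "Value": COST_CENTERS.get(user, "unknown")},
        ],
    )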
Q: 9
A production account has a requirement that any Amazon EC2 instance that has been logged in to
manually must be terminated within 24 hours. All applications in the production account are using
Auto Scaling groups with the Amazon CloudWatch Logs agent configured.
How can this process be automated?
Options
Discussion
D looks right, but only if you don't need to track login method. If you had to react to console logins only, you might need extra parsing and maybe A would edge ahead. Here, D hits all requirements assuming logs are properly ingested.
Had something like this in a mock and picked D. Only D wires up the whole thing automatically: Lambda gets triggered by log subscription, tags the instance, EventBridge rule handles termination without any manual steps. Pretty sure that's what AWS wants here, but open if someone thinks otherwise.
A is wrong, D. Only D fully automates the process without needing humans, tagging and cleanup both handled by Lambda/EventBridge.
D tbh. B is a trap since it needs manual intervention.
B
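For D, the tagging half looks roughly like this. One assumption worth flagging: the instance ID is recovered from the log stream name, which is the CloudWatch agent's common default ({instance_id}) but not guaranteed in every setup:

import base64
import gzip
import json
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # CloudWatch Logs subscriptions deliver a gzipped, base64-encoded payload.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    # Assumption: the log stream is named after the instance ID (agent default).
    instance_id = payload["logStream"]
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "ManualLogin", "Value": "true"}],
    )
    # A separate scheduled EventBridge rule then finds instances carrying this
    # tag for more than 24 hours and terminates them -- no manual step needed.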
Q: 10
A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being
used in the pipeline to deploy an application to Amazon Elastic Container Service (Amazon ECS) using
the blue/green deployment model. The company wants to implement scripts to test the green
version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If
errors are discovered during these tests, the application must be rolled back.
Which strategy will meet these requirements?
Options
Discussion
It's C. Exam guides and AWS docs both highlight AfterAllowTestTraffic for blue/green validation steps like this.
Probably C. AfterAllowTestTraffic is the right lifecycle hook for testing the green environment before any production traffic, and it supports rollback if your Lambda exits with an error. D is a common trap but it's too late in the process (after all traffic is shifted).
For this scenario, wouldn't C be the best fit? Using the AfterAllowTestTraffic hook in the AppSpec lets you run validation scripts right after test traffic goes to the green environment, before shifting production traffic. If tests fail, CodeDeploy can roll back automatically. That matches exactly with blue/green deployment patterns in ECS. Pretty sure this is what AWS recommends too but open if someone spots a gotcha.
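Sketch of the C approach: a validation Lambda referenced from the AppSpec's AfterAllowTestTraffic hook that reports its result back to CodeDeploy. The test suite itself is a placeholder:

import boto3

cd = boto3.client("codedeploy")

# Referenced from the ECS appspec, e.g.:
#   Hooks:
#     - AfterAllowTestTraffic: "arn:aws:lambda:...:function:validate-green"

def handler(event, context):
    deployment_id = event["DeploymentId"]
    hook_id = event["LifecycleEventHookExecutionId"]
    try:
        run_smoke_tests()  # hypothetical: hits the green tasks via the test listener
        status = "Succeeded"
    except Exception:
        # Reporting Failed makes CodeDeploy stop the deployment and roll back to blue.
        status = "Failed"
    cd.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_id,
        status=status,
    )

def run_smoke_tests():
    ...  # placeholder for the team's 5-minute test suite

Because the hook fires while only test traffic reaches the green tasks, a failure here never touches production users, which is the whole point of the question.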