Q: 1
A development team is using AWS CodeCommit to version control application code and AWS
CodePipeline to orchestrate software deployments. The team has decided to use a remote main
branch as the trigger for the pipeline to integrate code changes. A developer has pushed code
changes to the CodeCommit repository, but noticed that the pipeline had no reaction, even after 10
minutes.
Which of the following actions should be taken to troubleshoot this issue?
Options
Discussion
Option A. D is a trap: there are no logs to check if the event never triggers in the first place.
I see the logic, but wouldn't B be possible too? If the pipeline role can't access CodeCommit, it wouldn't run either.
A . If the pipeline isn't triggering at all, it's usually because the EventBridge rule for the branch isn't set up or is misconfigured. D is more for when a failure happens after a trigger. Pretty sure on this but open to other thoughts.
A imo
A or D? But if the pipeline doesn't react at all (not even a failed start), that's almost always a missing or misconfigured EventBridge rule for that branch. CloudWatch logs (D) only help if something triggered. Seen similar in other exam reports.
B
A imo. If CodePipeline isn't reacting at all when code is pushed, usually means the event trigger (EventBridge rule) is missing for that branch. The pipeline role permissions (B) would only matter if it tries to run and then fails, but here nothing starts. Pretty sure it's A, open to other explanations if someone disagrees.
Makes most sense to check A here.
Probably A for this one. No EventBridge rule on the main branch and CodePipeline just stays idle, seen that before.
A makes more sense: The pipeline needs an EventBridge rule (used to be CloudWatch Events) to trigger on changes to the main branch. If it's missing or misconfigured, pushes won't start the pipeline. I've seen similar issues on practice tests. Pretty sure A is the right troubleshooting step here, but happy if anyone thinks otherwise.
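For anyone who wants to see what the consensus answer is actually checking for, here is a hedged sketch of the EventBridge event pattern a CodePipeline source trigger for a CodeCommit branch typically matches on. The repository ARN, region, and account ID are placeholders, not values from the question.

```python
# Sketch of the EventBridge event pattern behind a CodeCommit-triggered
# pipeline. If no rule with a pattern like this exists (or referenceName
# doesn't match the branch), pushes never start an execution.
event_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    # Placeholder ARN for illustration only.
    "resources": ["arn:aws:codecommit:us-east-1:111111111111:my-repo"],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["main"],
    },
}
```

Troubleshooting step one is confirming a rule with this shape exists and targets the pipeline; only after something triggers do execution logs become useful.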
Be respectful. No spam.
Q: 2
A company uses an AWS CodeArtifact repository to store Python packages that the company
developed internally. A DevOps engineer needs to use AWS CodeDeploy to deploy an application to
an Amazon EC2 instance. The application uses a Python package that is stored in the CodeArtifact
repository. A BeforeInstall lifecycle event hook will install the package.
The DevOps engineer needs to grant the EC2 instance access to the CodeArtifact repository.
Which solution will meet this requirement?
Options
Discussion
Option D again with more AWS hoops to jump through just for package access, but that's their typical stance.
Maybe B, since a resource-based policy can allow access from specific principals, and EC2 does have an identity. I think it feels more straightforward to just grant read permissions this way, but not 100% sure if CodeArtifact supports that for EC2 directly. D is definitely the official IAM play, though.
D . Using an instance profile with the right IAM permissions is the typical way for EC2 to hit CodeArtifact, plus aws codeartifact login. Not seeing how B would work for a direct EC2 principal. Anyone disagree?
D , since EC2 principal needs an instance profile unless the repo is accessed by something outside the account.
D. Resource-based policy (B) is a bit of a trap here-CodeArtifact doesn’t let you grant direct EC2 access that way. Instance profile plus aws codeartifact login is what AWS expects for this scenario, pretty sure. Disagree?
Looks like D is right, but I could see people picking B since resource-based policies sometimes come up on these kinds of questions. Not fully sure though, AWS wording gets tricky with CodeArtifact.
D here. Instance profile with the right IAM role is standard for EC2 to pull from CodeArtifact, then just use aws codeartifact login. Haven't seen ACLs used with CodeArtifact, so C doesn't work. Pretty sure this lines up, but let me know if I missed something.
On the EC2 principal question, does AWS even let you target an instance directly in a CodeArtifact resource policy, or would that only work for things like Lambda? Just want to double-check the use of B.
C/D? Not 100%, question leans D but B seems possible for some setups.
D , saw this covered in the official AWS guide and on practice tests too.
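To make the instance-profile answer concrete, here is a minimal sketch of the IAM policy document such a role would carry so the BeforeInstall hook can run `aws codeartifact login` and install the package. The broad `"Resource": "*"` is for illustration; a real policy would scope to the domain and repository ARNs.

```python
# Minimal IAM policy sketch for an EC2 instance profile pulling from
# CodeArtifact. Resource scoping is intentionally loose here (assumption).
instance_profile_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codeartifact:GetAuthorizationToken",
                "codeartifact:GetRepositoryEndpoint",
                "codeartifact:ReadFromRepository",
            ],
            "Resource": "*",
        },
        {
            # GetAuthorizationToken also needs this STS permission,
            # restricted to the CodeArtifact service.
            "Effect": "Allow",
            "Action": "sts:GetServiceBearerToken",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "sts:AWSServiceName": "codeartifact.amazonaws.com"
                }
            },
        },
    ],
}
```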
Q: 3
A DevOps team uses AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy to deploy an
application. The application is a REST API that uses AWS Lambda functions and Amazon API Gateway.
Recent deployments have introduced errors that have affected many customers.
The DevOps team needs a solution that reverts to the most recent stable version of the application
when an error is detected. The solution must affect the fewest customers possible.
Which solution will meet these requirements with the MOST operational efficiency?
Options
Discussion
B . That canary config with auto rollback is shown a lot in the official guide and practice exams for limiting customer impact. If you want to go deeper, AWS whitepapers or labs are helpful for this topic.
It's B here. Canary10Percent10Minutes limits how many users get the buggy version if something goes wrong, and automatic rollback kicks in fast with the right CloudWatch alarm. Saw this setup in some practice sets and it matches AWS best practices for efficiency. Anyone see a reason to go with A instead? Pretty sure B is what they want.
B auto rollback with canary is most efficient here.
Maybe B. Similar question came up in my practice set, and canary with auto rollback was the right approach there too.
A is better, not B
B tbh.
Ugh these AWS options get so wordy. B, canary + automatic rollback is what the exam wants here.
Hmm, I get why folks pick A since AllAtOnce with auto rollback sounds fast, but that means every user gets hit if there's a bad deploy. Pretty sure B is better for limiting customer impact-canary plus automatic rollback. Let me know if I’m missing something though.
B makes sense here. Canary deployments like LambdaCanary10Percent10Minutes limit exposure since only a small chunk of users see the new version before full rollout. Auto rollback based on CloudWatch alarms means errors revert fast and with minimal impact. Pretty sure that fits "operational efficiency" best, but open to arguments if someone strongly prefers A.
I don't think it's B. A. Canary can miss immediate issues, and AllAtOnce with auto rollback still reverts fast.
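For reference, here is a rough sketch of the deployment-group settings the canary-plus-rollback answer describes: a predefined Lambda canary config with automatic rollback wired to a CloudWatch alarm. The alarm name is a placeholder.

```python
# Sketch of CodeDeploy deployment-group settings for a Lambda canary with
# alarm-driven automatic rollback. "my-error-alarm" is a placeholder name.
deployment_group_config = {
    # Predefined config: 10% of traffic first, remainder after 10 minutes.
    "deploymentConfigName": "CodeDeployDefault.LambdaCanary10Percent10Minutes",
    "autoRollbackConfiguration": {
        "enabled": True,
        # Roll back when the watched alarm fires during the canary window.
        "events": ["DEPLOYMENT_STOP_ON_ALARM"],
    },
    "alarmConfiguration": {
        "enabled": True,
        "alarms": [{"name": "my-error-alarm"}],
    },
}
```

The design point the discussion keeps circling: AllAtOnce with rollback still exposes 100% of users before the alarm fires, while the canary caps exposure at 10% for the first 10 minutes.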
Q: 4
A company is building a new pipeline by using AWS CodePipeline and AWS CodeBuild in a build
account. The pipeline consists of two stages. The first stage is a CodeBuild job to build and package
an AWS Lambda function. The second stage consists of deployment actions that operate on two
different AWS accounts: a development environment account and a production environment account.
The deployment stages use the AWS CloudFormation action that CodePipeline invokes to deploy
the infrastructure that the Lambda function requires.
A DevOps engineer creates the CodePipeline pipeline and configures the pipeline to encrypt build
artifacts by using the AWS Key Management Service (AWS KMS) AWS managed key for Amazon S3
(the aws/s3 key). The artifacts are stored in an S3 bucket. When the pipeline runs, the
CloudFormation actions fail with an access denied error.
Which combination of actions must the DevOps engineer perform to resolve this error? (Select
TWO.)
Options
Discussion
Yeah, BE makes sense here. The aws/s3 managed key doesn't let you do cross-account decrypt, so option B's custom KMS key is needed. E is about updating the S3 bucket policy for those external roles. Pretty sure that's the fix but open to input if anyone disagrees.
B and E
It's BE, managed keys like aws/s3 can’t do cross-account so B is needed, not C.
BE tbh. You need a customer managed KMS key so you can grant decrypt to the roles (B), and updating the S3 bucket policy for those cross-account IAM roles is covered in E. AWS managed KMS keys like aws/s3 don't support cross-account decryption. Pretty sure this is the correct combo but open to corrections.
B/E? Using a customer managed KMS key (B) makes sense for controlling decrypt permissions, and E nails the IAM role plus S3 bucket policy part for cross-account CloudFormation. That's usually what AWS recommends, I think. Let me know if you see it differently!
Maybe B and E. Letting CloudFormation decrypt artifacts with a customer managed KMS key (B) is key here, and E covers bucket policy plus cross-account IAM setup for CloudFormation actions. Saw a similar scenario in practice exams, this lines up with best-practice permissions. Clear question layout!
B and E imo. Customer-managed KMS key is needed for cross-account decrypt, not the default aws/s3 one, so that's B. E covers updating the bucket policy to give those roles access. Seen this in official guides and labs, pretty sure these are both required. Anyone think something else applies?
Why does E work over D here if both use roles and permissions?
It's B and E here. The aws/s3 managed key doesn’t support cross-account decrypt, so you need a customer-managed KMS key (B). Then E is about setting up the S3 bucket policy for those cross-account roles. Seen this setup before, that’s usually what fixes it.
Option B and E, classic AWS gotcha with managed KMS keys not allowing cross-account decrypt. Gotta swap in a customer-managed key and update the bucket policy for those roles. Happens a lot in practice exams, pretty sure that's right.
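To illustrate why the customer managed key matters: the key policy below is a hedged sketch of the statement you would add so the cross-account deployment roles can decrypt pipeline artifacts. Account IDs and role names are invented for the example; the aws/s3 managed key offers no equivalent editable policy.

```python
# Sketch of a customer-managed KMS key policy statement granting the
# dev/prod account roles decrypt access to pipeline artifacts.
# All ARNs below are placeholders.
kms_key_policy_statement = {
    "Sid": "AllowCrossAccountArtifactDecrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": [
            "arn:aws:iam::222222222222:role/DevCloudFormationRole",
            "arn:aws:iam::333333333333:role/ProdCloudFormationRole",
        ]
    },
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",  # "*" means "this key" inside a key policy
}
```

The matching half of the fix (option E in the discussion) is granting those same role ARNs read access in the artifact bucket's policy.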
Q: 5
A DevOps engineer has automated a web service deployment by using AWS CodePipeline with the
following steps:
1) An AWS CodeBuild project compiles the deployment artifact and runs unit tests.
2) An AWS CodeDeploy deployment group deploys the web service to Amazon EC2 instances in the
staging environment.
3) A CodeDeploy deployment group deploys the web service to EC2 instances in the production
environment.
The quality assurance (QA) team requests permission to inspect the build artifact before the
deployment to the production environment occurs. The QA team wants to run an internal
penetration testing tool to conduct manual tests. The tool will be invoked by a REST API call.
Which combination of actions should the DevOps engineer take to fulfill this request? (Choose two.)
Options
Discussion
Labs on CodePipeline approvals and Lambda triggers helped with this one. A, E.
It's D and C. For D, the pipeline can hit the penetration testing tool REST API directly, doesn't seem like Lambda is needed here. C could fit since CodeDeploy groups allow hooks for manual steps. Not totally sure though if I missed something with CodePipeline stages.
A and E work best. Manual approval (A) gives QA a checkpoint, and E (Lambda) is needed to trigger the penetration test API since CodePipeline can't natively hit REST endpoints. Pretty sure that's correct but let me know if anyone thinks otherwise.
Ok, A and E for this. Manual approval gives QA a hold point, and using Lambda (E) is needed because CodePipeline can't hit REST APIs natively. Pretty sure that's the intended combo here, but correct me if I'm off!
A and E, not D. Official AWS study guide and lab practice questions cover manual approvals and Lambda integrations in CodePipeline, so I'm sticking with A and E.
A is wrong, D. Had something like this in a mock, the pipeline can just call the REST API directly so Lambda isn’t really needed. Pretty sure D covers the automation part, but not 100% on A.
A and E imo, saw a similar question in an exam report. Manual approval for QA and Lambda to trigger their tool fits.
B tbh, but only because I saw a similar question on a practice exam where B was correct. Not 100% sure here though.
A and E. Manual approval lets QA pause/review, Lambda for the API call since CodePipeline can't hit REST directly. Ran into this in some official practice sets, so pretty sure that's it. Open to other takes if I'm missing something.
Option D. saw something similar in a practice test and the official guide covers API triggers.
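For the A+E camp, here is a sketch of what the extra pipeline stage would look like: a manual approval gate for QA, followed by a Lambda invoke action that calls the penetration testing tool's REST API. The stage, action, and function names are made up for illustration.

```python
# Sketch of a CodePipeline stage combining a manual approval (QA hold point)
# with a Lambda invoke action. "invoke-pentest-api" is a hypothetical
# function name, not from the question.
qa_stage = {
    "name": "QA-Gate",
    "actions": [
        {
            "name": "QAApproval",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "runOrder": 1,
        },
        {
            "name": "InvokePenTest",
            "actionTypeId": {
                "category": "Invoke",
                "owner": "AWS",
                "provider": "Lambda",
                "version": "1",
            },
            "configuration": {"FunctionName": "invoke-pentest-api"},
            "runOrder": 2,  # runs only after the approval is granted
        },
    ],
}
```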
Q: 6
A company has deployed an application in a production VPC in a single AWS account. The application
is popular and is experiencing heavy usage. The company’s security team wants to add additional
security, such as AWS WAF, to the application deployment. However, the application's product
manager is concerned about cost and does not want to approve the change unless the security team
can prove that additional security is necessary.
The security team believes that some of the application's demand might come from users that have
IP addresses that are on a deny list. The security team provides the deny list to a DevOps engineer. If
any of the IP addresses on the deny list access the application, the security team wants to receive
automated notification in near real time so that the security team can document that the application
needs additional security. The DevOps engineer creates a VPC flow log for the production VPC.
Which set of additional steps should the DevOps engineer take to meet these requirements MOST
cost-effectively?
Options
Discussion
Option A. Similar practice questions point to CloudWatch Logs with a metric filter as the fastest and cheapest alerting option for this scenario.
Yeah, makes sense to pick A. CloudWatch metric filters plus SNS is about as cost-efficient and quick as it gets for alerting on VPC flow logs. The other options have way more moving parts. If someone found a cheaper way, let me know!
Option A Metric filters in CloudWatch are super cheap and the alerts are basically instant. I think that's why A makes sense here for cost and speed.
Probably A since CloudWatch metric filters plus SNS alerts are both quick and low-cost. B and C involve Athena or OpenSearch, which adds unnecessary complexity and more charges. Trap is going for analytics instead of direct alerting. Open if someone disagrees, but that's how I see it.
A tbh, CloudWatch metric filters plus alarms are the cheapest and fastest way to hit that "near real-time" alert requirement. Athena/OpenSearch add more cost and lag. Pretty sure that's the best fit for what they're asking here, but open to other ideas.
A
A . Metric filters in CloudWatch let you scan for specific IPs and alert fast with minimal setup, especially for accepted traffic. Cost is low since you avoid S3, Athena or OpenSearch overhead. If they needed all traffic or longer retention, maybe a different answer, but here A is the clear win imo.
A. not D. CloudWatch metric filters are a common exam answer here since they're cheaper and near real-time. D looks tempting but it's more costly and adds extra moving parts. Pretty sure A is the best fit unless I've missed something.
A , everything with Athena or OpenSearch (like B and C) costs more and adds delay. Pretty sure the CloudWatch metric filter in A is the AWS go-to for fast, cheap alerts here. If I'm missing something subtle let me know.
A CloudWatch metric filters are built for this kind of alerting and don't hit the costs or lag you'd get with S3 plus Athena or OpenSearch. It's super common for security teams to wire up VPC flow logs to CloudWatch, set up a filter, and trigger SNS right away. Pretty sure that's what AWS best practices suggest here. If anyone has tried B and found it cheaper, let me know!
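To make the metric-filter answer concrete, here is a hedged sketch that builds a CloudWatch Logs filter pattern over the default space-delimited VPC flow log format, matching ACCEPTed traffic whose source address is on the deny list. The two IPs are placeholders for the security team's actual list.

```python
# Build a space-delimited CloudWatch Logs metric filter pattern for the
# default VPC flow log format. The deny-list IPs are placeholders.
deny_list = ["198.51.100.7", "203.0.113.9"]

# OR the deny-listed addresses together on the srcaddr field.
src_clause = " || ".join(f'srcaddr = "{ip}"' for ip in deny_list)

filter_pattern = (
    f"[version, account, eni, {src_clause}, dstaddr, srcport, dstport, "
    f"protocol, packets, bytes, start, end, action = ACCEPT, status]"
)
```

A metric filter with this pattern, an alarm on the resulting metric, and an SNS topic is the whole alerting chain, which is why it undercuts the Athena/OpenSearch options on both cost and latency.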
Q: 7
A company that runs many workloads on AWS has an Amazon EBS spend that has increased over
time. The DevOps team notices there are many unattached
EBS volumes. Although some workloads legitimately detach volumes, any volume that has been
unattached for more than 14 days is stale and no longer needed. A DevOps engineer has been tasked with creating automation that
deletes unattached EBS volumes that have been unattached for 14 days.
Which solution will accomplish this?
Options
Discussion
C . CloudWatch + Lambda lets you actually target unattached volumes, while B is a common trap since Data Lifecycle Manager doesn't work for unattached EBS. Easy to miss that detail, but I've seen similar in exam prep.
It's C, no question.
Gotta agree with C here. Lambda plus CloudWatch Events can automate both tagging and cleanup, which is exactly what's needed if you want volumes gone after being unattached for 14 days. Data Lifecycle Manager (B) can't do this for unattached EBS volumes-I've checked docs before. If I'm missing something, let me know!
Pretty sure I ran into a similar one in exam mentioned in recent exam reports, most picks went with C for the Lambda tagging and delete approach.
Probably C. Data Lifecycle Manager (B) doesn't support deleting unattached EBS volumes, that's a common trap option. The Lambda with tagging logic in C is the way most real-world automation handles this, pretty sure. Happy to be corrected though.
C tbh. Lambda triggered by CloudWatch Events is the standard way to automate this kind of cleanup for unattached EBS volumes.
B for me since Data Lifecycle Manager deals with EBS lifecycle, so seems logical to use it for scheduling deletions. Not 100% sure if it supports unattached volumes directly, but that's the closest fit I see.
C . Data Lifecycle Manager (B) can’t handle unattached EBS volumes, which trips up a lot of people. The Lambda tagging method in C fits the automation need. Not 100% sure if there’s a new AWS feature for this yet, correct me if so.
Not sure B works here, Data Lifecycle Manager won't delete unattached EBS volumes. C is the safer pick in this scenario.
B or C? Data Lifecycle Manager in B seems like it should handle this, but maybe I'm missing something in the docs. Easy to pick B by mistake if you don't remember its limitations.
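For what the Lambda in the C-style answer actually has to decide: on each scheduled run it tags newly seen unattached volumes with a first-seen date, then deletes volumes whose tag is 14 or more days old. The boto3 calls that list, tag, and delete volumes are omitted here; this sketch shows only the staleness check, with the 14-day threshold taken from the question.

```python
from datetime import datetime, timedelta

# Core decision logic for the scheduled cleanup Lambda (sketch).
# The boto3 describe/tag/delete calls around it are intentionally omitted.
STALE_AFTER = timedelta(days=14)

def is_stale(first_seen_unattached: datetime, now: datetime) -> bool:
    """Return True once a volume has been unattached for 14+ days."""
    return now - first_seen_unattached >= STALE_AFTER
```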
Q: 8
A company has multiple development teams in different business units that work in a shared single
AWS account. All Amazon EC2 resources that are created in the account must include tags that specify
who created the resources. The tagging must occur within the first hour of resource creation.
A DevOps engineer needs to add tags to the created resources that include the user ID that created
the resource and the cost center ID. The DevOps engineer configures an AWS Lambda function with
the cost center mappings to tag the resources. The DevOps engineer also sets up AWS CloudTrail in
the AWS account. An Amazon S3 bucket stores the CloudTrail event logs.
Which solution will meet the tagging requirements?
Options
Discussion
D . EventBridge lets you catch those EC2 API calls from CloudTrail in near real time so tagging happens quickly and automatically. The other options don’t hook directly into resource creation events the way D does. Pretty sure this is the most efficient solution, but open to seeing if someone had luck with C.
D . EventBridge picks up the EC2 API calls from CloudTrail almost instantly and triggers Lambda, so tags are added within the first hour every time. C could work but would be slower and might not tag fast enough. Pretty sure D is what AWS recommends for this flow. Anyone see a scenario where C might fit better?
D . EventBridge hooks right into the CloudTrail events so tags get added right after EC2 creation, not waiting for a schedule.
Probably D here. EventBridge with CloudTrail triggers tagging almost right after creation, well within the hour window.
A is wrong, D. Official AWS docs and hands-on labs make this clear, EventBridge matches CloudTrail for near-instant tagging.
D not C. C is tempting if you just see "within an hour" but real-time tagging is the key and EventBridge triggers are much faster.
Had something like this in a mock and D matched the requirement best. EventBridge reacts to CloudTrail events almost instantly so tagging happens as soon as EC2 is created, not just every hour like C. Pretty sure it's D but open if I missed anything.
Maybe D. CloudTrail events to EventBridge fire right after EC2 creation, so tagging happens fast and reliably. C seems like a trap here, hourly Lambda could cut it close or waste cycles. I think D is right but open to other views.
That tracks with the official practice questions I've seen, D is correct. EventBridge rules fed by CloudTrail hit that "within an hour" SLA much more reliably than hourly scans. Anyone checking for tagging compliance on exam, the AWS docs cover this combo pretty well.
Why does C keep coming up? Hourly polling feels risky for the 1-hour tag SLA-D listens in real time, right?
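The event-driven answer everyone is converging on hinges on one rule pattern, sketched here: CloudTrail delivers EC2 API calls to EventBridge as "AWS API Call via CloudTrail" events, so a rule like this can invoke the tagging Lambda moments after a successful RunInstances call.

```python
# Sketch of the EventBridge pattern that fires the tagging Lambda on
# EC2 instance creation, via CloudTrail-delivered API-call events.
run_instances_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["RunInstances"],
    },
}
```

The Lambda target then reads `detail.userIdentity` from the matched event to resolve the creator's user ID against the cost center mapping, comfortably inside the one-hour window.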
Q: 9
A production account has a requirement that any Amazon EC2 instance that has been logged in to
manually must be terminated within 24 hours. All applications in the production account are using
Auto Scaling groups with the Amazon CloudWatch Logs agent configured.
How can this process be automated?
Options
Discussion
D , official guide and practice exams cover this automation flow.
D , since this is the only option that fully automates the required workflow with tagging on manual login and scheduled termination, no humans needed. The CloudWatch Logs subscription to Lambda makes it seamless. One minor edge case: if you had very specific retention/audit or approval requirements, A could be considered, but for pure automation D fits best. Correct me if I missed something.
D
Pretty sure B is a distractor since it depends on ops team action. D automates tagging and termination without involving people.
Had something like this in a mock, it's D. Direct CloudWatch Logs to Lambda, auto-tags, then terminates tagged instances daily. Fits the automation requirement exactly, no extra services needed. Pretty confident but open if anyone thinks otherwise.
D imo
Don't think B is right, because that relies too much on manual effort from ops which could easily exceed the 24 hour window. D automates the process end to end using CloudWatch Logs and Lambda, so it's a better fit here. Anyone else see A as a distractor?
B
D, not A, is better here. A uses Step Functions which feels like overkill for just tagging and terminating instances after logins. D goes straight from CloudWatch Logs to Lambda for automation with less complexity. Pretty sure D is how AWS guides recommend it unless extra orchestration is needed.
See this pattern in the official exam guide and labs. D
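To spell out the two moving parts in the fully automated answer: a CloudWatch Logs subscription filter spots interactive logins in the agent's stream and triggers a Lambda that tags the instance, and a daily job terminates anything tagged more than 24 hours earlier. The filter text and tag key below are assumptions for illustration; only the 24-hour check is shown as code.

```python
from datetime import datetime, timedelta

# Assumed subscription filter pattern and tag key (placeholders, not from
# the question). The tagging/terminating boto3 calls are omitted.
LOGIN_FILTER_PATTERN = '"Accepted" "ssh"'
TAG_KEY = "manual-login-at"

def should_terminate(tagged_at: datetime, now: datetime) -> bool:
    """True once 24 hours have passed since the login tag was applied."""
    return now - tagged_at >= timedelta(hours=24)
```

Because the instances sit in Auto Scaling groups, termination simply lets the group replace them with clean capacity, which is why no human step is needed.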
Q: 10
A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being
used in the pipeline to deploy an application to Amazon Elastic Container Service (Amazon ECS) using
the blue/green deployment model. The company wants to implement scripts to test the green
version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If
errors are discovered during these tests, the application must be rolled back.
Which strategy will meet these requirements?
Options
Discussion
Option C The AfterAllowTestTraffic hook in CodeDeploy runs tests after test traffic but before full production traffic, so errors found here can trigger rollback. Pretty sure that fits what the question's looking for.
C . AfterAllowTestTraffic lets you run tests on the green environment after test traffic but before shifting prod users, which matches what they're asking. D is a trap since AfterAllowTraffic happens after full cutover, so too late to catch errors before users hit it. Pretty sure this matches exam reports but open to corrections.
C. Saw nearly identical question during a practice test, AfterAllowTestTraffic is the right place for these checks.
C, AfterAllowTestTraffic is where you can test the green environment before any real user traffic. Pretty sure that's what the question wants.
It's C, AfterAllowTestTraffic lets you run tests before prod traffic hits. Not totally sure though.
C imo. AfterAllowTestTraffic is the CodeDeploy hook that runs *before* live production traffic is shifted, so it fits the requirement to test green first. D is a common trap but runs too late in the lifecycle. Anybody disagree?
I don’t think it’s D. C fits because AfterAllowTestTraffic is when you can test green before prod users. D happens too late.
C or D. Had something like this in a mock recently and picked D because AfterAllowTraffic seemed to fit the flow, since that's when CodeDeploy shifts live traffic and you can still react fast if issues pop up. But now I'm second guessing since maybe tests should happen before live cutover (so C is better). Could go either way honestly, curious what others think.
Maybe C here. D looks tempting but AfterAllowTraffic happens after production cutover, so can't catch errors in time for rollback.
Why not B here? The timing of the hook vs a pipeline stage is tricky.
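On the hook-timing question, here is a sketch of the relevant AppSpec "hooks" section for an ECS blue/green deployment, shown as a Python structure to match the other examples. AfterAllowTestTraffic runs the test Lambda against the green task set on the test listener, before production traffic shifts, so a failure there can still trigger rollback. The function ARN is a placeholder.

```python
# Sketch of an ECS blue/green AppSpec hooks section. The Lambda ARN is a
# placeholder; in a real AppSpec this would be YAML or JSON.
appspec_hooks = {
    "version": 0.0,
    "Hooks": [
        {
            # Runs after test traffic reaches green, before prod cutover.
            "AfterAllowTestTraffic": (
                "arn:aws:lambda:us-east-1:111111111111:function:run-green-tests"
            )
        }
    ],
}
```

That ordering is exactly why D (AfterAllowTraffic) is too late: by then production traffic has already shifted to the green task set.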