Q: 11
A health insurance company stores personally identifiable information (PII) in an Amazon S3 bucket.
The company uses server-side encryption with S3 managed encryption keys (SSE-S3) to encrypt the
objects. According to a new requirement, all current and future objects in the S3 bucket must be
encrypted by keys that the company’s security team manages. The S3 bucket does not have
versioning enabled.
Which solution will meet these requirements?
Options
Discussion
B tbh. Changing to SSE-KMS with a customer-managed key plus re-upload covers the 'all current and future objects' part. Denying unencrypted uploads helps too. Pretty sure that's what they're looking for.
Probably B. Matches what I've seen in similar questions, clear on covering both current and future objects with company-managed keys.
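For anyone who wants to see B concretely, here's a rough boto3 sketch (bucket name and key ARN are made up): set the customer managed key as the bucket default for future uploads, then re-encrypt existing objects in place with a copy-to-self.

```python
import boto3

# Assumptions: bucket name and key ARN are placeholders.
BUCKET = "example-pii-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"

s3 = boto3.client("s3")

# Make the customer managed key the bucket default so all future
# uploads use SSE-KMS with it.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KMS_KEY_ARN,
            }
        }]
    },
)

# Re-encrypt every existing object in place with a copy-to-self.
# Changing the encryption settings is what makes the self-copy legal;
# objects over 5 GB would need a multipart copy instead.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=KMS_KEY_ARN,
        )
```

A bucket policy denying uploads without the `s3:x-amz-server-side-encryption` header would be the belt-and-suspenders piece on top of this.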
Q: 12
A company has an application that uses AWS Key Management Service (AWS KMS) to encrypt and decrypt data. The application stores data in an Amazon S3 bucket in an AWS Region. Company security policies require that the data is encrypted before being uploaded to S3 and decrypted when read. The S3 bucket is replicated to other AWS Regions.
A solutions architect must design a solution so that the application can encrypt and decrypt data across Regions using the same key.
Options
Discussion
A. Only multi-Region KMS keys let you encrypt in one Region and decrypt in another using the same logical key. The other options don't meet that cross-Region requirement. If someone knows a better workaround, let me know.
Option A (saw similar on exam reports)
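Sketch of the multi-Region key flow in boto3, if it helps (Regions and description are placeholders; replication is asynchronous, so real code should poll until the replica is enabled):

```python
import boto3

# Assumptions: Regions and description are placeholders.
PRIMARY_REGION = "us-east-1"
REPLICA_REGION = "eu-west-1"

kms_primary = boto3.client("kms", region_name=PRIMARY_REGION)

# Create a multi-Region primary key (the important bit is MultiRegion=True).
key_id = kms_primary.create_key(
    Description="example multi-Region key",
    MultiRegion=True,
)["KeyMetadata"]["KeyId"]

# Replicate it into the second Region. Replication is asynchronous, so
# real code should poll describe_key until the replica's KeyState is
# Enabled before using it.
kms_primary.replicate_key(KeyId=key_id, ReplicaRegion=REPLICA_REGION)

# Encrypt in the primary Region...
ct = kms_primary.encrypt(KeyId=key_id, Plaintext=b"hello")["CiphertextBlob"]

# ...and decrypt in the replica Region: same key material, same logical key.
kms_replica = boto3.client("kms", region_name=REPLICA_REGION)
pt = kms_replica.decrypt(CiphertextBlob=ct)["Plaintext"]
```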
Q: 13
A company has deployed its database on an Amazon RDS for MySQL DB instance in the us-east-1 Region. The company needs to make its data available to customers in Europe. The customers in Europe must have access to the same data as customers in the United States (US) and will not tolerate high application latency or stale data. The customers in Europe and the customers in the US need to write to the database. Both groups of customers need to see updates from the other group in real time.
Which solution will meet these requirements?
Options
Discussion
A or D? Both mention Aurora and write forwarding, but only A covers the migration steps for RDS to Aurora first, which matches what a similar exam question required. Super clear options for a tricky scenario.
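If it helps, the write-forwarding part looks roughly like this in boto3 once the database is already on Aurora MySQL (identifiers, account ID, and engine version are placeholders, not the real answer's values):

```python
import boto3

# Assumptions: identifiers, account ID, and engine version are placeholders.
rds_us = boto3.client("rds", region_name="us-east-1")
rds_eu = boto3.client("rds", region_name="eu-west-1")

# Promote the existing Aurora MySQL cluster into a global database.
rds_us.create_global_cluster(
    GlobalClusterIdentifier="example-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:cluster:example-aurora"
    ),
)

# Add a secondary cluster in Europe with write forwarding enabled, so
# writes issued in eu-west-1 are forwarded to the primary cluster and
# both groups see each other's updates.
rds_eu.create_db_cluster(
    DBClusterIdentifier="example-aurora-eu",
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.04.0",  # placeholder version
    GlobalClusterIdentifier="example-global",
    EnableGlobalWriteForwarding=True,
)
```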
Q: 14
A company needs to move some on-premises Oracle databases to AWS. The company has chosen to
keep some of the databases on premises for business compliance reasons. The on-premises
databases contain spatial data and run cron jobs for maintenance. The company needs to connect to
the on-premises systems directly from AWS to query data as a foreign table. Which solution will meet
these requirements?
Options
Discussion
No comments yet.
Q: 15
A company needs to use an AWS Transfer Family SFTP-enabled server with an Amazon S3 bucket to receive updates from a third-party data supplier. The data is encrypted with Pretty Good Privacy (PGP) encryption. The company needs a solution that will automatically decrypt the data after the company receives the data.
A solutions architect will use a Transfer Family managed workflow. The company has created an IAM service role by using an IAM policy that allows access to AWS Secrets Manager and the S3 bucket. The role's trust relationship allows the transfer.amazonaws.com service to assume the role.
What should the solutions architect do next to complete the solution for automatic decryption?
Options
Discussion
A is wrong; it's C. You need the PGP private key for decryption, not the public key, and the decryption step belongs in the workflow's nominal steps (normal processing), not the exception-handling steps. This fits with how managed workflows are set up in Transfer Family. Makes sense based on similar questions I've seen.
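Rough boto3 sketch of the remaining steps (server ID, bucket, and role ARN are placeholders; it assumes the PGP private key is already stored in Secrets Manager, and if I remember right the server looks for it under a name like aws/transfer/<server-id>/@pgp-default):

```python
import boto3

transfer = boto3.client("transfer")

# Assumptions: bucket, server ID, and role ARN are placeholders; the PGP
# private key is assumed to already be in Secrets Manager.
workflow = transfer.create_workflow(
    Description="Decrypt PGP files after upload",
    Steps=[  # nominal steps, i.e. the normal processing path
        {
            "Type": "DECRYPT",
            "DecryptStepDetails": {
                "Name": "pgp-decrypt",
                "Type": "PGP",
                "SourceFileLocation": "${original.file}",
                "DestinationFileLocation": {
                    "S3FileLocation": {
                        "Bucket": "example-bucket",
                        "Key": "decrypted/",
                    }
                },
            },
        }
    ],
)

# Attach the workflow and the IAM service role to the SFTP server so the
# decryption runs on every successful upload.
transfer.update_server(
    ServerId="s-1234567890abcdef0",
    WorkflowDetails={
        "OnUpload": [
            {
                "WorkflowId": workflow["WorkflowId"],
                "ExecutionRole": (
                    "arn:aws:iam::111122223333:role/transfer-workflow-role"
                ),
            }
        ]
    },
)
```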
Q: 16
A company hosts its primary API on AWS using Amazon API Gateway and AWS Lambda functions.
Internal applications and external customers use this API. Some customers also use a legacy API
hosted on a standalone EC2 instance.
The company wants to increase security across all APIs to prevent denial of service (DoS) attacks,
check for vulnerabilities, and guard against common exploits.
What should a solutions architect do to meet these requirements?
Options
Discussion
It's C. Saw a similar question in practice: WAF goes with API Gateway for the DoS and common-exploit protection, Inspector covers vulnerability checks on the EC2 instance, and GuardDuty just monitors.
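For reference, attaching an existing WAFv2 web ACL to an API Gateway stage is basically two calls in boto3 (ARNs and API ID are placeholders; the ACL has to be REGIONAL scope for API Gateway):

```python
import boto3

# Assumptions: Region, API ID, and stage name are placeholders, and a
# REGIONAL web ACL with the relevant managed rules already exists.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Grab the ARN of the existing web ACL.
web_acl_arn = wafv2.list_web_acls(Scope="REGIONAL")["WebACLs"][0]["ARN"]

# Associate it with a specific API Gateway stage.
stage_arn = "arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod"
wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=stage_arn)
```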
Q: 17
A company is using AWS CodePipeline for the CI/CD of an application to an Amazon EC2 Auto Scaling
group. All AWS resources are defined in AWS
CloudFormation templates. The application artifacts are stored in an Amazon S3 bucket and deployed
to the Auto Scaling group using instance user data scripts.
As the application has become more complex, recent resource changes in the CloudFormation
templates have caused unplanned downtime.
How should a solutions architect improve the CI/CD pipeline to reduce the likelihood that changes in
the templates will cause downtime?
Options
Discussion
B is the stronger answer since it brings in CloudFormation change sets for safe previews plus blue/green deployment with CodeDeploy, which really minimizes downtime. Automated testing with CodeBuild helps catch stuff early too. Pretty sure that's what AWS wants here, but let me know if someone thinks otherwise.
C
I think C looks good because it adds validation steps using IDE and CLI checks, so errors in the CloudFormation templates could be caught early. A manual test plan before production is pretty common too. The trap is not noticing that B's blue/green approach is more robust; manual checks just feel safer for complex apps.
Does the question specify if manual approval is required before production, or just automation? If manual approval is mandatory, C might fit better, but if not, B makes more sense.
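Either way, the change-set half of B looks something like this in boto3 (stack name and template URL are placeholders); the point is you can inspect the Replacement flags before anything touches the running stack:

```python
import boto3

# Assumptions: stack name and template URL are placeholders.
cfn = boto3.client("cloudformation")

# Create a change set instead of updating the stack directly.
cfn.create_change_set(
    StackName="example-app",
    TemplateURL="https://s3.amazonaws.com/example-bucket/template.yaml",
    ChangeSetName="preview-changes",
    ChangeSetType="UPDATE",
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="example-app", ChangeSetName="preview-changes"
)

# Inspect what would change; Replacement=True is the downtime red flag.
changes = cfn.describe_change_set(
    StackName="example-app", ChangeSetName="preview-changes"
)["Changes"]
for change in changes:
    rc = change["ResourceChange"]
    print(rc["Action"], rc["LogicalResourceId"], rc.get("Replacement"))

# Execute only after review (or an approval gate in the pipeline).
cfn.execute_change_set(
    StackName="example-app", ChangeSetName="preview-changes"
)
```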
Q: 18
A company in the United States (US) has acquired a company in Europe. Both companies use the
AWS Cloud. The US company has built a new application with a microservices architecture. The US
company is hosting the application across five VPCs in the us-east-2 Region. The application must be
able to access resources in one VPC in the eu-west-1 Region. However, the application must not be
able to access any other VPCs. The VPCs in both Regions have no overlapping CIDR ranges. All
accounts are already consolidated in one organization in AWS Organizations. Which solution will
meet these requirements MOST cost-effectively?
Options
Discussion
These inter-region questions are so picky about cost vs scale. Probably D fits best, since using VPC peering is the cheapest way to link each us-east-2 VPC directly to that single eu-west-1 VPC. Not 100% sure since transit gateway pops up a lot in AWS practice, but D matches the limited access requirement.
Anyone use the official guide or AWS whitepapers for networking scenarios like this? Practice exams seem to hit these peering vs transit gateway questions a lot.
It's D. If the requirement changed and the app needed to access more than just one VPC in eu-west-1, would Transit Gateway (option B) make more sense even with higher costs?
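Sketch of D in boto3 (all VPC IDs are placeholders; add PeerOwnerId if the accepter is a different account): one requester-side peering connection per us-east-2 VPC, accepted from the eu-west-1 side. You'd still add route table entries for the peer CIDRs afterwards.

```python
import boto3

# Assumptions: VPC IDs are placeholders; CIDRs don't overlap (as stated).
ec2_us = boto3.client("ec2", region_name="us-east-2")
ec2_eu = boto3.client("ec2", region_name="eu-west-1")

EU_VPC_ID = "vpc-0eu0000000000000"
US_VPC_IDS = ["vpc-0us0000000000001", "vpc-0us0000000000002"]  # ...five total

for vpc_id in US_VPC_IDS:
    # One peering connection per us-east-2 VPC, pointing only at the
    # single eu-west-1 VPC, so nothing else becomes reachable.
    pcx_id = ec2_us.create_vpc_peering_connection(
        VpcId=vpc_id,
        PeerVpcId=EU_VPC_ID,
        PeerRegion="eu-west-1",
    )["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # Accept from the accepter's Region (real code should wait for the
    # pending-acceptance state first).
    ec2_eu.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
```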
Q: 19
A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB
instance in another AWS account. A solutions architect needs to design a migration strategy that will
require no downtime and that will minimize the amount of time necessary to complete the
migration. The migration strategy must replicate all existing data and any new data that is created
during the migration The target database must be identical to the source database at completion of
the migration process
All applications currently use an Amazon Route 53 CNAME record as their endpoint for
communication with the RDS for Oracle DB instance The RDS for Oracle DB instance is in a private
subnet.
Which combination of steps should the solutions architect take to meet these requirements? (Select
THREE)
Options
Discussion
A/C/E? Usually A covers schema migration, C is about VPC peering for secure cross-account DB traffic, and E handles the full+CDC using DMS to minimize downtime. Just watch out: if the source DB had been public, D might work but that's not the case here. See this type pop up a lot in practice questions.
C vs D. I think D works if you open up the source, but it's not usually recommended in prod.
A, C, E. Schema conversion (A), VPC peering for the network path (C), and DMS with full load plus CDC (E) is the usual path for cross-account, minimal-downtime migrations. Pretty sure that's right for this scenario.
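The E part (full load plus ongoing CDC) is a single DMS task type; rough boto3 sketch with placeholder ARNs, assuming the replication instance and both endpoints (Oracle source, PostgreSQL target) already exist and the peering gives network reachability:

```python
import boto3

# Assumptions: all ARNs are placeholders; replication instance and
# endpoints are assumed to exist already.
dms = boto3.client("dms", region_name="us-east-1")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INST",
    # Full load of existing data, then change data capture so writes made
    # during the migration keep replicating until cutover.
    MigrationType="full-load-and-cdc",
    TableMappings=(
        '{"rules":[{"rule-type":"selection","rule-id":"1",'
        '"rule-name":"1","object-locator":{"schema-name":"%",'
        '"table-name":"%"},"rule-action":"include"}]}'
    ),
)
```

Cutover is then just repointing the Route 53 CNAME once the task shows zero replication lag.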
Q: 20
A company is building a hybrid environment that includes servers in an on-premises data center and
in the AWS Cloud. The company has deployed Amazon EC2 instances in three VPCs. Each VPC is in a
different AWS Region. The company has established an AWS Direct Connect connection to the data
center from the Region that is closest to the data center.
The company needs the servers in the on-premises data center to have access to the EC2 instances in
all three VPCs. The servers in the on-premises data center also must have access to AWS public
services.
Which combination of steps will meet these requirements with the LEAST cost? (Select TWO.)
Options
Discussion
This question's wording is clear, thanks! I think A and B make sense since the Direct Connect gateway lets you reach VPCs in other regions using one connection, helping keep costs down. B points specifically to connecting those extra VPCs through the gateway. Not totally sure if that's enough for access to public AWS services though, so open to corrections.
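Sketch of the Direct Connect gateway piece in boto3 (names, ASN, and gateway IDs are placeholders). On the open question above: the Direct Connect gateway only covers private access to the VPCs; reaching AWS public services typically means a public virtual interface on the same connection.

```python
import boto3

# Assumptions: names, ASN, and virtual private gateway IDs are placeholders.
dx = boto3.client("directconnect", region_name="us-east-1")

# A Direct Connect gateway is a global object: one gateway can front VPCs
# in all three Regions over the single existing connection.
gw_id = dx.create_direct_connect_gateway(
    directConnectGatewayName="example-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]["directConnectGatewayId"]

# Associate each VPC's virtual private gateway (one per Region).
for vgw_id in ["vgw-aaaaaaaa", "vgw-bbbbbbbb", "vgw-cccccccc"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        gatewayId=vgw_id,
    )
```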