Can someone double check if Aurora Global Database (option C) really supports active-active writes across regions? I thought only DynamoDB Global Tables (D) let you do true multi-region active-active writes, which matters for this gaming scenario. Maybe I'm missing something in how Aurora handles replication?
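For what it's worth, that's exactly why D holds up: Aurora Global Database has a single writer Region (secondaries are read-only, and write forwarding just proxies writes back to the primary), while Global Tables make every replica Region writable. Rough boto3 sketch of the multi-active setup, with a made-up table name and Regions:

```python
import boto3

# Assumes an existing table "game-sessions" in us-east-1 with
# streams enabled (NEW_AND_OLD_IMAGES), which Global Tables requires.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica Region (Global Tables version 2019.11.21).
dynamodb.update_table(
    TableName="game-sessions",
    ReplicaUpdates=[{"Create": {"RegionName": "ap-southeast-1"}}],
)

# Once the replica is ACTIVE, writes can land in either Region,
# which is the multi-active behavior Aurora doesn't offer.
apac = boto3.client("dynamodb", region_name="ap-southeast-1")
apac.put_item(
    TableName="game-sessions",
    Item={"sessionId": {"S": "abc-123"}, "score": {"N": "42"}},
)
```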
Yeah, D makes sense because NACLs can explicitly block outbound traffic from Application A's subnet to B's. Security groups are allow-only and have no deny rules at all, inbound or outbound. I think this fits the ask but let me know if you see it differently.
Is there a reason everyone’s skipping A? I know security groups can’t do an explicit deny, but just want to double check I’m not missing a trick. NACLs look right here but AWS changes stuff sometimes.
This looks like D, since you want to prevent Application A from sending anything to B, and NACLs handle outbound subnet traffic with explicit denies. Security groups can't block outbound that way. I think that covers the requirement, though one thing to check: NACLs only evaluate traffic crossing a subnet boundary, so this only works if A and B sit in different subnets. Anyone disagree?
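If anyone wants to sanity-check the deny mechanics, here's a minimal boto3 sketch (NACL ID and CIDR are placeholders): a numbered egress deny rule on Application A's subnet NACL, which is something security groups have no syntax for.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Deny all outbound traffic from App A's subnet toward App B's CIDR.
# NACL rules evaluate in rule-number order (lowest first), so this
# deny needs a lower number than any broader allow rule.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder: NACL on App A's subnet
    RuleNumber=100,
    Protocol="-1",            # all protocols
    RuleAction="deny",
    Egress=True,              # outbound rule
    CidrBlock="10.0.2.0/24",  # placeholder: App B's subnet CIDR
)
```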
Tricky part here is the encryption for cross-Region transfers, which rules out D since snapshot copies aren't encrypted in transit by default. So B is the only option that actually meets all the requirements without manual setup or extra tooling. If encryption weren't called out, maybe D could work, but not for this specific scenario I think. Disagree?
I don't think A, C, or D match the "least operational effort" part. B uses Global Datastore, which handles cross-Region encrypted replication automatically, so you avoid all the manual DMS or snapshot work. Saw a similar question in practice sets. Only thing to watch is that D might trick folks if you miss the encryption requirement!
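Assuming this is the ElastiCache for Redis question (the "global data store" wording matches the Global Datastore feature), the low-effort part is that you attach a secondary Region to an existing replication group and the service handles the encrypted replication for you. Rough boto3 sketch; group names are placeholders:

```python
import boto3

# Promote an existing encrypted Redis replication group in the
# primary Region into a Global Datastore.
primary = boto3.client("elasticache", region_name="us-east-1")
resp = primary.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="app-cache-global",  # placeholder
    PrimaryReplicationGroupId="app-cache",              # placeholder
)
global_id = resp["GlobalReplicationGroup"]["GlobalReplicationGroupId"]

# Attach a secondary replication group in another Region; the
# cross-Region replication is managed and encrypted in transit.
secondary = boto3.client("elasticache", region_name="eu-west-1")
secondary.create_replication_group(
    ReplicationGroupId="app-cache-eu",                  # placeholder
    ReplicationGroupDescription="Global Datastore secondary",
    GlobalReplicationGroupId=global_id,
)
```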
Had something like this in a mock and pretty sure C is the way to go. Using DynamoDB TTL with a Lambda function just to set the expiration attribute keeps things mostly serverless, and AWS handles the deletion afterward. No need for custom stream handling or extra services. Only possible caveat is TTL isn't instant; deletions can lag the expiry timestamp by up to a couple of days, but "within one month" leaves plenty of slack. Anyone see a risk with this approach?
Why does AWS have to make these TTL and Lambda combos so confusing? I don’t think it’s C; B makes more sense to me. With a DynamoDB stream you can handle the data as soon as the TTL delete lands by piping the stream straight into Lambda, which feels more real-time and less likely to miss an edge case if DynamoDB’s background process doesn’t catch something. Unless I'm missing a hidden overhead here… open to correction.
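The TTL mechanics are identical either way; the two approaches only differ in whether you react to the deletes. Here's a minimal boto3 sketch of the C side (table and attribute names are made up): enable TTL once, then stamp each item with an epoch-seconds expiry at write time.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# One-time setup: tell DynamoDB which attribute holds the expiry.
dynamodb.update_time_to_live(
    TableName="user-data",  # placeholder
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expireAt"},
)

# On write, set an epoch-seconds timestamp one month out;
# DynamoDB's background process deletes the item after it expires.
one_month = 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName="user-data",
    Item={
        "userId": {"S": "u-123"},
        "expireAt": {"N": str(int(time.time()) + one_month)},
    },
)
```

And if you do go the B/streams route: TTL deletes show up in the stream with userIdentity.principalId set to "dynamodb.amazonaws.com", so a Lambda can filter for exactly those records.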
I’m going with A. Edge-optimized gets you CloudFront, so traffic from anywhere hits the closest edge location, which means lower latency for global users. Caching and content encoding are nice bonuses, but they only matter because content is being served from edges closer to users. Pretty sure this is the textbook AWS way for reducing latency across regions, but I could be missing a detail here. Anyone see another angle?
Pretty sure A makes the most sense if you're trying to cut latency for users globally. Edge-optimized endpoints use CloudFront, so requests hit the nearest edge location and that speeds things up a lot. Caching plus compression with content encoding helps too. Reserved concurrency only helps backend Lambda scaling, not network latency, so I think A covers all the key points for this scenario. Open to hearing other angles though.
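For anyone curious, the endpoint type is just a flag when you create the API (it can also be changed later). Quick boto3 sketch with a placeholder API name:

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# EDGE routes client traffic through the nearest CloudFront POP,
# which is the part that cuts latency for a global user base.
api = apigw.create_rest_api(
    name="global-api",  # placeholder
    endpointConfiguration={"types": ["EDGE"]},
)
print(api["id"])  # deploy stages against this API as usual
```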
If the files change often or need quick cache invalidation, would ElastiCache (C/D) actually be better than CloudFront (B) for this global setup?
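I'd say CloudFront still fits: it caches at the network edge for any HTTP client, while ElastiCache is a cache your application servers query, so it doesn't help client-side latency directly. Frequently changing files are handled with short TTLs or invalidations. Sketch of an invalidation call; the distribution ID is a placeholder:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Force edge locations to drop cached copies of changed paths
# so the next request re-fetches from the origin.
cloudfront.create_invalidation(
    DistributionId="E1ABCDEF234567",  # placeholder
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/assets/*"]},
        "CallerReference": str(time.time()),  # any unique string
    },
)
```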
Actually pretty sure C is right here. Predictive scaling learns the workload pattern, so you don't have to build or maintain schedules yourself. Option B is easy but still manual config, so more effort in the long run.
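Agreed that the selling point is zero schedule upkeep. For reference, a predictive scaling policy is a single put_scaling_policy call (ASG name and target value are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Predictive scaling forecasts load from historical metrics and
# pre-launches capacity, instead of a schedule you tune by hand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,  # placeholder target CPU %
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",  # forecast and act on it
    },
)
```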