Q: 10
A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances.
The EC2 instances are in an Auto Scaling group. The number of transactions can vary, but the baseline
CPU utilization that is noted on each run is at least 60%. The company needs to provision the
capacity 30 minutes before the jobs run.
Currently, engineers complete this task by manually modifying the Auto Scaling group parameters.
The company does not have the resources to analyze the required capacity trends for the Auto
Scaling group counts. The company needs an automated way to modify the Auto Scaling group's
desired capacity.
Which solution will meet these requirements with the LEAST operational overhead?
Options
Discussion
Actually, pretty sure C is right here. Predictive scaling learns the workload pattern, so you don't have to maintain or analyze schedules yourself. Option B is easy but still manual config, so more effort in the long run.
Option C for sure. Predictive scaling is best here since it adapts automatically to changing loads and timing, especially with weekly jobs. B looks tempting, but it's a common trap on these fixed-schedule questions, while C truly minimizes ongoing effort. Open to other thoughts, but pretty sure this matches AWS best practices.
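For anyone who wants to see what C looks like in practice, here's a rough boto3 sketch of a predictive scaling policy. The ASG name and policy name are placeholders I made up; the 60% target and the 1800-second buffer come from the question's 60% CPU baseline and 30-minute lead time.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-jobs-asg",  # hypothetical ASG name
    PolicyName="weekly-batch-predictive",   # hypothetical policy name
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                # Keep forecasted CPU near the observed 60% baseline
                "TargetValue": 60.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        # Pre-launch instances ahead of the forecasted demand:
        # 1800 seconds = the 30-minute lead the question asks for
        "SchedulingBufferTime": 1800,
        # ForecastAndScale both forecasts and acts on the forecast
        "Mode": "ForecastAndScale",
    },
)

Once this is in place there's nothing to maintain week to week, which is the "least operational overhead" angle.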
C. AWS is obsessed with predictive scaling on these recurring-workload questions lately. Feels like that's what they want here.
C imo. Predictive scaling can forecast based on past CPU usage and handles the pre-launch automatically, so nobody needs to fiddle with configs every week. B is easy if the schedule never changes, but C fits the "least overhead" requirement better, I think. Anybody see a risk with predictive here?
Feels like B here. Scheduled scaling seems simpler if the job timing is always known, and it just adjusts capacity automatically each week. I think predictive scaling (C) is more complex than needed unless the schedule changes often. Pretty sure B would work fine for basic weekly jobs, but happy to hear other thoughts.
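Just to make the comparison concrete, option B would be something like the scheduled action below (the cron expression, capacity, and names are illustrative guesses, not from the question). Note the desired capacity is still a number a human has to pick and keep current, which is exactly the overhead the question wants to eliminate.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="batch-jobs-asg",        # hypothetical ASG name
    ScheduledActionName="pre-warm-weekly-batch",  # hypothetical action name
    # Fire 30 minutes before a job that, say, starts Sundays at 02:00 UTC
    Recurrence="30 1 * * 0",
    DesiredCapacity=10,  # still chosen and updated by hand as load changes
)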
It's C, but is there a clear requirement for the jobs always running at the same time every week? If the schedule changed often, maybe B would make more sense. Predictive scaling works best with consistent, repetitive workloads.
Not B here, it's C. Predictive scaling handles recurring batch jobs automatically and figures out capacity based on past trends, so you don't have to tweak schedules or analyze usage. B's a trap if load varies much, I think.
Seriously, AWS loves to throw these predictive-vs-scheduled scaling situations at you. I think C is right since predictive scaling handles recurring patterns and automatically figures out timing, but I'm not totally ruling out B for super-rigid jobs. Agree/disagree?
C makes sense since predictive scaling can automate the pre-provisioning without manual work. It uses ML to figure out trends, so you don't have to update schedules or analyze metrics. Pretty sure that's what AWS recommends for patterned workloads, but let me know if I'm missing something.
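If you're nervous about letting the ML loose right away, you can run the policy in ForecastOnly mode first and inspect what it would do. A minimal sketch, reusing the hypothetical names from above:

import datetime
import boto3

autoscaling = boto3.client("autoscaling")

now = datetime.datetime.now(datetime.timezone.utc)
forecast = autoscaling.get_predictive_scaling_forecast(
    AutoScalingGroupName="batch-jobs-asg",      # hypothetical
    PolicyName="weekly-batch-predictive",       # hypothetical
    StartTime=now,
    EndTime=now + datetime.timedelta(days=2),   # forecasts cover up to 2 days
)
# Print the predicted capacity at each forecast timestamp
for ts, capacity in zip(forecast["CapacityForecast"]["Timestamps"],
                        forecast["CapacityForecast"]["Values"]):
    print(ts, capacity)

Flip Mode to ForecastAndScale once the forecast looks right.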
C, but only if the job timing sometimes shifts. If it's always the same time, B could work too.