Q: 19
[Modeling]
A Machine Learning Specialist is building a model to predict future employment rates based on a
wide range of economic factors. While exploring the data, the Specialist notices that the magnitudes of
the input features vary greatly. The Specialist does not want variables with a larger magnitude to
dominate the model.
What should the Specialist do to prepare the data for model training?
Options
Discussion
My pick: C. Normalization is how you prevent features with large values from affecting the model more than others.
C, not A. Normalization handles the magnitude issue best here; I don't think quantile binning (A) does what the question asks.
I get your point about normalization, but I'd stick with A here.
C, but this flips if the model isn't sensitive to feature scale (like tree-based models); in that case normalization isn't strictly required.
C
C, normalization is the classic way to handle varying magnitudes in features so one variable doesn't overpower others. Having everything on the same scale makes a big difference. Anyone see a reason to pick B or D here?
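To make the point concrete, here's a minimal sketch of min-max normalization on made-up economic features (the values and feature names are hypothetical, not from the question):

```python
import numpy as np

# Hypothetical feature matrix: two columns with very different magnitudes,
# e.g. GDP in billions vs. an interest rate in percent.
X = np.array([
    [21000.0, 2.5],
    [14000.0, 1.8],
    [ 5000.0, 4.1],
    [ 3800.0, 3.0],
])

# Min-max normalization: rescale each column to [0, 1] so no single
# feature dominates the model purely because of its magnitude.
X_min = X.min(axis=0)
X_max = X.max(axis=0)
X_norm = (X - X_min) / (X_max - X_min)

print(X_norm.min(axis=0))  # each column now bottoms out at 0
print(X_norm.max(axis=0))  # and tops out at 1
```

In practice you'd typically use something like scikit-learn's `MinMaxScaler`, fit on the training set only, so the same scaling is applied at inference time.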
B or D. Honestly, practice exams and reviewing the official AWS documentation on feature engineering might help clear this up.