Q: 2
You work for a bank. You have been asked to develop an ML model that will support loan application decisions. You need to determine which Vertex AI services to include in the workflow. You want to track the model's training parameters and the metrics per training epoch. You plan to compare the performance of each version of the model to determine the best model based on your chosen metrics. Which Vertex AI services should you use?
Options
Discussion
Option C. I had something like this in a mock exam, and C covered the tracking and comparison parts.
Option C is right here. ML Metadata logs the artifacts, Experiments helps with model version comparisons, and TensorBoard shows metrics per epoch. Pretty sure this trio covers exactly what's needed for tracking and evaluation.
Yeah C is right. You want to track lineage and compare models, so ML Metadata and Experiments handle the tracking, while TensorBoard visualizes the metrics per epoch. Vizier (like in B) would only be needed if they explicitly wanted hyperparam tuning, which isn't mentioned here. Pretty sure about this; correct me if you see it differently.
Maybe C; it fits because Metadata handles tracking, Experiments compares runs, and TensorBoard gives per-epoch metrics. B's Vizier is for tuning.
B, saw something similar in an exam report.
Wouldn't Vizier (in B) only be needed if we had to automate tuning? This question seems focused more on tracking and comparing, not actual hyperparameter search.
Probably C. Vizier (in B) is for hyperparameter tuning, but the question's focus is more about tracking parameters, metrics per epoch, and comparing runs, not active tuning. ML Metadata, Experiments, and TensorBoard together cover all those logging and comparison needs pretty directly. Open to other takes if anyone thinks Vizier fits better.
Why not B? Vizier would help with finding the best model, or does the question care more about logging runs than tuning?
Best tools for tracking training parameters and comparing model metrics are in C.
C tbh, pretty sure I saw a similar question in some exam dumps and C matches what they want for tracking and comparing model metrics. Makes sense if they're not asking about hyperparameter tuning.
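For anyone wondering what "Experiments + ML Metadata + TensorBoard" looks like in practice, here's a minimal sketch using the `google-cloud-aiplatform` SDK. The project, region, experiment, and run names are placeholder assumptions; runs logged this way are backed by Vertex ML Metadata, and per-step metrics can be visualized in Vertex AI TensorBoard and compared across runs in the Experiments UI.

```python
def log_training_run(params, epoch_metrics):
    """Sketch: log one model version as a Vertex AI Experiments run.

    Requires the google-cloud-aiplatform package and GCP credentials;
    all names below are placeholder assumptions.
    """
    from google.cloud import aiplatform  # local import: needs google-cloud-aiplatform installed

    aiplatform.init(
        project="my-project",        # assumption: your GCP project ID
        location="us-central1",      # assumption: your region
        experiment="loan-approval",  # runs in one experiment are comparable
    )
    aiplatform.start_run("model-v1")  # one run per model version
    aiplatform.log_params(params)     # training parameters for this run
    for epoch, metrics in enumerate(epoch_metrics):
        # per-epoch time-series metrics (shown in TensorBoard)
        aiplatform.log_time_series_metrics(metrics, step=epoch)
    aiplatform.end_run()


if __name__ == "__main__":
    log_training_run(
        params={"learning_rate": 0.01, "epochs": 2},
        epoch_metrics=[{"auc": 0.71}, {"auc": 0.78}],
    )
```

Comparing model versions is then just a matter of starting a new run (e.g. "model-v2") in the same experiment and sorting runs by your chosen metric, which is exactly the workflow the question describes; no Vizier needed.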