Q: 2
You work for a bank. You have been asked to develop an ML model that will support loan application
decisions. You need to determine which Vertex AI services to include in the workflow. You want to
track the model's training parameters and the metrics per training epoch. You plan to compare the
performance of each version of the model to determine the best model based on your chosen
metrics. Which Vertex AI services should you use?
Options
Discussion
Option C is right here. Vertex ML Metadata logs the artifacts and lineage, Vertex AI Experiments supports model version comparisons, and Vertex AI TensorBoard shows the metrics per epoch. Pretty sure this trio covers exactly what's needed for tracking and evaluation.
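To make the comment above concrete, here is a minimal sketch of how those pieces fit together with the Vertex AI SDK (`google-cloud-aiplatform`): `log_params` records the training parameters, `log_time_series_metrics` records per-epoch metrics (which back the TensorBoard view), and runs within an experiment can then be compared. The project/experiment/run names are placeholders, `fake_training` is a stand-in for a real training loop, and running `run_experiment` requires GCP credentials.

```python
def run_experiment(project, location, experiment_name, run_name, params, per_epoch_metrics):
    """Hypothetical wrapper; requires google-cloud-aiplatform and GCP credentials."""
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=location, experiment=experiment_name)
    aiplatform.start_run(run=run_name)
    aiplatform.log_params(params)  # training parameters for this run
    for epoch, metrics in enumerate(per_epoch_metrics, start=1):
        # per-epoch metrics; these feed the TensorBoard time-series view
        aiplatform.log_time_series_metrics(metrics, step=epoch)
    aiplatform.end_run()


def fake_training(epochs):
    """Stand-in for a real training loop: dummy loss/accuracy per epoch."""
    return [
        {"loss": 1.0 / e, "accuracy": min(0.5 + 0.1 * e, 0.99)}
        for e in range(1, epochs + 1)
    ]


if __name__ == "__main__":
    # Placeholder names; replace with your own project/experiment/run.
    run_experiment(
        project="my-project",
        location="us-central1",
        experiment_name="loan-approval",
        run_name="run-001",
        params={"learning_rate": 0.01, "batch_size": 64},
        per_epoch_metrics=fake_training(5),
    )
```

Comparing runs is then a matter of logging each model version as a separate run under the same experiment and inspecting them side by side in the Experiments UI.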
B or D here. I figure Vertex AI Pipelines handles end-to-end workflow tracking, and Experiments (in B) or TensorBoard (in D) lets you compare model versions and metrics. Not totally sure ML Metadata is really required for tracking training parameters, though. Anyone disagree?
I'm not fully sure here, but I think it's C. These sound like the right tools for tracking experiments and training details. Can someone confirm?