Q: 15
You developed a custom model by using Vertex AI to predict your application's user churn rate. You are using Vertex AI Model Monitoring for skew detection. The training data stored in BigQuery contains two sets of features: demographic and behavioral. You later discover that two separate models, each trained on one feature set, perform better than the original model.
You need to configure a new model monitoring pipeline that splits traffic between the two models. You want to use the same prediction-sampling-rate and monitoring-frequency for each model. You also want to minimize management effort. What should you do?
Options
Discussion
Makes sense to go with D, since deploying both models to the same endpoint cuts down on management overhead. Using separate training tables fits how the two models need to be trained, and the monitoring-config-from-file parameter covers the per-model settings, including the deployed model IDs. Pretty sure that's what Google expects here. Agree?
It's D, since both models can share an endpoint and a single monitoring job if you use the right monitoring-config settings. This fits what I've seen in exam reports and official doc reviews. Splitting the data into separate training tables is key for training each model properly. The official documentation and Google's own sample pipelines are worth checking for details, but this matches the typical best-practice approach.
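To make the "shared endpoint, single monitoring job" idea concrete, here is a hedged sketch using the standard gcloud ai commands. The project, region, endpoint ID, model IDs, sampling rate, and frequency below are all placeholder assumptions, not values from the question, and the exact flag behavior is worth verifying against the gcloud reference:

```shell
# Placeholders: substitute your own project, region, endpoint, and model IDs.
PROJECT=my-project
REGION=us-central1
ENDPOINT_ID=1234567890          # the single shared endpoint

# Deploy the demographic model first (all traffic initially), then the
# behavioral model with a 50/50 split. In gcloud's traffic-split syntax,
# the key "0" refers to the model being deployed in that command;
# DEPLOYED_DEMO_ID is the deployed-model ID returned by the first deploy.
gcloud ai endpoints deploy-model $ENDPOINT_ID \
  --project=$PROJECT --region=$REGION \
  --model=DEMOGRAPHIC_MODEL_ID --display-name=churn-demographic \
  --traffic-split=0=100

gcloud ai endpoints deploy-model $ENDPOINT_ID \
  --project=$PROJECT --region=$REGION \
  --model=BEHAVIORAL_MODEL_ID --display-name=churn-behavioral \
  --traffic-split=DEPLOYED_DEMO_ID=50,0=50

# One monitoring job covers every model deployed on the endpoint, so both
# models share the same sampling rate and monitoring frequency by
# construction. The config file can still map each deployed model ID to
# its own BigQuery training table (bq://...) for skew detection.
gcloud ai model-monitoring-jobs create \
  --project=$PROJECT --region=$REGION \
  --endpoint=$ENDPOINT_ID \
  --display-name=churn-monitoring \
  --prediction-sampling-rate=0.8 \
  --monitoring-frequency=24 \
  --monitoring-config-from-file=monitoring_config.yaml
```

The last command is the crux of option D: monitoring is configured once per endpoint, so prediction-sampling-rate and monitoring-frequency apply to both deployed models automatically, while the per-model training tables go in the config file.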
If the "minimize management effort" requirement weren't there, would C make more sense than D here?