Q: 5
A Generative AI Engineer has already trained an LLM on Databricks and it is now ready to be deployed.
Which of the following steps correctly outlines the easiest process for deploying a model on Databricks?
Options
Discussion
B I've seen similar steps outlined in official Databricks guides and some practice tests. Logging with MLflow and registering to Unity Catalog is the streamlined method they push for production serving. Someone correct me if recent exam updates changed this.
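For anyone who hasn't seen it, the log-and-register half of that flow looks roughly like this. A minimal sketch only: the catalog.schema.model name and the tiny pyfunc wrapper are placeholders, not wording from the question.

```python
# Rough sketch of option B's first two steps: log the trained model with MLflow,
# then register it to Unity Catalog. All names below are placeholders.
import mlflow

# Point the MLflow model registry at Unity Catalog (not the legacy workspace registry).
mlflow.set_registry_uri("databricks-uc")

class MyLLM(mlflow.pyfunc.PythonModel):
    """Placeholder pyfunc wrapper standing in for the trained LLM."""
    def predict(self, context, model_input):
        return ["stub response"]

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="llm",
        python_model=MyLLM(),
        # A three-level name (catalog.schema.model) registers it in Unity Catalog.
        registered_model_name="main.genai.my_llm",
    )
```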
Had something like this in a mock. B is the way: MLflow logging, straight to Unity Catalog, then start the endpoint. It's the native flow they want for Databricks deployments. Anybody disagree?
Nah, I don't think A is the right approach here. B is what the Databricks workflow expects, since logging with MLflow and then registering to Unity Catalog makes serving way more streamlined. Uploading a pickle (A) skips key integration steps and can cause headaches later. I've seen a lot of people get tripped up by that trap option.
B but only if the model was logged with MLflow in the first place. If you didn't use MLflow tracking during training, A could technically work, though less integrated. Seen some confusion on this in practice exams.
A seems like the shortest path since uploading a pickle to Unity Catalog and then registering feels direct, with no extra MLflow step. Pretty sure that's what the Databricks docs say too, but tell me if I'm missing something.
Likely A on this one since uploading to Unity Catalog then registering feels simple; not sure why everyone skips it.
I always see folks pick B, but I thought A could work too since you're uploading to Unity Catalog and starting the endpoint. Maybe I'm missing something about how MLflow logs vs. pickle objects?
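To the question above about MLflow logs vs. pickle objects: even a model that was only pickled can still be wrapped in a pyfunc after the fact and pushed through the MLflow / Unity Catalog flow, which is part of why B stays the "easiest" answer. A minimal sketch, assuming a made-up pickle path and model name:

```python
# Sketch of wrapping an already-pickled model so it can still go through the
# MLflow -> Unity Catalog flow. Path and names are placeholders.
import pickle
import mlflow

mlflow.set_registry_uri("databricks-uc")

class PickledLLM(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # The pickle travels with the logged model as an artifact and is
        # reloaded here when the serving endpoint starts up.
        with open(context.artifacts["model_pkl"], "rb") as f:
            self.model = pickle.load(f)

    def predict(self, context, model_input):
        return self.model.predict(model_input)

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="llm",
        python_model=PickledLLM(),
        artifacts={"model_pkl": "/dbfs/tmp/my_llm.pkl"},  # placeholder path
        registered_model_name="main.genai.my_llm",
    )
```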
Why not C? Docker is more work than the managed MLflow/Unity Catalog flow Databricks pushes, so B fits best here.
It's B. Flask (D) looks easy but isn't native for Databricks serving, which wants MLflow and Unity Catalog.
Probably B here. You log with MLflow, register to Unity Catalog, then use the serving endpoint. A and D add extra manual steps, and C is more for custom containers, not the Databricks built-ins. Pretty sure B is what the exam wants.
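And the last step of that flow, starting the serving endpoint, can be done from the Serving UI or programmatically. A rough sketch with the Databricks SDK: endpoint name, model name, and version are placeholders, and the class names come from the databricks-sdk serving module, so double-check them against your SDK version.

```python
# Sketch of option B's final step: create a Model Serving endpoint for the
# version registered in Unity Catalog. All names are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import EndpointCoreConfigInput, ServedEntityInput

w = WorkspaceClient()  # picks up workspace auth from the notebook or environment

w.serving_endpoints.create(
    name="my-llm-endpoint",
    config=EndpointCoreConfigInput(
        served_entities=[
            ServedEntityInput(
                entity_name="main.genai.my_llm",  # the Unity Catalog model
                entity_version="1",               # version assigned at registration
                workload_size="Small",
                scale_to_zero_enabled=True,
            )
        ]
    ),
)
```

Once it's up, the endpoint appears under Serving in the workspace and can be queried over REST.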