Q: 9
You are working as a generative AI engineer and have developed a custom large language model (LLM)
optimized for a specific use case. You are tasked with deploying this model on the IBM Watsonx
platform. Which of the following steps is most essential to ensure the successful deployment of your
custom model, given that the model uses a third-party transformer architecture?
Options
Discussion
Option A. Not B: scaling only matters after you actually get the model running. A is the core step, especially with third-party transformers.
Probably B
A tbh, but it's mostly from what I've seen in IBM docs and official guides. Always containerize if you're deploying something custom, especially with third-party libs. If anyone isn't sure, maybe check the Watsonx deployment labs or run through a practice exam for confirmation.
Totally agree with A. Containerizing is key since Watsonx expects you to bundle all your dependencies (especially with non-native transformers). If you just upload the model without this, stuff will break from missing packages. Guess you could optimize later with B, but for initial deployment, A is essential imo. Anyone disagree?
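To make the "bundle your dependencies" point concrete, here's a minimal sketch of what containerizing a custom model with a third-party transformer stack looks like. Everything here (the package names, base image, file layout, and `serve.py` entrypoint) is an assumption for illustration, not anything Watsonx-specific — check the actual Watsonx BYOM docs for the required image spec.

```python
from pathlib import Path

# Hypothetical dependency manifest for a custom LLM built on a
# third-party transformer architecture (package names are assumed).
REQUIREMENTS = [
    "torch",         # assumed backend for the third-party architecture
    "transformers",  # the non-native transformer library itself
]

# Minimal illustrative Dockerfile: install pinned deps, copy model
# artifacts and a serving script into the image (layout is assumed).
DOCKERFILE = """\
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model/ ./model/
COPY serve.py .
CMD ["python", "serve.py"]
"""

def write_bundle(out_dir: str) -> None:
    """Write the requirements manifest and Dockerfile for the image build."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "requirements.txt").write_text("\n".join(REQUIREMENTS) + "\n")
    (out / "Dockerfile").write_text(DOCKERFILE)

if __name__ == "__main__":
    write_bundle("bundle")
```

The point is just that the image carries every library the model needs, so the platform never has to guess at missing packages at deploy time.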
A for sure. Containerizing wraps up all those dependencies so Watsonx can actually run your custom LLM, even with third-party stuff. The other options don't make sense for the deployment step itself. Pretty confident here.
It's A, not B. Scaling (B) is nice after deployment, but containerizing is required first to even get third-party transformers working. Agree?
A, had something like this in a mock. Always containerize if you're bringing in external dependencies, it's Watsonx best practice. Confident that's what they're looking for but correct me if anyone saw different.
Don't think it's C. Watsonx supports BYOM as long as dependencies are bundled, so A is the way. The others are more about scaling or data prep, not strictly needed for deployment.
Not C, A