Definitely learning rate here, so C. That hyperparameter largely decides whether prompt tuning actually helps the model learn to generate sensible, context-aware responses. The lab exercises and the IBM docs both focus on tuning the learning rate for this reason. Not 100 percent sure, but that's how I've seen it explained in the official guides. Anyone see something different on real exams?
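For anyone who wants to see where that knob actually lives, here's a minimal sketch using the Hugging Face PEFT library (my own example, not from the guide or the Watsonx UI; the model name and values are placeholders I picked). Only the soft-prompt embeddings are trainable, so the learning rate is the main thing deciding whether tuning converges to useful outputs.

```python
# Minimal prompt tuning sketch with PEFT; model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import PromptTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")  # assumed example model
peft_cfg = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,          # size of the trainable soft prompt
)
model = get_peft_model(base, peft_cfg)

# Only the soft-prompt embeddings train, so the learning rate dominates
# whether the tuned prompt produces sensible, context-aware outputs.
args = TrainingArguments(
    output_dir="prompt-tune-out",
    learning_rate=3e-2,             # typically much higher than full fine-tuning
    num_train_epochs=5,
    per_device_train_batch_size=8,
)
```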
I don't think it's C, since quantization doesn't guarantee zero accuracy loss and post-quantization fine-tuning is sometimes still needed. D captures the real trade-off: you shrink the model but risk accuracy hits if you push compression too far. I saw a similar question in practice sets; C feels like a trap.
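Quick illustration of the trade-off (my own toy example, not from the exam material): dynamic int8 quantization in PyTorch shrinks the Linear weights roughly 4x, but nothing in the API promises the outputs stay identical, which is why you still have to re-check accuracy afterwards.

```python
import io
import torch
from torch import nn

# Toy model standing in for a much larger network.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantization: weights stored as int8, activations quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m: nn.Module) -> float:
    """Approximate serialized size of a model's state dict in MB."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.tell() / 1e6

print(f"fp32: {size_mb(model):.2f} MB")
print(f"int8: {size_mb(quantized):.2f} MB")  # smaller, but accuracy must be re-evaluated
```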
Does the question specify if there's access to a task-specific dataset? If not, then B could make sense for resource constraints, but if domain adaptation matters most then the answer would flip to D.
The official study guide and the Watsonx docs both point to B and C. QAT (B) is about preserving accuracy by training with quantization in the loop, and quantization itself (C) is mostly about saving memory and speeding up inference. I've seen similar phrasing on older IBM practice sets, so I'm pretty sure these are the best two, but I'm open to correction if someone has found an edge case.
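To make the B vs. C split concrete, here's a rough eager-mode QAT sketch in PyTorch (my own toy example, not from the study guide): the fake-quant observers inserted by prepare_qat let the weights adapt to int8 rounding during training, which is exactly why QAT tends to preserve accuracy better than quantizing after the fact.

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # marks where fp32 -> int8 happens
        self.fc1 = nn.Linear(128, 64)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(64, 10)
        self.dequant = torch.quantization.DeQuantStub()  # marks where int8 -> fp32 happens

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)

model = TinyNet()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
prepared = torch.quantization.prepare_qat(model.train())

# ... the normal training loop runs here with fake-quant ops in place,
# so the weights learn to tolerate int8 rounding (the accuracy benefit of QAT) ...

int8_model = torch.quantization.convert(prepared.eval())  # final quantized model for inference
```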
Yeah, B and C make the most sense here. QAT (B) keeps accuracy closer to the original model, and C is all about better memory usage and speed from quantization. Not 100% sure about edge cases, but these fit best.
Option A here. The trap is D, but one detailed prompt can't replace real diversity across tasks and complexity levels. Diverse training data is key for solid generalization, at least from what I've seen in practice. Agree?
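Just to show what I mean by diversity (completely made-up examples on my part): a prompt-tuning training set that spans different task types and difficulty levels, instead of one elaborate prompt about a single scenario.

```python
# Illustrative only: a mix of tasks and complexity levels for prompt tuning.
train_examples = [
    {"input": "Summarize: The quarterly report shows revenue grew 4% while costs fell.", "output": "Revenue up 4%, costs down."},
    {"input": "Classify sentiment: 'The device stopped working after a week.'", "output": "negative"},
    {"input": "Translate to French: 'Where is the train station?'", "output": "Où est la gare ?"},
    {"input": "Extract the dates: 'The contract runs from 2021-01-01 to 2023-12-31.'", "output": "2021-01-01; 2023-12-31"},
]
```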
Does anyone actually see a benefit in picking D? I get the idea of one detailed prompt, but in practice, variety trumps depth when it comes to generalizing across tasks. Using just one scenario seems way too limited for prompt tuning.
Totally agree with A. Containerizing is key since Watsonx expects you to bundle all your dependencies (especially with non-native transformers). If you just upload the model without this, stuff will break from missing packages. Guess you could optimize later with B, but for initial deployment, A is essential imo. Anyone disagree?
Pretty standard prompt tuning logic here: C and D line up with what the official guide and the IBM learning docs recommend. If you see high variance, try cleaning up your prompts and adding more training data. Anyone else seen this on practice exams?
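For what it's worth, here's my own rough take on what "clean up your prompts" can mean in code (not from the docs): normalize whitespace and drop duplicate examples before adding more data.

```python
def clean_prompts(prompts: list[str]) -> list[str]:
    """Normalize whitespace and remove duplicate prompts (case-insensitive)."""
    seen, cleaned = set(), []
    for p in prompts:
        norm = " ".join(p.split())   # collapse runs of whitespace
        key = norm.lower()
        if norm and key not in seen:
            seen.add(key)
            cleaned.append(norm)
    return cleaned

print(clean_prompts(["Summarize  the report.", "summarize the report.", "Classify the ticket."]))
# -> ['Summarize the report.', 'Classify the ticket.']
```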