Definitely learning rate here, so C. That parameter pretty much decides whether prompt tuning actually helps the model learn to generate sensible, context-aware responses. Lab work and the IBM docs both focus on tuning the learning rate for this reason. Not 100 percent, but that's how I've seen it explained in the official guides. Anyone see something different on real exams?
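To see why the learning rate is the knob that makes or breaks tuning, here's a toy gradient-descent sketch (not watsonx-specific, just the general idea): a sensible rate converges, an oversized one diverges.

```python
# Toy illustration of why learning rate matters for any gradient-based
# tuning, including prompt tuning. Generic sketch, not IBM code.

def descend(lr, steps=50, x=5.0):
    """Minimize f(x) = x^2 with plain gradient descent (gradient = 2x)."""
    for _ in range(steps):
        x = x - lr * 2 * x
    return x

good = descend(lr=0.1)     # shrinks toward the minimum at 0
too_big = descend(lr=1.1)  # overshoots and diverges

print(abs(good) < 1e-3)    # True: sensible learning rate converges
print(abs(too_big) > 5)    # True: oversized learning rate blows up
```

Same intuition scales up: if the rate is too low the soft prompt barely moves, too high and training never settles.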
For me, D for this. Had something like this in a mock where high recall with low precision meant the summaries included lots of unnecessary info, not missing key points. You'd want to focus on boosting precision so the output is more relevant. Pretty sure that's how IBM frames it too, but open to other thoughts if someone sees it differently.
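Quick numeric sketch of the high-recall / low-precision case described above, with made-up sentence IDs ("relevant" = key points the summary should contain, "selected" = what the model actually included):

```python
# Toy precision/recall computation for the summary scenario (hypothetical data).
relevant = {"s1", "s2"}                                      # key points
selected = {"s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"}  # model output

tp = len(relevant & selected)      # true positives
precision = tp / len(selected)     # how much of the output is actually relevant
recall = tp / len(relevant)        # how many key points were covered

print(precision)  # 0.25 -> lots of unnecessary info
print(recall)     # 1.0  -> no key points missed
```

That 0.25 vs 1.0 split is exactly the pattern: nothing missing, plenty of filler, so precision is what needs work.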
Don't think it's C, since quantization doesn't guarantee zero accuracy loss and sometimes post-quant fine-tuning is still needed. D captures the real trade-off: you shrink the model but risk accuracy hits if you go too far. Saw a similar question in practice sets; C feels like a trap.
Does the question specify whether there's access to a task-specific dataset? If not, then B could make sense for resource constraints, but if domain adaptation matters most, then the answer would flip to D.
The official study guide and the Watsonx docs both point to B and C. QAT (B) helps preserve accuracy, and quantization (C) is mostly about saving memory and speed. I've seen similar phrasing on older IBM practice sets; pretty sure these are the best two, but open if someone found an edge case.
B tbh, C is also right. B nails how quantization-aware training (QAT) reduces accuracy loss, which you don't get with just post-training quantization. C covers resource efficiency and is what actually happens in most setups (smaller models, similar accuracy). A feels like a trap since 8-bit quantization doesn't always wreck performance. Unless there's some super-specific edge case, B and C are the best picks.
Yeah, B and C make the most sense here. QAT (B) helps keep accuracy closer to the original, and C is all about getting better memory usage and speed with quantization. Not 100% on edge cases, but these fit best.
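For anyone wanting the intuition behind B and C, here's a minimal 8-bit quantization round-trip in plain Python (no framework, toy weights): memory drops 4x (1 byte per weight vs fp32's 4), and the reconstruction error is small but non-zero, which is exactly why QAT or post-quant fine-tuning can still matter.

```python
# Minimal sketch of symmetric 8-bit quantization to show the trade-off:
# big memory savings, small (not zero) accuracy loss. Toy values only.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127  # map float range onto int8
    q = [round(w / scale) for w in weights]     # each value now fits in 1 byte
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.58, 0.91, -0.33, 0.07]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Error is bounded by half a quantization step: small, but not zero.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err < 0.01)  # True for this toy example
```

The non-zero `max_err` is the trap in the "no accuracy loss" answer choices: quantization is lossy by construction, QAT just trains the model to tolerate it.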
Option A here. The trap is D, but one detailed prompt can’t replace real diversity across tasks and complexity. Diversity in training data is always key for solid generalization, at least from what I’ve seen in practice. Agree?
Pretty sure it's A. Practice tests and the Watsonx docs both push for diverse prompts spanning multiple domains to get better generalization, not just covering a single use case or repeating patterns. Always saw this idea pop up in the official prep material too. Someone correct me if that's off.
Does anyone actually see a benefit in picking D? I get the idea of one detailed prompt, but in practice, variety trumps depth when it comes to generalizing across tasks. Using just one scenario seems way too limited for prompt tuning.
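To make the A-vs-D contrast concrete, here's a sketch of what a diverse tuning set looks like next to a single detailed scenario (tasks and texts are made up for illustration):

```python
# Option A: examples spanning multiple task types (hypothetical data).
diverse_examples = [
    {"task": "summarize", "input": "The quarterly report shows...", "target": "Revenue rose."},
    {"task": "classify",  "input": "Great product, fast shipping!", "target": "positive"},
    {"task": "qa",        "input": "When was IBM founded?",         "target": "1911"},
    {"task": "translate", "input": "Bonjour le monde",              "target": "Hello world"},
]

# Option D: one detailed scenario, however well-written, covers one task.
single_scenario = [
    {"task": "summarize", "input": "The quarterly report shows...", "target": "Revenue rose."},
]

print(len({ex["task"] for ex in diverse_examples}))  # 4 distinct task types
print(len({ex["task"] for ex in single_scenario}))   # 1
```

Four task types vs one: generalization comes from coverage of the task distribution, not from depth on a single slice.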
Totally agree with A. Containerizing is key since Watsonx expects you to bundle all your dependencies (especially with non-native transformers). If you just upload the model without this, stuff will break from missing packages. Guess you could optimize later with B, but for initial deployment, A is essential imo. Anyone disagree?
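On the "missing packages" failure mode: a cheap sanity check before building the image is to verify every dependency is importable in the target environment. The package list below is a stand-in, not an actual watsonx requirement set:

```python
# Hypothetical pre-deployment check: confirm each dependency the model needs
# can be resolved before packaging, since missing packages are what break a
# bare model upload. "required" here uses stdlib stand-ins for real deps.
import importlib.util

required = ["json", "gzip", "hashlib"]  # replace with the model's actual deps
missing = [p for p in required if importlib.util.find_spec(p) is None]

print(missing)  # [] -> nothing missing, safe to build the container image
```

If `missing` is non-empty, that's your cue to pin those packages in the container's requirements before deploying.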
Pretty standard prompt tuning logic here: C and D line up with what the official guide and IBM learning docs recommend. If you see high variance, try cleaning up your prompts and using more data. Anyone else see this in practice exams?