Q: 10
When analyzing the results of a prompt tuning experiment, which two of the following actions are most
appropriate if you observe a consistently high variance in model predictions across different prompt
templates? (Select two)
Options
Discussion
I figured B and C. Upping batch size (B) usually helps stabilize results in my other ML work, and tweaking prompt structure (C) just makes sense for consistency. But not totally sure if B really hits the cause here, open to argument.
Had something like this in a mock. C and D fit best: standardizing prompt structure (C) helps make outputs consistent, and more samples (D) reduce randomness across runs. Pretty sure that's how IBM wants us to think here, but open to other views!
Option B (increase batch size) and C. I feel like tweaking the batch size helps reduce variance sometimes, right?
Option C and D. I've seen similar asked in IBM exam guides and official labs, so that's my best guess.
It's C and D, not B. Batch size changes are about gradient stability, not output variance from prompt structure. Seen a similar trap in other practice sets.
Seen similar advice in the official guide and practice questions; looks like C and D fit best here.
Not B. C and D are correct: standardizing prompt templates (C) cuts down unpredictable wording issues, while more training samples (D) smooth out randomness between runs. Pretty sure about this; it matches what I saw in similar exam reports.
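To make the C/D logic concrete, here's a toy simulation (not IBM's setup; the biases, noise level, and sample counts are all made up) showing why averaging over more samples shrinks run-to-run variance while batch size wouldn't address template-to-template spread:

```python
import random
import statistics

def simulate_scores(template_bias, n_samples, seed):
    """Toy stand-in for model prediction scores under one prompt template.

    Each template contributes a systematic bias (wording effect) plus
    per-sample noise; both numbers here are hypothetical."""
    rng = random.Random(seed)
    return [template_bias + rng.gauss(0, 0.1) for _ in range(n_samples)]

def mean_score(template_bias, n_samples, seed):
    return statistics.mean(simulate_scores(template_bias, n_samples, seed))

# Spread ACROSS differently-worded templates (what option C targets):
template_biases = [0.6, 0.75, 0.5, 0.8]  # hypothetical wording effects
across_templates = statistics.pstdev(
    mean_score(b, 50, seed=0) for b in template_biases
)

# Run-to-run spread for ONE fixed template, few vs. many samples
# (what option D targets):
few = statistics.pstdev(mean_score(0.6, 10, seed=s) for s in range(20))
many = statistics.pstdev(mean_score(0.6, 200, seed=s) for s in range(20))
assert many < few  # more samples smooth out randomness between runs
```

Note the two variance sources are separate: the template biases drive `across_templates` no matter how many samples you draw, which is why standardizing templates (C) and adding samples (D) are complementary, and why batch size (B) touches neither.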
Pretty standard prompt tuning logic here: C and D line up with what the official guide and IBM learning docs recommend. If you see high variance, try cleaning up your prompts and use more data. Anyone else see this in practice exams?
C/D? Official guide and IBM docs mention template tuning and more data for this scenario.
C and D imo