Q: 10
When analyzing the results of a prompt tuning experiment, which two of the following actions are most
appropriate if you observe a consistently high variance in model predictions across different prompt
templates? (Select two)
Options
Discussion
Had something like this in a mock. C and D fit best: standardizing the prompt structure (C) helps make outputs consistent, and more samples (D) reduce randomness across runs. Pretty sure that's how IBM wants us to think here, but open to other views!
Option B (increase batch size) and C. I feel like tweaking the batch size helps reduce variance sometimes, right?
Seen similar advice in the official guide and practice questions; C and D fit best here.
Not B. C and D are correct: standardizing prompt templates (C) cuts down unpredictable wording issues, while more training samples (D) smooth out randomness between runs. Pretty sure about this; it matches what I saw in similar exam reports.
Pretty standard prompt tuning logic here: C and D line up with what the official guide and IBM learning docs recommend. If you see high variance, try cleaning up your prompts and use more data. Rough sketch below of what I mean. Anyone else see this in practice exams?
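To make the "high variance across templates" idea concrete, here's a minimal, self-contained sketch. Everything in it is made up for illustration: score_with_model() just simulates a model call (it is not an IBM/watsonx API), and the templates, examples, and noise levels are hypothetical. The point is only how you'd measure run-to-run spread per template and see why C (a standardized template) and D (more samples) both tighten it.

```python
# Pure simulation, not a real inference client: score_with_model() fakes a model call
# so the run-to-run variance effect from the question is visible and reproducible.
import random
import statistics

EXAMPLES = ["Great battery life.", "Screen cracked in a week.", "Does the job."]

TEMPLATES = [
    "Sentiment of: {x} ???",                                   # loosely worded
    "{x} -- good or bad, you decide",                          # loosely worded
    "Review: {x}\nAnswer with one word (Positive/Negative):",  # standardized (the option C idea)
]

def score_with_model(prompt: str, seed: int) -> float:
    """Stand-in for a real model call; returns a confidence-like score in [0, 1].
    Loosely worded templates get extra simulated noise."""
    rng = random.Random((hash(prompt) & 0xFFFFFFFF) + seed)
    sigma = 0.05 if prompt.startswith("Review:") else 0.20
    return max(0.0, min(1.0, rng.gauss(0.7, sigma)))

def run_mean(template: str, n_samples: int, run_seed: int) -> float:
    """Mean score for one evaluation run over n_samples prompts."""
    scores = [
        score_with_model(template.format(x=EXAMPLES[i % len(EXAMPLES)]),
                         seed=run_seed * 100_000 + i)
        for i in range(n_samples)
    ]
    return statistics.mean(scores)

def run_to_run_spread(template: str, n_samples: int, n_runs: int = 20) -> float:
    """Std dev of the per-run means -- the 'variance across runs' the question describes."""
    means = [run_mean(template, n_samples, run_seed=r) for r in range(n_runs)]
    return statistics.pstdev(means)

for template in TEMPLATES:
    small = run_to_run_spread(template, n_samples=10)    # few samples per run
    large = run_to_run_spread(template, n_samples=200)   # option D: more samples per run
    print(f"{template[:35]!r:40}  spread@10={small:.4f}  spread@200={large:.4f}")
```

In a real experiment you'd swap the fake scorer for your actual model client and compare the spread columns: the standardized template should come out tighter (option C), and the spread should shrink further as the sample count grows (option D).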
C/D? Official guide and IBM docs mention template tuning and more data for this scenario.
C and D imo