Q: 3
You are reviewing the results of a prompt-tuning experiment where the goal was to improve an LLM's
ability to summarize technical documentation. Upon inspecting the experiment results, you notice that
the model has a high recall but relatively low precision. What does this likely indicate about the model’s
performance, and how should you approach further tuning?
Discussion
D tbh
For me, D. Had something like this in a mock: high recall with low precision means the summaries include lots of unnecessary info rather than missing key points. You'd want to focus on boosting precision so the output is more relevant. Pretty sure that's how IBM frames it too, but open to other thoughts if someone sees it differently.
I’d say D, IBM official guide and practice exams mention tuning for higher precision if the model returns too many irrelevant bits.
Hard to say, I'd go A here. Usually high recall hints that the model catches most details, so precision might not be the main problem.
D, official docs and IBM sample labs talk about tuning precision in cases like this. Seen similar advice on practice exams, too.
Had something like this in a mock, pick is D.
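To make the high-recall/low-precision pattern people are describing concrete, here's a quick token-overlap sketch. This is purely illustrative (the `precision_recall` helper and the example strings are my own, not from IBM's exam materials or any official scorer):

```python
# Hypothetical illustration: token-level precision/recall between a
# generated summary and a reference set of key points.

def precision_recall(summary: str, reference: str) -> tuple[float, float]:
    """Return (precision, recall) based on unique-token overlap."""
    sum_tokens = set(summary.lower().split())
    ref_tokens = set(reference.lower().split())
    overlap = sum_tokens & ref_tokens
    precision = len(overlap) / len(sum_tokens) if sum_tokens else 0.0
    recall = len(overlap) / len(ref_tokens) if ref_tokens else 0.0
    return precision, recall

# A verbose summary that covers every key point but adds filler:
reference = "cache invalidation requires versioned keys"
summary = ("cache invalidation requires versioned keys and also many "
           "other unrelated implementation details padding the output")

p, r = precision_recall(summary, reference)
# Recall is 1.0 (every reference token is covered), but precision is
# only ~0.33 because two thirds of the summary tokens are irrelevant.
```

That's the situation in the question: nothing important is missed (recall is fine), but the output is padded with irrelevant content, so the tuning effort should target precision.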