Q: 7
Which technique involves prompting the Large Language Model (LLM) to emit intermediate
reasoning steps as part of its response?
Options
Discussion
C or B? I know C can trip you up since it does break problems down, but B is the one that always makes the model output visible reasoning steps. Still, with these questions it's easy to mix them up.
B. Chain-of-Thought makes the LLM show its reasoning in steps. Pretty sure that's what's being asked here, since options like Step-Back or In-Context don't specifically force the model to explain its thinking. Agree?
Can someone clarify exactly what they mean by "intermediate reasoning steps" here? If it's just making the model show its thought process, isn't that usually called Chain-of-Thought (B)? But if they're talking about making the LLM solve subquestions first, wouldn't that lean toward C?
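To make the B-vs-C distinction concrete, here's a minimal sketch contrasting the two prompting styles. The question text and prompt templates are purely illustrative (not from any real library or the quiz itself): Chain-of-Thought asks for intermediate reasoning steps in a single response, while a decomposition approach asks the model to solve subquestions first.

```python
# Illustrative question; any reasoning problem works here.
question = "If a train travels 60 km in 1.5 hours, what is its average speed?"

# Chain-of-Thought (zero-shot variant): a single prompt that asks the
# model to emit its intermediate reasoning steps before the final answer.
cot_prompt = f"{question}\nLet's think step by step."

# Decomposition style: first ask for subquestions, then solve each one
# in turn, typically across separate model calls.
decompose_prompt = (
    "Break the following problem into simpler subquestions, "
    f"then answer them one at a time:\n{question}"
)

print(cot_prompt)
print(decompose_prompt)
```

The key difference: the first prompt elicits visible step-by-step reasoning within one answer (what the question describes), while the second restructures the problem itself before answering.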
B
B