Q: 9
What does in-context learning in Large Language Models involve?
Options
Discussion
Don't think A or D fit here since pretraining and architecture changes aren't in-context at all. B is more about RLHF training stages. C is right because in-context learning just means the model uses prompt instructions or examples to shift its response, not retrain. Seen this phrasing on some practice sets too, but open to pushback if anyone's seen it worded differently.
B, since reinforcement learning does teach the model new behaviors based on feedback. Seems close to in-context learning, but maybe I'm missing something about the prompt-based part.
C, saw similar wording on a recent exam report. Fits the concept of using prompts for task adaptation.
C imo, though it could get confusing if "in-context" were ever used loosely to mean prompt-engineering tricks that somehow update the model — it doesn't; no weights change.
C
C tbh, matches what I saw in the official guide and some practice tests.
Probably C here. In-context learning is about feeding the model instructions or demos as part of the prompt, not retraining it (so A and B are out). D is tempting but adding layers is just architecture stuff, doesn't relate to how LLMs adapt at inference time. Pretty sure it's C, unless I'm missing some edge case.
Tricky if you haven't seen the term, but it's about using prompts to guide the model so C.
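Since several replies above describe the same mechanism — demonstrations and instructions go into the prompt at inference time, with no retraining — here's a minimal sketch of what a few-shot prompt actually looks like. The sentiment task, the `build_few_shot_prompt` helper, and the example reviews are all hypothetical, just for illustration; the point is that the "learning" lives entirely in the prompt string.

```python
# Minimal sketch of in-context (few-shot) learning: the model adapts
# from demonstrations placed in the prompt; its weights never change.

def build_few_shot_prompt(examples, query):
    """Assemble labeled demonstrations plus a new query into one prompt.

    examples: list of (text, label) pairs shown to the model as demos.
    query:    the new input the model should label by pattern-matching.
    """
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final block ends with "Sentiment:" so the model completes the label.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

demos = [
    ("Loved every minute of it.", "positive"),
    ("A complete waste of time.", "negative"),
]
prompt = build_few_shot_prompt(demos, "Surprisingly good.")
print(prompt)
```

Sending this string to any LLM is the whole trick: no gradient updates, no new layers — which is why A, B, and D don't fit.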