Q: 3
When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?
Options
Discussion
B. Fine-tuning is the right fit when prompt engineering can't accommodate the number of examples you need, or when the LLM still misses the mark even after heavy prompting. Saw similar wording on a practice set too. It's not for keeping the model current with the latest data, and not needed when basic prompting already works fine. Pretty confident, but let me know if anyone's seen it worded differently.
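To make the "too many examples for a prompt" case concrete: fine-tuning takes a file of labeled examples instead of stuffing them into the prompt. Below is a minimal sketch of assembling chat-style training records as JSONL (one JSON object per line, the shape OpenAI's fine-tuning endpoint expects); the classification task and example data here are made up for illustration.

```python
import json

# Hypothetical task: ticket classification the model kept
# getting wrong under prompting alone.
examples = [
    ("Classify: 'refund not received'", "billing"),
    ("Classify: 'app crashes on login'", "technical"),
]

# Each record is one short chat conversation: user prompt in,
# desired assistant answer out.
records = [
    {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": label},
    ]}
    for prompt, label in examples
]

# JSONL: one serialized record per line.
with open("train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

With thousands of examples this file scales fine, whereas a few-shot prompt would blow past the context window.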
Option A
B for sure
B is it. Fine-tuning fits when prompt engineering doesn't scale, e.g. you have far more examples than fit in a prompt, or the model still won't get the task right with prompting alone. It's not for cases where the model already knows the material or just needs recent data. Pretty sure on this, but open to other takes if someone disagrees.
It's B; saw a similar question on practice exams. Fine-tuning is for when prompt engineering isn't enough.