1. Oracle Cloud Infrastructure (OCI) Documentation: The documentation for the Generative AI service specifies when to use custom models: "Fine-tuning can improve on few-shot learning by training the model on many more examples than can fit in a prompt, making fine-tuning a good option for new tasks that the model wasn't originally trained on."
Source: Oracle Cloud Infrastructure Documentation, "AI and Machine Learning > Generative AI > Custom Models", Section: "When to Use a Custom Model".
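To make the contrast in the quote concrete, the minimal Python sketch below shows why few-shot learning is capped by prompt space while a fine-tuning dataset is not. It is a generic illustration, not OCI-specific: the JSONL "prompt"/"completion" field names and the train.jsonl file name are assumptions for illustration, so the target service's documented data schema should be followed in practice.

```python
# Illustrative sketch (not OCI-specific): few-shot prompting vs. a fine-tuning dataset.
import json

# Few-shot learning: the handful of examples must fit inside the prompt itself.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: The battery lasts all day. -> positive\n"
    "Review: The screen cracked within a week. -> negative\n"
    "Review: Setup took five minutes and everything worked. ->"
)

# Fine-tuning: examples are written to a training file instead, so the model can
# learn from far more examples than could ever fit in a single prompt.
# The "prompt"/"completion" keys below are an assumed schema for illustration.
training_examples = [
    {"prompt": "Review: The battery lasts all day.", "completion": "positive"},
    {"prompt": "Review: The screen cracked within a week.", "completion": "negative"},
    # ... thousands more examples would typically follow
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")

print(few_shot_prompt)
print(f"Wrote {len(training_examples)} fine-tuning examples to train.jsonl")
```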
2. Stanford University Courseware: Lecture materials on LLM adaptation explain that fine-tuning is used to specialize a model for a target task or data distribution that is not well-handled by the base model, especially when in-context learning is insufficient.
Source: Stanford CS224N: NLP with Deep Learning, Winter 2023, Lecture 12: "Prompting, Instruction Finetuning", Slide 54, "Why instruction finetuning?".
3. Academic Publication: A foundational paper on instruction tuning shows that this form of fine-tuning significantly improves zero-shot performance on unseen tasks, demonstrating its value when a model must learn to perform tasks it initially handles poorly.
Source: Wei, J., Bosma, M., et al. (2021). "Finetuned Language Models Are Zero-Shot Learners". Section 3.1, "Method". arXiv:2109.01652.
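To illustrate the instruction-tuning idea the paper describes (reformatting existing labeled tasks with natural-language instruction templates before fine-tuning), here is a minimal Python sketch. The templates, example, and labels below are hypothetical and invented for illustration; they are not taken from the paper's actual template set or task mixture.

```python
# Illustrative sketch of instruction tuning: a labeled example is rephrased with
# several natural-language instruction templates, producing (instruction, target)
# pairs for fine-tuning. Templates and data here are invented, not from the paper.
nli_templates = [
    "Premise: {premise}\nHypothesis: {hypothesis}\nDoes the premise entail the hypothesis?",
    "Read the premise: {premise}\nCan we conclude that: {hypothesis}?",
]

example = {
    "premise": "A person is playing a guitar on stage.",
    "hypothesis": "Someone is performing music.",
    "label": "yes",
}

# Each labeled example becomes multiple instruction-formatted training pairs.
instruction_target_pairs = [
    (template.format(premise=example["premise"], hypothesis=example["hypothesis"]),
     example["label"])
    for template in nli_templates
]

for instruction, target in instruction_target_pairs:
    print(instruction)
    print("->", target)
    print()
```

Fine-tuning a model on many tasks rephrased this way is what allows it to follow instructions for tasks it never saw during training, which is the zero-shot improvement the paper reports.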