1. AWS Documentation: The Amazon Bedrock User Guide explains that fine-tuning adapts a model for specific tasks or domains. It states: "Fine-tuning is the process of taking a pre-trained foundation model (FM) and further training it on your own dataset... to make it more specialized for your specific application."
Source: Amazon Bedrock User Guide, "Custom models" section on "Fine-tuning."
2. University Courseware: Stanford University's course on Large Language Models distinguishes between in-context learning (prompting) and fine-tuning. Fine-tuning modifies the model's weights to specialize it, which is necessary when the task requires deep domain knowledge that cannot be conveyed in a few examples.
Source: Stanford University, CS324: Large Language Models, Winter 2022, Lecture 3: "Capabilities," section on "Adaptation."
3. Academic Publication: A foundational paper on language models explains that fine-tuning is a critical step for adapting large pre-trained models to specific downstream tasks or domains, which significantly improves performance over using the base model alone.
Source: Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Section 4: "Experiments." (https://doi.org/10.18653/v1/N19-1423)