1. Official Vendor Documentation: AWS Documentation, "Knowledge Bases for Amazon Bedrock." The documentation states: "Knowledge Bases for Amazon Bedrock is a fully managed capability that helps you implement the entire RAG workflow... without having to build custom integrations to data sources." This directly describes the solution in Option D (see the usage sketch after this list).
2. Official Vendor Documentation: AWS Machine Learning Blog, "Choose the right model customization method in Amazon Bedrock." In the section comparing RAG and fine-tuning, it notes: "RAG can be more cost-effective than fine-tuning, especially for models that are used infrequently... RAG is a good choice when your model needs to access rapidly changing information." This supports the cost-effectiveness of the RAG approach (implemented by Knowledge Bases) over fine-tuning (Option C).
3. Peer-Reviewed Academic Publication: Lewis, P., et al. (2020). "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks." Advances in Neural Information Processing Systems 33, Section 1, Paragraph 2. This foundational paper introduces the RAG framework, which combines a pre-trained retriever with a pre-trained generator (a conceptual sketch of this pattern also appears after this list). The Knowledge Base in Bedrock is a managed implementation of this exact, efficient pattern. (Available via arXiv:2005.11401)
4. University Courseware: Stanford University, CS224N - NLP with Deep Learning, Winter 2023, Lecture 17, "Retrieval and Question Answering." The lecture materials discuss RAG as a state-of-the-art method for open-domain question answering, highlighting its ability to ground models in external knowledge without costly retraining, which aligns with the cost-effectiveness requirement.
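
For reference 1, the following is a minimal sketch of querying an existing Knowledge Base through the Amazon Bedrock RetrieveAndGenerate API with boto3. It assumes a knowledge base has already been created and synced; the knowledge base ID, region, question text, and model ARN are placeholders for illustration, not values taken from the cited documentation.

```python
# Minimal sketch: query an existing Knowledge Base via the
# bedrock-agent-runtime RetrieveAndGenerate API (boto3).
# The knowledge base ID, region, and model ARN are hypothetical placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What is our current refund policy?"},  # placeholder question
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

# The service retrieves relevant chunks from the configured data source and
# grounds the model's answer in them -- no custom integration code required.
print(response["output"]["text"])
```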
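
For reference 3, here is a deliberately simplified, hypothetical sketch of the retrieve-then-generate flow that Lewis et al. formalize. The actual RAG model marginalizes over the top-k retrieved documents during generation, which this sketch omits; the `retrieve` and `generate` callables stand in for a pre-trained retriever and generator and are not part of any specific library.

```python
# Conceptual sketch of the retrieve-then-generate pattern from Lewis et al. (2020).
# All functions here are hypothetical stand-ins; the paper's model additionally
# marginalizes over retrieved documents rather than simply concatenating them.
from typing import Callable, List


def rag_answer(
    question: str,
    retrieve: Callable[[str, int], List[str]],  # pre-trained retriever (e.g. dense retrieval)
    generate: Callable[[str], str],             # pre-trained seq2seq generator
    k: int = 5,
) -> str:
    # 1. Retrieve the k passages most relevant to the question.
    passages = retrieve(question, k)
    # 2. Condition the generator on the question plus retrieved evidence, so the
    #    knowledge lives in an updatable index rather than in the model weights.
    prompt = question + "\n\nContext:\n" + "\n".join(passages)
    return generate(prompt)
```

The design point this illustrates is the one the cited sources make: because knowledge is injected at query time from a retrievable store, updating the answers requires re-indexing documents, not retraining or fine-tuning the model.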