1. Amazon Bedrock User Guide - Amazon Titan models: "Amazon Titan Text Embeddings G1 - The Titan Text Embeddings model translates text inputs (words, phrases, or large units of text) into a numerical representation (known as embeddings) that can be used to power use cases like search, personalization, and clustering based on semantic similarity." A minimal invocation sketch appears after this list. (Source: AWS Documentation, Amazon Bedrock User Guide, Section: "Amazon Titan models").
2. Amazon Bedrock User Guide - Knowledge bases for Amazon Bedrock: "When you create a knowledge base, Amazon Bedrock converts your documents into embeddings... and stores the embeddings in your vector database... You can choose from a variety of vector databases to store the vectors for your knowledge base, including Amazon OpenSearch Service Serverless..." This document explicitly outlines the architecture described in the correct answer; a query-side sketch of that architecture follows the list. (Source: AWS Documentation, Amazon Bedrock User Guide, Section: "Knowledge bases for Amazon Bedrock").
3. Amazon Kendra Developer Guide - Integrating Amazon Bedrock with an Amazon Kendra index: This guide shows how Kendra can be used as a data source for RAG, confirming that it is a valid, though higher-level, alternative to managing an embeddings model and vector database yourself (see the retrieval sketch after this list). It states, "You can use an Amazon Kendra index as a data source for your knowledge base to build a solution with the retrieval augmented generation (RAG) model." (Source: AWS Documentation, Amazon Kendra Developer Guide, Section: "Integrating Amazon Bedrock with an Amazon Kendra index").
4. Amazon SageMaker Developer Guide - Prepare ML Data with Amazon SageMaker Data Wrangler: "You can use SageMaker Data Wrangler to simplify the process of data preparation and feature engineering..." This confirms its purpose is distinct from generating semantic embeddings for RAG. (Source: AWS Documentation, Amazon SageMaker Developer Guide, Section: "Prepare ML Data with Amazon SageMaker Data Wrangler").
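The embedding workflow described in reference 1 can be exercised directly through the Bedrock runtime API. The sketch below is a minimal, assumed example rather than code from the cited guide: the region and input text are placeholders, and the model ID shown is the commonly used identifier for Titan Text Embeddings G1.

```python
import json

import boto3

# Bedrock runtime client; the region is an assumption for illustration.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke the Titan Text Embeddings model with a single piece of text.
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v1",  # Titan Text Embeddings G1 (assumed ID)
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "How do I reset my account password?"}),
)

# The response body is a streaming object containing a JSON payload with the vector.
payload = json.loads(response["body"].read())
embedding = payload["embedding"]  # list of floats representing the input text
print(f"Embedding dimension: {len(embedding)}")
```

The resulting vector is what a knowledge base stores in its vector database and compares against query embeddings for semantic search.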
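Reference 2 describes the managed RAG path: Bedrock ingests the documents, generates embeddings, and stores them in a vector database such as OpenSearch Serverless. The query side of that architecture is sketched below under stated assumptions; the knowledge base ID and model ARN are placeholders, and the `retrieve_and_generate` call is the Bedrock agent runtime operation that performs retrieval plus generation in one step.

```python
import boto3

# Bedrock agent runtime client; region, knowledge base ID, and model ARN are placeholders.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBEXAMPLE123",  # placeholder knowledge base ID
            "modelArn": (
                "arn:aws:bedrock:us-east-1::foundation-model/"
                "anthropic.claude-3-sonnet-20240229-v1:0"  # placeholder model ARN
            ),
        },
    },
)

# The generated answer is grounded in chunks retrieved from the vector store.
print(response["output"]["text"])
for citation in response.get("citations", []):
    print(citation)  # source attribution for the retrieved passages
```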
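Reference 3 positions Kendra as a retriever that already handles indexing and relevance ranking, so the caller manages no embeddings model or vector store. A hedged sketch of that retrieval step follows; the index ID is a placeholder, and the Retrieve API shown is the Kendra operation that returns passage-level excerpts suitable as grounding context.

```python
import boto3

# Kendra client; the region and index ID are placeholders.
kendra = boto3.client("kendra", region_name="us-east-1")

# Retrieve semantically relevant passages from an existing Kendra index.
result = kendra.retrieve(
    IndexId="11111111-2222-3333-4444-555555555555",  # placeholder index ID
    QueryText="What is our refund policy?",
    PageSize=5,
)

# Each result item carries an excerpt that can be passed to a foundation model
# as grounding context for the generation step of a RAG workflow.
for item in result["ResultItems"]:
    print(item.get("DocumentTitle"), "-", item.get("Content", "")[:200])
```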