📖 About this Domain
This domain details the generative AI model development lifecycle on Google Cloud. It covers data preparation, foundation model selection, and the application of various tuning methodologies for task-specific adaptation.
🎓 What You Will Learn
- Learn data preparation and preprocessing techniques required for training and tuning large language models.
- Understand how to select appropriate foundation models from the Vertex AI Model Garden for specific business problems.
- Explore different model tuning methods, including prompt design, parameter-efficient fine-tuning (PEFT), and full fine-tuning.
- Grasp model evaluation methodologies, including automatic metrics, and understand how reinforcement learning from human feedback (RLHF) is used for model alignment.
🛠️ Skills You Will Build
- Implement data pipelines on Google Cloud to prepare and augment datasets for model tuning.
- Apply tuning techniques like LoRA and prompt engineering using Vertex AI tools to customize foundation models.
- Evaluate model outputs for quality, safety, and groundedness using Vertex AI Evaluation services.
- Manage the model development lifecycle, including experimentation, versioning, and registration in the Vertex AI Model Registry.
💡 Top Tips to Prepare
- Get hands-on practice in the Vertex AI Studio to experiment with prompt design and model tuning configurations.
- Understand the trade-offs between PEFT and full fine-tuning regarding cost, data requirements, and performance.
- Focus on data quality, bias detection, and mitigation strategies as part of Google's Responsible AI framework.
- Review the specific use cases for different evaluation metrics and the process of implementing RLHF for model alignment.
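To build intuition for the PEFT-versus-full-fine-tuning trade-off noted in the tips above, it helps to see the arithmetic of a LoRA update. The sketch below (plain NumPy, with hypothetical layer dimensions — not Vertex AI's actual tuning implementation) shows how a low-rank pair of matrices adapts a frozen weight while training only a small fraction of the parameters:

```python
import numpy as np

# Hypothetical dimensions for one frozen weight matrix in an LLM layer.
d_in, d_out, rank = 1024, 1024, 8

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((d_in, d_out))  # pretrained weight, never updated

# LoRA trains only two small matrices: A (d_in x r) and B (r x d_out).
A = rng.standard_normal((d_in, rank)) * 0.01
B = np.zeros((rank, d_out))  # B starts at zero, so the initial update is a no-op

def lora_forward(x):
    # Effective weight is W_frozen + A @ B, computed without materializing it.
    return x @ W_frozen + (x @ A) @ B

full_params = W_frozen.size
lora_params = A.size + B.size
print(f"full fine-tuning params: {full_params:,}")
print(f"LoRA trainable params:   {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}% of full)")
```

At rank 8 the trainable-parameter count drops by roughly two orders of magnitude, which is the source of PEFT's cost and data-requirement advantages.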
📖 About this Domain
This domain focuses on the MLOps lifecycle for generative AI models within Google Cloud. It covers the operationalization of foundation and tuned models, emphasizing deployment, monitoring, and governance in production environments.
🎓 What You Will Learn
- Apply MLOps principles to the unique lifecycle management of large language models (LLMs).
- Deploy models to Vertex AI endpoints, optimizing for latency, throughput, and cost.
- Implement monitoring for model performance, data drift, and responsible AI metrics with Vertex AI Model Monitoring.
- Build CI/CD pipelines for generative AI applications using Cloud Build and Vertex AI Pipelines.
🛠️ Skills You Will Build
- Operationalize generative AI models into scalable, production-grade services on Google Cloud.
- Design and implement automated MLOps pipelines for continuous training and deployment.
- Establish robust monitoring systems to track model performance, reliability, and operational costs.
- Integrate responsible AI principles and governance into the operational lifecycle of gen AI systems.
💡 Top Tips to Prepare
- Master Vertex AI Pipelines for constructing and managing automated workflows for model training and deployment.
- Study Vertex AI Endpoints, including public, private, and batch prediction options and their specific use cases.
- Understand Vertex AI Model Monitoring for detecting training-serving skew and prediction drift to maintain model quality.
- Review CI/CD integration using Cloud Build and Artifact Registry with Vertex AI for end-to-end MLOps.
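Training-serving skew and prediction drift, mentioned above, come down to measuring how far serving-time distributions have moved from a training baseline. The sketch below (plain NumPy, a conceptual illustration rather than Vertex AI Model Monitoring's actual implementation) computes a population stability index (PSI), one common distribution-distance statistic used for this purpose:

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between two 1-D samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) in empty bins.
    b_frac = np.clip(b_frac, 1e-6, None)
    c_frac = np.clip(c_frac, 1e-6, None)
    return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 10_000)        # training baseline
serve_ok = rng.normal(0.0, 1.0, 10_000)     # serving traffic, same distribution
serve_drift = rng.normal(0.8, 1.0, 10_000)  # serving traffic after a mean shift

print(f"PSI (no drift): {psi(train, serve_ok):.3f}")
print(f"PSI (drifted):  {psi(train, serve_drift):.3f}")
```

A monitoring system would compute statistics like this per feature on a schedule and alert when a threshold is crossed, which is the behavior to understand when studying Vertex AI Model Monitoring.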
📖 About this Domain
This domain covers the orchestration of generative AI models with external tools and the augmentation of their knowledge with external data sources. It focuses on building complex, stateful applications using frameworks like LangChain and techniques such as Retrieval-Augmented Generation (RAG). Key Google Cloud services include Vertex AI Search and Conversation for creating grounded, enterprise-ready solutions.
🎓 What You Will Learn
- Implement Retrieval-Augmented Generation (RAG) to ground model responses in factual, external data, reducing hallucinations.
- Use orchestration frameworks like LangChain to build chains and agents that interact with APIs and external tools.
- Leverage Vertex AI Search and Conversation to build enterprise-grade search and conversational AI applications.
- Understand the function of vector databases in storing and retrieving embeddings for semantic search and RAG pipelines.
🛠️ Skills You Will Build
- Design and implement RAG pipelines using vector databases and document chunking strategies.
- Orchestrate multi-step workflows by chaining LLM calls with other components using frameworks like LangChain.
- Create agents that can reason and use external tools to accomplish complex tasks.
- Deploy and manage enterprise search solutions with Vertex AI Search to ground generative AI applications.
💡 Top Tips to Prepare
- Focus on understanding the architecture of Retrieval-Augmented Generation (RAG) and its core components like vector embeddings and vector databases.
- Gain practical knowledge of LangChain concepts such as chains, agents, and tools for building complex generative AI applications.
- Review the capabilities of Vertex AI Search and Conversation for building grounded, enterprise-ready conversational agents.
- Understand the difference between orchestration and augmentation in the context of LLMs and how function calling enables tool use.
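The retrieval step at the heart of RAG reduces to nearest-neighbor search over embeddings. The sketch below uses toy hand-made vectors in place of a real embedding model (the document chunks and their vectors are invented for illustration) to show how cosine similarity selects the chunk that grounds the prompt:

```python
import numpy as np

# Toy corpus: in a real RAG pipeline these chunks come from a document
# splitter and the vectors from an embedding model; here both are invented.
chunks = [
    "The invoice approval limit for managers is $5,000.",
    "Employees accrue 1.5 vacation days per month.",
    "The VPN requires two-factor authentication.",
]
chunk_vecs = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.1, 0.9],
])

def retrieve(query_vec, k=1):
    """Return the top-k chunks by cosine similarity to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

# A query about vacation policy should land on the second chunk.
query_vec = np.array([0.2, 0.95, 0.05])
context = retrieve(query_vec)[0]
prompt = (f"Answer using only this context:\n{context}\n\n"
          "Question: How many vacation days do I get?")
print(prompt)
```

A vector database performs the same similarity search at scale with approximate-nearest-neighbor indexes; the retrieved chunk is then stuffed into the prompt, which is what "grounding" means operationally.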
📖 About this Domain
This domain covers the methodologies for adapting pre-trained foundation models to specific enterprise use cases. It focuses on techniques like parameter-efficient fine-tuning (PEFT) and the critical process of evaluating model performance. You will explore how to align model outputs with desired business outcomes through rigorous testing and metric analysis.
🎓 What You Will Learn
- Differentiate between various model tuning techniques, including full fine-tuning and parameter-efficient fine-tuning (PEFT).
- Apply appropriate evaluation metrics, such as ROUGE and BLEU, to measure the performance of tuned generative models.
- Understand the process of Reinforcement Learning from Human Feedback (RLHF) for model alignment and safety.
- Execute a model tuning workflow using Vertex AI, from dataset preparation to deploying the tuned model endpoint.
🛠️ Skills You Will Build
- Select the optimal tuning method, such as LoRA or full fine-tuning, based on specific project constraints and objectives.
- Design and implement robust evaluation strategies to measure model quality, safety, and groundedness.
- Curate and structure high-quality datasets for supervised fine-tuning to improve model performance on downstream tasks.
- Integrate model tuning jobs into MLOps workflows on Vertex AI for continuous model improvement and deployment.
💡 Top Tips to Prepare
- Complete hands-on labs in Vertex AI Generative AI Studio to gain practical experience with supervised tuning jobs.
- Analyze the cost, performance, and training time trade-offs between PEFT methods and full fine-tuning.
- Deeply understand the application of specific evaluation metrics for text generation, summarization, and question-answering tasks.
- Focus on the conceptual workflow of RLHF, including reward model training and proximal policy optimization (PPO).
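ROUGE, one of the evaluation metrics named above, is just an n-gram overlap statistic, and computing one by hand demystifies it. The sketch below implements ROUGE-1 from scratch (simplified: no stemming, single reference — real implementations handle both):

```python
from collections import Counter

def rouge_1(candidate: str, reference: str):
    """Unigram-overlap ROUGE-1 (no stemming, single reference)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(rouge_1(candidate, reference))
```

ROUGE-2 and ROUGE-L follow the same pattern over bigrams and longest common subsequences; knowing which variant fits summarization versus question answering is the kind of metric-to-task mapping the tips above recommend reviewing.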
📖 About this Domain
This domain covers the fundamental principles of Generative AI, contrasting it with traditional machine learning. It establishes the core concepts of Large Language Models (LLMs) and foundation models as the building blocks for generative solutions.
🎓 What You Will Learn
- Differentiate between generative AI and discriminative AI models and their respective use cases.
- Understand the Transformer architecture, including attention mechanisms, which underpins modern LLMs.
- Identify various types of generative models, such as diffusion models for image generation and LLMs for text.
- Recognize the key stages in the lifecycle of a generative AI project, from ideation to model evaluation.
🛠️ Skills You Will Build
- Articulate the core capabilities of foundation models and their potential for business transformation.
- Identify appropriate generative AI use cases for specific business problems.
- Describe the importance of Google's Responsible AI principles in the context of LLM development.
- Explain the basic components of the generative AI technology stack on Google Cloud.
💡 Top Tips to Prepare
- Master the core differences between generative AI and traditional ML, focusing on model outputs.
- Study the key components of the Transformer architecture, as it is foundational to most LLMs.
- Review Google's official documentation on Responsible AI and its application to generative models.
- Complete the Google Cloud Skills Boost learning path on Generative AI fundamentals.
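Since the Transformer's attention mechanism is called out above as foundational, here is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, as a minimal single-head NumPy sketch with toy dimensions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                # toy sizes: 4 tokens, 8-dim head
Q = rng.standard_normal((seq_len, d_k))
K = rng.standard_normal((seq_len, d_k))
V = rng.standard_normal((seq_len, d_k))

out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)             # one output vector per token
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```

Each output row is a weighted average of the value vectors, with weights determined by query-key similarity; multi-head attention runs several of these in parallel and concatenates the results.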