LLM Engineering with RAG: Optimizing AI Solutions
In this course, you’ll learn how to integrate enterprise data with advanced large language models (LLMs) using Retrieval-Augmented Generation (RAG) techniques.
Segment 00: Welcome and Course Overview
Segment 01: Introduction to the Course & Meet Your Instructor
Segment 02: Foundations of LLMs and Introduction to RAG: Revolutionizing AI Solutions
Segment 03: Reading - History and Evolution of LLMs
Segment 04: Quick Start: Setting Up Your Environment for LLM Development
Segment 05: Managing Context Windows
Segment 06: Hands-On Learning (HOL): Exploring LLM Integration in Real-World Applications
Segment 07: RAG Component Breakdown
Segment 08: Implementing Vector Search with FAISS in RAG Projects
Segment 09: Tuning RAG for Optimal Performance
Segment 10: Reading - The Practical Applications of Retrieval-Augmented Generation in AI
Segment 11: Hands-On Learning (HOL): Implementing RAG
Segment 12: Data Integration Strategies
Segment 13: Building LLM Apps
Segment 14: Deploying LLM Apps
Segment 15: Deploying LLM Apps with FastAPI on Hugging Face
Segment 16: Prompt Engineering
Segment 17: Hands-On Learning (HOL): Deploying a Workflow Project
Segment 18: Workflow Scaling and Security
Segment 19: LLMOps: Tools, Platforms & Best Practices for Managing LLM Lifecycle
Segment 20: Congratulations and Continuous Learning Journey
Segment 21: Project: Exploring LLM Workflows
Segment 22: Quiz: LLM Engineering with RAG: Optimizing AI Solutions
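As a small preview of the vector-search segment above (Segment 08), here is a pure-Python sketch of the exact brute-force L2 nearest-neighbour search that FAISS's `IndexFlatL2` performs under the hood. The tiny 2-D "corpus" vectors are hypothetical stand-ins for real document embeddings; the course segment covers the real FAISS API.

```python
# Sketch of exact (brute-force) L2 nearest-neighbour search, the idea
# behind FAISS's IndexFlatL2. Vectors here are toy stand-ins for
# document embeddings.

def l2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def search(index, query, k=3):
    """Return the ids of the k vectors in `index` closest to `query`."""
    ranked = sorted(range(len(index)), key=lambda i: l2(index[i], query))
    return ranked[:k]

# Tiny toy "corpus" of 2-D embeddings.
corpus = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(search(corpus, (0.1, 0.1), k=2))  # ids of the two closest vectors
```

In a real RAG pipeline the query would be the embedding of a user question, and the returned ids would select the document chunks to pass to the LLM; FAISS replaces this linear scan with optimized (and optionally approximate) index structures.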