An Interesting AI RAG Course

From RAG novice to building production-ready AI systems.

In this course, you'll gain the practical skills to design, build, and prototype Retrieval-Augmented Generation (RAG) systems that scale to real-world applications. Starting with core concepts, you'll work through hands-on projects and master techniques for combining retrieval with large language models to build powerful, efficient systems. By the end, you'll be equipped to confidently deploy RAG systems in production environments, solving real business problems with cutting-edge AI.

What to expect in the course:

Week 1
Module 1: Introduction to Retrieval-Augmented Generation
-----------------------------------------------------------------------------------------
✅ Welcome - Introduction to this course
~ Course Overview - What to Expect
~ Tools and Tech Stack - e.g., Python, LLM providers, NBDev, and Qdrant
~ Prerequisites - Python Proficiency, Machine Learning Basics (JavaScript Helpful but Optional)
✅ RAG Systems Overview - Setting the Stage
~ Defining RAG - The Core Idea Behind the System
~ Why RAG? - The Power of Combining Retrieval and Generation
~ Conceptual Flow of RAG Systems (see the sketch after this module)
✅ Historical Evolution of RAG Systems
~ Development of Generative Models - A Brief History
~ Rise of Retrieval-Augmented Systems - How We Got Here
~ QA Retriever-Reader vs. QA Retriever-Generator
✅ RAG System Architecture - Key Concepts and Components
✅ When to Apply RAG - Key Use Cases Explored
✅ Challenges and Limitations of RAG Systems
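
A quick preview of the conceptual flow described above (retrieve, augment the prompt, generate), as a deliberately toy sketch: the word-overlap retriever and the final LLM call are placeholders for illustration, not course code.

```python
# Conceptual RAG flow: retrieve -> augment the prompt -> generate.
# The retriever below is a toy; real systems use sparse or dense retrieval.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (illustration only)."""
    query_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(query_terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user question with the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\n\nContext:\n{joined}\n\nQuestion: {query}"

corpus = [
    "RAG combines a retriever with a generative model.",
    "Vector databases store document embeddings.",
    "BM25 is a classic sparse retrieval method.",
]
prompt = build_prompt("What is RAG?", retrieve("What is RAG?", corpus))
print(prompt)  # In a real pipeline, this prompt is sent to an LLM provider for generation.
```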

Module 2: Architecture and Components of RAG Systems
-----------------------------------------------------------------------------------------
✅ Understanding the RAG Pipeline - An Overview
✅ Data Flow in RAG Systems - How Information Moves
✅ The Retriever - The Heart of RAG Systems
~ Dense vs. Sparse Retrieval - Key Differences Explained (e.g., BM25, DPR; see the sketch after this module)
~ Exploring Retrieval Methods for RAG Pipelines
~ Knowledge Stores - Understanding Vector Databases
✅ The Generator Component - Creating Responses
~ Generation Models Overview - Powering the Generator
~ Input Representation - Preparing Data for Generation
✅ Bringing It All Together - The Full RAG Workflow
✅ Hands-On Practice - Setting Up a Basic RAG Pipeline
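
As noted in the dense vs. sparse item above, the two retrieval styles can be contrasted in a few lines. A minimal sketch, assuming the third-party rank_bm25 and sentence-transformers packages and a toy corpus; it illustrates the difference, not the course's reference implementation.

```python
# Sparse (BM25) vs. dense (embedding similarity) retrieval over a toy corpus.
# Assumes: pip install rank_bm25 sentence-transformers
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Qdrant is a vector database for similarity search.",
    "BM25 scores documents by term frequency and rarity.",
    "Dense retrievers embed queries and documents into vectors.",
]
query = "How does keyword-based retrieval score documents?"

# Sparse retrieval: exact term matching with BM25.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
sparse_scores = bm25.get_scores(query.lower().split())

# Dense retrieval: cosine similarity between sentence embeddings (semantic matching).
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
dense_scores = util.cos_sim(query_embedding, doc_embeddings)[0]

print("BM25 scores:", sparse_scores)
print("Dense scores:", dense_scores)
```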

Module 3: Preparing and Ingesting Data for RAG Systems
-----------------------------------------------------------------------------------------
✅ Parsing Raw Documents: The First Step to Understanding
✅ Extracting Key Metadata from Documents
✅ Document Chunking: Structuring Data for Retrieval
✅ Embedding Document Chunks for Efficient Search (Transformers, OpenAI, Jina, Nomic, etc.)
✅ Indexing Document Embeddings in a Vector Database
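
A minimal sketch of the ingestion steps in this module (chunk, embed, index), assuming sentence-transformers for embeddings and an in-memory Qdrant collection; the fixed-size chunking and model choice are illustrative, not prescriptive.

```python
# Chunk -> embed -> index, against an in-memory Qdrant collection.
# Assumes: pip install sentence-transformers qdrant-client
from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Naive fixed-size character chunking with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

document = "Retrieval-Augmented Generation pairs a retriever with a generator. " * 20
chunks = chunk_text(document)

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(chunks)  # one embedding per chunk

client = QdrantClient(":memory:")  # swap for a real Qdrant URL in production
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=vectors.shape[1], distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=i, vector=vec.tolist(), payload={"text": chunk})
        for i, (vec, chunk) in enumerate(zip(vectors, chunks))
    ],
)
print(client.count("docs"))  # number of indexed chunks
```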

Week 2
Module 4: Building a Complete RAG Pipeline
-----------------------------------------------------------------------------------------
✅ Implementing RAG Using Popular Frameworks
~ RAG with LlamaIndex (see the sketch after this module)
~ RAG with LangChain
~ RAG with Haystack
✅ Building RAG from Scratch: A Comparison with Frameworks
✅ Advanced RAG Techniques
~ Enhancing Retrieval with Query Expansion & Rewriting
~ Optimizing Results through Query Re-Routing
~ Improving Accuracy with Re-Ranking Strategies
~ Boosting Efficiency through Caching
~ Refining Retrieval Over Time with Feedback Loops
~ Exploring Other Advanced Retrieval Techniques (RAPTOR, Agentic RAG, Corrective RAG, HyDE, etc.)
✅ Integrating RAG: Front-End and Back-End Development
✅ Hands-On Project: Build a Research Paper Chatbot from Scratch
🌶️ Bonus: Scaling RAG Systems
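
As referenced in the LlamaIndex item above, a framework-based pipeline can be stood up in a few lines. This sketch assumes a recent llama-index release (import paths differ across versions) and credentials for the default embedding and generation backends; the data folder and question are placeholders.

```python
# Minimal RAG pipeline with LlamaIndex (import paths vary by version).
# Assumes: pip install llama-index, plus API credentials for the default LLM/embedding provider.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data/papers").load_data()  # placeholder folder of papers
index = VectorStoreIndex.from_documents(documents)            # chunk, embed, and index
query_engine = index.as_query_engine(similarity_top_k=3)      # retriever + generator
response = query_engine.query("What problem does this paper address?")
print(response)
```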

Module 5: Evaluation and Fine-tuning
-----------------------------------------------------------------------------------------
✅ Evaluating RAG Systems: Metrics and Methods (precision, recall, AP@k, MRR, NDCG, hit rate, answer quality, response time, human evaluation; see the metric sketch after this module)
✅ Optimizing LLMs for Enhanced Accuracy
✅ When and How to Fine-Tune Your RAG Pipeline
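
To make the ranking metrics above concrete, here is a small self-contained sketch of hit rate@k and MRR; the ranked results and relevance labels are made up for illustration.

```python
# Hit rate@k and Mean Reciprocal Rank (MRR) over ranked retrieval results.

def hit_rate_at_k(results: list[list[str]], relevant: list[str], k: int) -> float:
    """Fraction of queries whose relevant document appears in the top k results."""
    hits = sum(rel in ranked[:k] for ranked, rel in zip(results, relevant))
    return hits / len(results)

def mean_reciprocal_rank(results: list[list[str]], relevant: list[str]) -> float:
    """Average of 1/rank of the first relevant document (0 if it was not retrieved)."""
    total = 0.0
    for ranked, rel in zip(results, relevant):
        if rel in ranked:
            total += 1.0 / (ranked.index(rel) + 1)
    return total / len(results)

# Hypothetical ranked document ids per query, and the single relevant id per query.
results = [["d3", "d1", "d7"], ["d2", "d9", "d4"], ["d8", "d5", "d2"]]
relevant = ["d1", "d2", "d6"]

print(hit_rate_at_k(results, relevant, k=3))    # 2/3 ≈ 0.67
print(mean_reciprocal_rank(results, relevant))  # (1/2 + 1 + 0) / 3 = 0.5
```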

Module 6: Observability and Cost
-----------------------------------------------------------------------------------------
✅ Common Issues in LLM Applications (Low recall, noisy results)
✅ Understanding Observability in LLM Applications (Logging, user feedback, etc.)
✅ LLM Monitoring vs. Observability: Key Differences (Latency, error rate, user engagement)
✅ Practical Guide: Setting Up Observability for LLM Applications
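
A minimal sketch of the kind of instrumentation this module covers: structured logs that capture latency, retrieved-context size, and errors around the pipeline's entry point. The answer_question function is a hypothetical stand-in for your own RAG call.

```python
# Basic observability for an LLM application: log latency, retrieval size, and errors.
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("rag_app")

def answer_question(query: str) -> tuple[str, list[str]]:
    """Hypothetical pipeline entry point: returns (answer, retrieved_chunks)."""
    return "RAG combines retrieval with generation.", ["chunk-1", "chunk-2"]

def observed_query(query: str) -> str:
    """Wrap the pipeline call with timing, structured logging, and error reporting."""
    start = time.perf_counter()
    try:
        answer, chunks = answer_question(query)
        latency_ms = (time.perf_counter() - start) * 1000
        logger.info("query=%r latency_ms=%.1f retrieved_chunks=%d", query, latency_ms, len(chunks))
        return answer
    except Exception:
        logger.exception("query failed: %r", query)
        raise

print(observed_query("What is RAG?"))
```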