Building Intelligent Systems with RAG and Large Language Models (LLMs)

Duration: 5 Days • Classroom: Physical • HRDC: Claimable

What you'll learn
  • Participants will gain an understanding of the fundamentals of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG), including how to ingest and index organizational data, build retrieval pipelines, and integrate LLMs for context-aware responses.
  • Ability to design, build, and deploy intelligent RAG-based AI applications.
  • Participants will have hands-on experience to apply their newly acquired knowledge and skills in a real-world context.
Course description

This 5-day hands-on workshop teaches participants how to design, build, and deploy intelligent systems using Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs). Participants will learn to ingest and index organizational data, build retrieval pipelines, integrate LLMs for context-aware responses, and deploy AI applications in real-world scenarios. The training emphasizes practical application, including automating knowledge retrieval, supporting decision-making, and enhancing customer engagement. Through guided exercises and project-based learning, participants will acquire skills to build scalable AI systems that leverage organizational knowledge and LLMs effectively. By the end of the workshop, attendees will have hands-on experience creating intelligent systems ready for immediate implementation, enabling their organizations to innovate, optimize processes, and derive actionable insights from data with AI.

Course content
Day 1: Foundations of LLMs and RAG, and Data Preparation
  • Overview of Large Language Models (LLMs) and Generative AI
  • Understanding Retrieval-Augmented Generation (RAG)
  • Applications in enterprise and knowledge management
  • Responsible AI, ethics, and bias mitigation
  • Customer support automation
  • Knowledge retrieval and internal wiki automation
  • Decision support systems
  • Case studies of RAG + LLM in industry
  • Structured vs. unstructured data
  • Data cleaning and normalization
  • Text vectorization and embeddings
  • Handling multilingual or domain-specific data
  • Loading sample datasets
  • Creating embeddings for RAG
  • Preparing documents for retrieval pipelines
  • Hands-on: analyze raw data and preprocess it for AI ingestion
  • Hands-on: generate initial embeddings using open-source libraries
Day 2: Retrieval Pipelines and Vector Databases
  • Pipeline architectures and workflows
  • Query understanding and retrieval logic
  • Ranking and filtering results
  • Introduction to vector databases (FAISS, Pinecone, Weaviate, Milvus)
  • Indexing and querying embeddings
  • Similarity metrics and optimization
  • Hybrid search: combining keyword and semantic search
  • Relevance tuning and query expansion
  • Handling large datasets efficiently
  • Metrics for accuracy and relevance (precision, recall, MRR)
  • Testing pipelines with example queries
  • Improving performance through feedback loops
  • Hands-on: build a full retrieval pipeline
  • Hands-on: index multiple datasets and perform semantic search
  • Hands-on: evaluate pipeline performance
Day 3: LLM Integration and Prompt Engineering
  • Connecting RAG pipelines with LLM APIs (OpenAI, LLaMA, Anthropic, etc.)
  • Request-response patterns
  • Handling token limits and API optimization
  • Effective prompt design for context-aware generation
  • Few-shot prompting techniques
  • Handling ambiguous or incomplete queries
  • Domain adaptation and embeddings tuning
  • Custom model training for specific knowledge domains
  • Monitoring model outputs for quality
  • Designing question-answering systems
  • Automating FAQs and internal knowledge retrieval
  • Hybrid human-AI workflows
  • Hands-on: integrate an LLM with the retrieval pipeline for a QA system
  • Hands-on: experiment with prompts and evaluate outputs
  • Hands-on: customize responses for domain-specific context
Day 4: Deployment, Automation, and Security
  • Hosting AI systems on cloud platforms (AWS, Azure, GCP)
  • Using containers (Docker, Kubernetes) for AI services
  • Monitoring and logging AI systems
  • Trigger-based pipelines (emails, chatbots, internal apps)
  • Integrating AI workflows with Slack, Teams, or web portals
  • Scheduled updates and automated knowledge indexing
  • Data protection and privacy regulations in Malaysia
  • Secure handling of sensitive data in AI systems
  • Authentication and API security best practices
  • Optimizing embeddings and retrieval speed
  • Scaling systems for large enterprise datasets
  • Load testing and resource management
  • Hands-on: deploy a QA chatbot to the cloud
  • Hands-on: automate retrieval of new knowledge documents
  • Hands-on: test performance and optimize the pipeline
Day 5: Capstone Project and Advanced Topics
  • Define a real-world use case (knowledge management, customer support, or decision support)
  • Implement a RAG + LLM pipeline end to end
  • Integrate retrieval, LLM generation, and deployment
  • Multi-turn conversation and context tracking
  • Dynamic knowledge updates
  • Personalized responses and adaptive AI workflows
  • User satisfaction metrics
  • Latency, throughput, and system reliability
  • Continuous improvement and retraining strategies
  • Large multimodal models (text, image, audio)
  • RAG for enterprise search, chatbots, and automated insights
  • AI innovation roadmap and strategic planning
  • Hands-on: complete the capstone project and present to instructors
  • Hands-on: test and optimize the AI system with sample queries
  • Hands-on: explore optional advanced features (multi-turn conversation, personalization)
  • Review and discussion of tools and concepts covered
  • Q&A session to address any questions
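To give a flavour of the material, the embedding and semantic-search topics above can be sketched in a few lines. This is a toy illustration, not course code: the three-dimensional vectors stand in for real model embeddings, and `cosine_similarity` and `retrieve` are hypothetical helper names; in the workshop a sentence-embedding model and a vector database such as FAISS or Pinecone play these roles.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=2):
    # Rank documents by similarity to the query; return the top_k indices.
    scored = sorted(
        enumerate(doc_vecs),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [idx for idx, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = [
    [1.0, 0.0, 0.0],   # doc 0
    [0.9, 0.1, 0.0],   # doc 1: semantically close to doc 0
    [0.0, 0.0, 1.0],   # doc 2: unrelated
]
query = [1.0, 0.05, 0.0]
print(retrieve(query, docs))  # → [0, 1]
```

The same nearest-neighbour idea scales to millions of documents once the brute-force sort is replaced by an approximate index.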
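Hybrid search, covered on Day 2, blends keyword and semantic relevance into one score. A minimal sketch, assuming the semantic score has already been computed by an embedding model; `keyword_score` is a hypothetical stand-in for a real lexical ranker such as BM25:

```python
def keyword_score(query, doc):
    # Fraction of query terms found in the document (stand-in for BM25).
    terms = query.lower().split()
    words = set(doc.lower().split())
    return sum(1 for t in terms if t in words) / len(terms)

def hybrid_score(query, doc, semantic_score, alpha=0.5):
    # Weighted blend of lexical and semantic relevance;
    # alpha=1.0 is pure keyword search, alpha=0.0 pure semantic.
    return alpha * keyword_score(query, doc) + (1 - alpha) * semantic_score

# (document text, precomputed semantic score) pairs — illustrative values.
candidates = [
    ("HRDC claims are processed by the training provider", 0.20),
    ("How to submit an HRDC levy claim form", 0.90),
]
query = "HRDC claim"
ranked = sorted(candidates,
                key=lambda d: hybrid_score(query, d[0], d[1]),
                reverse=True)
print(ranked[0][0])  # the document that matches both lexically and semantically
```

Tuning `alpha` per corpus is one of the relevance-tuning exercises the outline refers to.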
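The retrieval-evaluation metrics named in the outline (precision, recall, MRR) are simple to compute once you have retrieved lists and relevance labels. A small sketch with hypothetical helper names:

```python
def precision_at_k(retrieved, relevant, k):
    # Share of the top-k retrieved documents that are relevant.
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k):
    # Share of all relevant documents found in the top k (relevant must be non-empty).
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

def mean_reciprocal_rank(results):
    # results: list of (retrieved_list, relevant_set) pairs, one per query.
    total = 0.0
    for retrieved, relevant in results:
        for rank, doc in enumerate(retrieved, start=1):
            if doc in relevant:
                total += 1.0 / rank
                break
    return total / len(results)

retrieved = ["d1", "d7", "d3"]
relevant = {"d1", "d3"}
print(precision_at_k(retrieved, relevant, 3))          # → 2/3
print(mean_reciprocal_rank([(retrieved, relevant)]))   # first hit at rank 1 → 1.0
```

Feeding these numbers back into chunking, embedding, and ranking choices is the feedback loop the Day 2 lab exercises.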
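Context-aware prompt design under token limits, covered on Day 3, usually means packing retrieved chunks into a template until a budget is reached. The sketch below uses a character budget as a rough proxy for tokens; `build_prompt` is a hypothetical helper, and a real system would count tokens with the target model's tokenizer before calling the LLM API:

```python
def build_prompt(question, retrieved_chunks, max_chars=2000):
    # Pack retrieved context into the prompt until the budget is spent.
    context_parts, used = [], 0
    for chunk in retrieved_chunks:
        if used + len(chunk) > max_chars:
            break  # stop before exceeding the (rough) context budget
        context_parts.append(chunk)
        used += len(chunk)
    context = "\n---\n".join(context_parts)
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

chunks = [
    "The HRDC levy is claimable for registered employers.",
    "Courses run for five days in a physical classroom.",
]
prompt = build_prompt("How long does the course run?", chunks)
print(prompt)
```

The returned string is what gets sent as the user message in the request-response pattern with an LLM API such as OpenAI's.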
This course includes:
  • English
  • 5 Days
  • Physical Class
  • Certificate of Completion
  • HRDC Claimable

Interested in more courses or customized training?

Contact our account manager to explore your options beyond the listed courses.

WhatsApp Account Manager!