Generative AI with Large Language Models

(CTU-AI325.AJ1)

Lessons

1. Fundamentals of Generative AI and LLMs

  • Foundation Models
  • A brief history of how transformers were born
  • The new role of AI professionals
  • The rise of seamless transformer APIs
  • The rise of the Transformer: Attention Is All You Need
  • Training and performance
  • Hugging Face transformer models (a minimal pipeline sketch follows this list)
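
To make the Hugging Face topic concrete, here is a minimal, illustrative sketch of the pipeline API; the model choice (distilgpt2) and the prompt are placeholders, not part of the course material.

```python
# A minimal Hugging Face example: load a small pretrained
# transformer and generate text. "distilgpt2" is only an
# illustrative model choice.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Attention is all you", max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```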

2. Optimization Techniques for Scalable LLMs

  • Why Mixture of Experts?
  • Architecture & Algorithmic Design
  • System Engineering Challenges
  • Why Knowledge Distillation?
  • Comparison between Knowledge Distillation and Traditional Approaches
  • Distillation Strategies: Architectures of Transfer (a loss-function sketch follows this list)
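
As a preview of the distillation strategies above, here is a sketch of the classic soft-target distillation loss (Hinton et al.); the temperature, blending weight, and toy tensors are illustrative assumptions.

```python
# A sketch of the classic soft-target distillation loss:
# KL divergence between temperature-softened teacher and student
# distributions, blended with cross-entropy on the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: compare teacher and student at temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients, per Hinton et al.
    # Hard targets: standard cross-entropy against the labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 4-class problem.
s = torch.randn(8, 4)
t = torch.randn(8, 4)
y = torch.randint(0, 4, (8,))
print(distillation_loss(s, t, y).item())
```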

3. Fine-Tuning LLMs for Domain-Specific Tasks

  • GPTs as GPTs: generative pretrained transformers as general-purpose technologies
  • The architecture of OpenAI GPT transformer models
  • OpenAI models as assistants
  • Getting started with the GPT-4 API (see the call sketch after this list)
  • Retrieval-Augmented Generation (RAG) with GPT-4
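
The GPT-4 API bullet above can be previewed with a minimal chat-completion call using the official openai Python client (v1 style); it assumes an OPENAI_API_KEY environment variable, and the prompt and model name are illustrative.

```python
# A minimal GPT-4 chat-completion call with the official openai
# Python client (v1 style). Assumes OPENAI_API_KEY is set in the
# environment; prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain self-attention in one sentence."},
    ],
)
print(response.choices[0].message.content)
```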

4. Retrieval-Augmented Generation (RAG) Systems

  • The architecture of BERT
  • Fine-tuning BERT
  • Building a Python interface to interact with the model
  • Risk management
  • Fine-tuning a GPT model for completion (generative)
  • Preparing the dataset
  • Fine-tuning an original model
  • Running the fine-tuned GPT model
  • Managing fine-tuned jobs and models (see the job-lifecycle sketch after this list)
  • Before leaving
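
For the fine-tuning workflow above, here is a hedged sketch of the OpenAI fine-tuning job lifecycle; the file name train.jsonl and the base model are placeholder assumptions, not course-specified values.

```python
# A sketch of launching a fine-tuning job with the openai client
# (v1 style): upload a JSONL training file, create the job, then
# check its status. File name and base model are illustrative.
from openai import OpenAI

client = OpenAI()

# 1. Upload the prepared JSONL dataset.
training_file = client.files.create(
    file=open("train.jsonl", "rb"), purpose="fine-tune"
)

# 2. Create the fine-tuning job against a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-3.5-turbo"
)

# 3. Manage and monitor the job.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```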

5. Controlling Hallucination and Ensuring Factuality in LLMs

  • The emergence of functional AGI
  • Limitations when installing cutting-edge platforms
  • Auto-BIG-bench
  • WandB (Weights & Biases; a logging sketch follows this list)
  • When will AI agents replicate?
  • Risk management
  • Risk mitigation tools with RLHF and RAG
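
Among the tools listed above, WandB (Weights & Biases) is the experiment tracker; here is a minimal logging sketch. The project name and metric values are invented for illustration, and a wandb login is assumed.

```python
# A minimal Weights & Biases logging sketch: track an evaluation
# metric across steps. Project name and values are illustrative.
import wandb

run = wandb.init(project="llm-evaluation")
for step, score in enumerate([0.62, 0.68, 0.71]):
    wandb.log({"factuality_score": score, "step": step})
run.finish()
```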

Lab

1. Fundamentals of Generative AI and LLMs

  • Training, Evaluating, and Visualizing a Machine Learning Classifier
  • Implementing Multi-Head Attention and Post-Layer Normalization
  • Exploring Positional Encoding in Transformer Models (see the sketch after this list)
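
The positional-encoding lab can be previewed with this sketch of the sinusoidal encoding from "Attention Is All You Need"; the sequence length and model width are arbitrary choices.

```python
# A sketch of sinusoidal positional encoding: even dimensions use
# sine, odd dimensions use cosine, at geometrically spaced
# frequencies (assumes an even d_model).
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]                  # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

print(positional_encoding(max_len=50, d_model=16).shape)  # (50, 16)
```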

2. Optimization Techniques for Scalable LLMs

  • Simulating Efficiency in Dense vs Sparse MoE Models
  • Implementing a Top-k Gating Router for MoE (see the router sketch after this list)
  • Debugging Expert Collapse in MoE
  • Implementing Real-Time NLP Sentiment Analysis Using Attention-Based Distillation
  • Visualizing the Transfer of Knowledge Between Teacher and Student Models
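
For the top-k gating lab, here is a sketch of a router module; the dimensions, expert count, and k are illustrative, and load-balancing losses are omitted for brevity.

```python
# A sketch of a top-k gating router for a Mixture-of-Experts layer:
# a linear gate scores every expert per token, only the top-k
# experts are kept, and their softmax weights are renormalized.
import torch
import torch.nn as nn

class TopKRouter(nn.Module):
    def __init__(self, d_model, n_experts, k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):                           # x: (tokens, d_model)
        logits = self.gate(x)                       # (tokens, n_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        weights = torch.softmax(topk_vals, dim=-1)  # renormalize over top-k
        return topk_idx, weights                    # which experts, and how much

router = TopKRouter(d_model=32, n_experts=8, k=2)
idx, w = router(torch.randn(4, 32))
print(idx.shape, w.shape)  # (4, 2) (4, 2)
```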

3. Fine-Tuning LLMs for Domain-Specific Tasks

  • Analyzing GPT Transformer Architecture and OpenAI Model APIs
  • Getting Started with OpenAI GPT-4 for NLP Tasks
  • Implementing RAG Using GPT-4 (see the retrieval sketch after this list)
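
The RAG lab above boils down to embed, retrieve, then ground the generation. This bare-bones sketch assumes the openai client (v1 style), illustrative model names, and a two-document toy corpus.

```python
# A bare-bones RAG loop: embed a tiny document set and the query,
# retrieve the closest passage, and ground the GPT-4 answer on it.
# OpenAI embeddings are unit-normalized, so dot product = cosine.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "The transformer was introduced in 2017.",
    "BERT is an encoder-only transformer.",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

doc_vecs = embed(docs)
query = "When was the transformer introduced?"
q = embed([query])[0]
best = docs[int(np.argmax(doc_vecs @ q))]  # nearest passage

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{best}\n\nQ: {query}"}],
)
print(answer.choices[0].message.content)
```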

4. Retrieval-Augmented Generation (RAG) Systems

  • Fine-Tuning BERT for Sentence Classification Using the CoLA Dataset (a condensed Trainer sketch follows)
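
Here is a condensed preview of the CoLA lab using the Hugging Face Trainer; the hyperparameters are illustrative and evaluation details are omitted for brevity.

```python
# A condensed sketch of fine-tuning BERT on CoLA (GLUE) with the
# Hugging Face Trainer. Hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

cola = load_dataset("glue", "cola")
cola = cola.map(lambda b: tok(b["sentence"], truncation=True,
                              padding="max_length", max_length=64),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cola-bert", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=cola["train"],
    eval_dataset=cola["validation"],
)
trainer.train()
```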

5. Controlling Hallucination and Ensuring Factuality in LLMs

  • Evaluating Auto-BIG-bench Tasks
  • Evaluating and Mitigating Hallucination in RAG Systems (a crude grounding check is sketched after this list)
  • Mitigating Risks in Generative AI Systems
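
As a taste of hallucination evaluation, here is a deliberately crude grounding check based on token overlap; production evaluations use NLI models or LLM judges, and this sketch only illustrates the idea.

```python
# A crude, illustrative faithfulness check for a RAG answer: the
# fraction of answer tokens that also appear in the retrieved
# context. Real evaluations are far more robust than this.
def grounding_score(answer: str, context: str) -> float:
    ctx = set(context.lower().split())
    ans = answer.lower().split()
    if not ans:
        return 0.0
    return sum(tok in ctx for tok in ans) / len(ans)

context = "The transformer architecture was introduced in 2017."
print(grounding_score("It was introduced in 2017.", context))         # 0.8, mostly grounded
print(grounding_score("It was introduced in 1995 by IBM.", context))  # ~0.43, less grounded
```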
