Optimization Methods for AI

(CTU-AI345.AU1)

Skills You’ll Get

1

Regularization Techniques for Generalization

  • Linear Regression
  • Logistic Regression
  • Generalized Linear Models
  • Support Vector Machines
  • Regularization, Lasso, and Ridge Regression
  • Population Risk Minimization
  • Neural Networks
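
A minimal sketch of the ridge vs. lasso comparison covered in this lesson, assuming scikit-learn and NumPy are available; the synthetic data, `alpha` values, and sparsity pattern are illustrative choices, not course code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.5, 0.5]          # only 3 informative features
y = X @ true_w + 0.1 * rng.normal(size=n)

for name, model in [("OLS", LinearRegression()),
                    ("Ridge", Ridge(alpha=1.0)),
                    ("Lasso", Lasso(alpha=0.1))]:
    model.fit(X, y)
    nonzero = np.sum(np.abs(model.coef_) > 1e-6)
    print(f"{name:6s}  nonzero coefficients: {nonzero:2d}  "
          f"training R^2: {model.score(X, y):.3f}")
```

The lasso's l1 penalty drives most coefficients exactly to zero, while ridge only shrinks them, which is the generalization trade-off this lesson examines.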
2

Gradient Descent and Its Variants

  • Subgradient Descent
  • Mirror Descent
  • Accelerated Gradient Descent
  • Game Interpretation for Accelerated Gradient Descent
  • Smoothing Scheme for Nonsmooth Problems
  • Primal–Dual Method for Saddle-Point Optimization
  • Alternating Direction Method of Multipliers
  • Mirror-Prox Method for Variational Inequalities
  • Accelerated Level Method
  • Stochastic Mirror Descent
  • Stochastic Accelerated Gradient Descent
  • Stochastic Convex–Concave Saddle Point Problems
  • Stochastic Accelerated Primal–Dual Method
  • Stochastic Accelerated Mirror-Prox Method
  • Stochastic Block Mirror Descent Method
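
A minimal sketch of mirror descent with the entropy mirror map (exponentiated gradient) on the probability simplex, one of the variants in this lesson. The quadratic objective, step size, and iteration count are illustrative assumptions, not course code.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.normal(size=(d, d))
Q = A.T @ A + np.eye(d)                 # positive definite quadratic
b = rng.normal(size=d)

def f(x):                               # objective over the simplex
    return 0.5 * x @ Q @ x + b @ x

def grad(x):
    return Q @ x + b

x = np.full(d, 1.0 / d)                 # start at the simplex center
eta = 0.1                               # fixed step size (illustrative)
for t in range(200):
    g = grad(x)
    x = x * np.exp(-eta * g)            # entropic (multiplicative) update
    x /= x.sum()                        # renormalize onto the simplex
print("mirror descent objective:", f(x))
```

Replacing the entropy mirror map with the squared Euclidean norm recovers projected gradient descent, which is the sense in which mirror descent generalizes the basic method.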
3

Convergence Analysis of Optimization Algorithms

  • Convex Sets
  • Convex Functions
  • Lagrange Duality
  • Legendre–Fenchel Conjugate Duality
  • Unconstrained Nonconvex Stochastic Optimization
  • Unconstrained Nonconvex Stochastic Optimization Part B
  • Nonconvex Stochastic Composite Optimization
  • Nonconvex Stochastic Block Mirror Descent
  • Nonconvex Stochastic Accelerated Gradient Descent
  • Nonconvex Variance-Reduced Mirror Descent
  • Randomized Accelerated Proximal-Point Methods
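
A minimal numerical sketch of the Legendre–Fenchel conjugate f*(y) = sup_x (xy - f(x)) from this lesson, approximated on a grid for f(x) = 0.5x^2, whose conjugate is again 0.5y^2. The grid and the test function are illustrative assumptions.

```python
import numpy as np

xs = np.linspace(-5.0, 5.0, 2001)       # grid over which the sup is taken
f = 0.5 * xs**2

def conjugate(ys):
    # f*(y) approximated by a max over the grid of x values
    return np.array([np.max(y * xs - f) for y in ys])

ys = np.linspace(-2.0, 2.0, 9)
print("y        f*(y)     0.5*y^2")
for y, fy in zip(ys, conjugate(ys)):
    print(f"{y:+.2f}   {fy:7.4f}   {0.5 * y * y:7.4f}")
```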
4

Convex vs. Non-Convex Optimization in AI

  • Random Primal–Dual Gradient Method
  • Random Primal–Dual Gradient Method Part B
  • Random Gradient Extrapolation Method
  • Random Gradient Extrapolation Method Part B
  • Variance-Reduced Mirror Descent
  • Variance-Reduced Accelerated Gradient Descent
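
A minimal sketch of a variance-reduced stochastic gradient loop (SVRG-style) on a finite-sum least-squares problem, illustrating the variance-reduction idea behind the methods in this lesson. The Euclidean (identity mirror map) setting, step size, and epoch length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def full_grad(x):
    return A.T @ (A @ x - b) / n

def comp_grad(i, x):                    # gradient of the i-th component
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
eta = 0.01
for epoch in range(30):
    x_snap = x.copy()                   # snapshot point
    mu = full_grad(x_snap)              # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced gradient estimate
        g = comp_grad(i, x) - comp_grad(i, x_snap) + mu
        x = x - eta * g
print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```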
5

Advanced Gradient-Based Optimization

  • Conditional Gradient Method
  • Conditional Gradient Sliding Method
  • Nonconvex Conditional Gradient Method
  • Stochastic Nonconvex Conditional Gradient
  • Stochastic Nonconvex Conditional Gradient Sliding
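
A minimal sketch of the conditional gradient (Frank–Wolfe) method for least squares over an l1-ball, matching the first item in this lesson. The ball radius, step-size rule, and data are illustrative assumptions, not course code.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 80, 30
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
radius = 2.0                            # l1-ball constraint ||x||_1 <= radius

x = np.zeros(d)
for k in range(1, 201):
    g = A.T @ (A @ x - b)               # gradient of 0.5*||Ax - b||^2
    # linear minimization oracle over the l1-ball: a signed vertex
    i = np.argmax(np.abs(g))
    s = np.zeros(d)
    s[i] = -radius * np.sign(g[i])
    gamma = 2.0 / (k + 2)               # classical step-size rule
    x = (1 - gamma) * x + gamma * s
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

The method is projection-free: each iteration only solves a linear problem over the feasible set, which is what the sliding and stochastic variants above build on.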

Lab

1

Regularization Techniques for Generalization

  • Performing Linear Regression Using OLS
  • Performing Logistic Regression for Binary Classification
  • Performing Classification Using SVM
  • Training a Neural Network Using the Adam Optimizer
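
A minimal sketch of the last lab item, training a small network with the Adam optimizer. It assumes PyTorch is installed and uses a synthetic binary-classification batch; the architecture, learning rate, and data are illustrative, not the lab's actual code.

```python
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 4)
y = (X.sum(dim=1, keepdim=True) > 0).float()   # synthetic binary labels

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)        # forward pass and loss
    loss.backward()                    # backpropagate gradients
    optimizer.step()                   # Adam parameter update
print("final training loss:", loss.item())
```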
2

Gradient Descent and Its Variants

  • Comparing the Convergence of Optimizers on a Loss Landscape
  • Applying SMD on a Convex Function
  • Implementing the SAGD Algorithm
  • Optimizing Stochastic Convex–Concave Saddle Points
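
A minimal sketch in the spirit of the saddle-point lab above: stochastic gradient descent-ascent on the convex-concave function f(x, y) = 0.5||x||^2 + x^T A y - 0.5||y||^2, whose unique saddle point is (0, 0). The noise level, step size, and problem data are illustrative assumptions (stochastic gradients are simulated by adding Gaussian noise).

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5
A = rng.normal(size=(d, d))
x, y = rng.normal(size=d), rng.normal(size=d)
eta, sigma = 0.05, 0.1

for t in range(2000):
    gx = x + A @ y + sigma * rng.normal(size=d)     # noisy grad_x f
    gy = A.T @ x - y + sigma * rng.normal(size=d)   # noisy grad_y f
    x, y = x - eta * gx, y + eta * gy               # descent in x, ascent in y
print("distance to the saddle point (0, 0):",
      np.linalg.norm(x) + np.linalg.norm(y))
```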
3

Convergence Analysis of Optimization Algorithms

  • Exploring and Visualizing Convex Sets Using Python
  • Analyzing and Visualizing Convex Functions with Python
  • Visualizing Legendre–Fenchel Conjugate Duality
  • Improving Model Performance with Regularization
  • Implementing the RPDG Method on Distributed Data
  • Simulating RGE for Multi-Worker Training
  • Solving Convex and Non-Convex Optimization Problems
  • Implementing Nonconvex Stochastic Optimization
  • Comparing Nonconvex Mirror Descent and Accelerated Gradient Descent
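
A minimal sketch related to the nonconvex comparison items above: plain SGD versus a momentum (accelerated-style) variant on a nonconvex stochastic problem, squared loss on sigmoid outputs, which is nonconvex in the weights. The data, step size, batch size, and momentum coefficient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 500, 10
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

y = sigmoid(A @ w_true)                 # noiseless sigmoid targets

def objective(w):
    return np.mean((sigmoid(A @ w) - y) ** 2)

def stoch_grad(w, batch):
    a = A[batch]
    p = sigmoid(a @ w)
    # gradient of the minibatch mean of (p - y)^2 w.r.t. w
    return 2.0 * a.T @ ((p - y[batch]) * p * (1.0 - p)) / len(batch)

for name, beta in [("plain SGD", 0.0), ("SGD + momentum", 0.9)]:
    w, v = np.zeros(d), np.zeros(d)
    for _ in range(3000):
        batch = rng.integers(n, size=16)
        v = beta * v + stoch_grad(w, batch)   # momentum buffer
        w = w - 0.1 * v
    print(f"{name:14s} final objective: {objective(w):.5f}")
```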
4

Advanced Gradient-Based Optimization

  • Implementing the Conditional Gradient Algorithm
  • Implementing the SCG Algorithm
  • Fine-Tuning a Pretrained Model with Advanced Optimizers
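
A minimal sketch of the fine-tuning lab item: freezing a pretrained torchvision ResNet-18 backbone, replacing its classification head, and training only the head with AdamW. It assumes PyTorch and torchvision (0.13 or later for the `weights` argument), downloading the pretrained weights requires network access, and the dummy batch stands in for the lab's real dataset.

```python
import torch
from torch import nn
from torchvision import models

torch.manual_seed(0)
num_classes = 10

# Load a pretrained backbone and freeze all of its parameters.
model = models.resnet18(weights="DEFAULT")
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3, weight_decay=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for a real dataset (illustrative only).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))

model.train()
for step in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```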
