Security of Artificial Intelligence

(CTU-CSS315.AJ1)

Lessons

1. Data Poisoning and Model Integrity

  • Basics of poisoning attacks
  • Staging a simple poisoning attack (sketched after this list)
  • Backdoor poisoning attacks
  • Hidden-trigger backdoor attacks
  • Clean-label attacks
  • Advanced poisoning attacks
  • Mitigations and defenses
  • Traditional supply chain risks and AI
  • AI supply chain risks
  • Data poisoning
  • AI/ML SBOMs
  • Poisoning embeddings in RAG
  • Poisoning attacks on fine-tuning LLMs
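A minimal sketch of the simple poisoning attack referenced above, assuming a scikit-learn setup: the attacker flips a fraction of training labels to degrade the victim model. The synthetic dataset, the logistic-regression victim, and the 10% flip rate are all illustrative choices, not the course's prescribed lab setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Attacker controls part of the data pipeline and flips 10% of training labels.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_poisoned), size=len(y_poisoned) // 10, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
dirty = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean accuracy: {clean:.3f}  poisoned accuracy: {dirty:.3f}")
```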
2. Defending Against Adversarial Attacks

  • Injecting backdoors using pickle serialization (sketched after this list)
  • Injecting Trojan horses with Keras Lambda layers
  • Trojan horses with custom layers
  • Neural payload injection
  • Attacking edge AI
  • Model hijacking
  • Fundamentals of evasion attacks
  • Perturbations and image evasion attack techniques
  • NLP evasion attacks with BERT using TextAttack
  • Universal Adversarial Perturbations (UAPs)
  • Black-box attacks with transferability
  • Defending against evasion attacks
  • Use of GANs for deepfakes and deepfake detection
  • Using GANs in cyberattacks and offensive security
  • Defenses and mitigations
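To make the pickle serialization risk above concrete: any object whose `__reduce__` returns a callable has that callable executed the moment the blob is unpickled, which is why loading untrusted model files is dangerous. A minimal, deliberately harmless sketch:

```python
import os
import pickle

class BackdooredModel:
    def __reduce__(self):
        # Whatever this returns is executed by pickle.loads(); a real attack
        # would hide weight tampering or a reverse shell here instead of echo.
        return (os.system, ("echo payload executed at load time",))

blob = pickle.dumps(BackdooredModel())
pickle.loads(blob)  # runs the payload -- never unpickle untrusted model files
```

Serialization formats that store only tensors (for example, safetensors) avoid executing code on load, which is one of the mitigations this lesson covers.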
3. Preventing Model Inversion Attacks

  • Understanding model inversion attacks
  • Types of model inversion attacks
  • Example model inversion attack
  • Understanding inference attacks
  • Attribute inference attacks
  • Example attribute inference attack
  • Membership inference attacks (sketched after this list)
  • Privacy-preserving ML and AI
  • Simple data anonymization
  • Advanced anonymization
  • Differential privacy (DP)
  • Federated learning (FL)
  • Split learning
  • Advanced encryption options for privacy-preserving ML
  • Advanced ML encryption techniques in practice
  • Applying privacy-preserving ML techniques
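A minimal sketch of the confidence-thresholding membership inference attack noted above, assuming a model that overfits its training set: the attacker guesses "member" whenever the model is unusually confident. The 0.9 threshold is illustrative; a real attack calibrates it with shadow models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

conf_members = model.predict_proba(X_tr).max(axis=1)      # seen in training
conf_nonmembers = model.predict_proba(X_te).max(axis=1)   # never seen
threshold = 0.9  # illustrative; calibrated via shadow models in practice
tpr = (conf_members > threshold).mean()
fpr = (conf_nonmembers > threshold).mean()
print(f"member hit rate {tpr:.2f} vs non-member false-alarm rate {fpr:.2f}")
```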
4. Research Methods in AI Security

  • Secure by design AI
  • Building our threat library (sketched after this list)
  • Industry AI threat taxonomies
  • AI threat taxonomy mapping
  • Threat modeling for AI
  • Threat modeling in action
  • Enhanced FoodieAI threat model
  • Risk assessment and prioritization
  • Security design and implementation
  • Testing and verification
  • Shifting left – embedding security into the AI life cycle
  • Live operations
  • Beyond security – Trustworthy AI
  • Enterprise security AI challenges
  • Foundations of enterprise AI security
  • Protecting AI with enterprise security
  • Operational AI security
  • Iterative enterprise security
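One way to make the threat-library idea above concrete: each entry maps an AI-specific threat to an industry taxonomy ID and candidate mitigations, and is later enriched during risk assessment. The dataclass shape and the MITRE ATLAS-style technique ID are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatEntry:
    """One row of a minimal AI threat library (schema is illustrative)."""
    name: str
    taxonomy_id: str          # e.g. a MITRE ATLAS-style technique ID (assumed)
    affected_assets: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    risk: str = "unassessed"  # set during risk assessment and prioritization

library = [
    ThreatEntry(
        name="Training data poisoning",
        taxonomy_id="AML.T0020",  # illustrative ATLAS-style ID
        affected_assets=["training pipeline", "data lake"],
        mitigations=["data provenance checks", "label anomaly detection"],
    ),
]
```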
5. Secure Deployment of AI Systems

  • Understanding privacy attacks
  • Stealing models with model extraction attacks (sketched after this list)
  • Defenses and mitigations
  • The MLSecOps imperative
  • Toward an MLSecOps 2.0 framework
  • Building a primary MLSecOps platform
  • MLSecOps in action
  • Integrating MLSecOps with LLMOps
  • Advanced MLSecOps with SBOMs
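A minimal sketch of the model extraction attack referenced above: the attacker treats the deployed model as a label oracle, harvests its predictions on synthetic queries, and trains a surrogate on the responses. The victim model, surrogate, and query budget are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
victim = GradientBoostingClassifier(random_state=0).fit(X, y)  # "deployed" API

# Attacker only needs query access: synthesize inputs, harvest predictions.
rng = np.random.default_rng(0)
queries = rng.normal(size=(2000, 10))           # query budget is illustrative
stolen_labels = victim.predict(queries)

surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of inputs")
```

Rate limiting, query monitoring, and output perturbation, covered under defenses and mitigations, all target exactly this query-harvesting loop.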

Lab

1. Data Poisoning and Model Integrity

  • Demonstrating a Simple Data Poisoning Attack
  • Demonstrating a Backdoor Data Poisoning Attack
  • Simulating and Detecting a Data Poisoning Attack
2. Defending Against Adversarial Attacks

  • Exploiting Pickle Serialization Vulnerability
  • Crafting a Neural Payload Attack
  • Performing a Black-Box Adversarial Attack
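For the black-box lab above, a crude query-only baseline: sample bounded random perturbations until the victim's predicted label flips. Here `predict` stands for any label oracle, and the epsilon and query budget are illustrative.

```python
import numpy as np

def random_blackbox_attack(predict, x, true_label, eps=0.1, budget=500, seed=0):
    """Query-only evasion: perturb within an L-infinity ball until the label flips."""
    rng = np.random.default_rng(seed)
    for _ in range(budget):
        x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        if predict(x_adv) != true_label:
            return x_adv  # adversarial example found
    return None  # attack failed within the query budget
```

Transferability-based attacks, training a local surrogate (as in the extraction sketch earlier) and attacking it with white-box methods, are usually far more query-efficient than this random search.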
3. Preventing Model Inversion Attacks

  • Performing a Model Inversion Attack
  • Performing an Attribute Inference Attack on the CIFAR-10 CNN Model
  • Implementing Image Anonymization Techniques
  • Building a Basic Chat LLM Application
  • Implementing DP in Model Training
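A minimal sketch of the DP-in-training lab above: the core of DP-SGD is clipping each example's gradient and adding calibrated Gaussian noise before the update. This NumPy logistic-regression version is illustrative and omits the (epsilon, delta) privacy accounting a real implementation needs.

```python
import numpy as np

def dp_sgd_logreg(X, y, epochs=50, lr=0.5, clip=1.0, noise_mult=1.0, seed=0):
    """Logistic regression with per-example gradient clipping plus Gaussian
    noise (the mechanism at the heart of DP-SGD; privacy accounting omitted)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grads = (p - y)[:, None] * X                        # per-example gradients
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)       # clip to norm <= clip
        noise = rng.normal(0.0, noise_mult * clip, size=d)  # calibrated noise
        w -= lr * (grads.sum(axis=0) + noise) / n
    return w
```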
4. Research Methods in AI Security

  • Understanding Secure Design, Threats, and Trustworthy AI
  • Strengthening Enterprise AI Security Maturity
5. Secure Deployment of AI Systems

  • Performing a Model Extraction Attack
