Course OSCP for AI: The GenAI Security Sandbox

protectaccount

protectaccount

Advanced Member
Joined
December 27, 2025
Messages
354
Reaction score
277
Points
63
photo-2026-01-21-07-44-29.jpg

Build a comprehensive understanding of AI security by constructing a Vulnerable LLM Cyber Range. Large Language Models are increasingly integrated into various systems, from customer-facing chatbots to critical infrastructure. This hands-on course transitions from theoretical AI safety to practical security testing.

You will develop a functional GenAI Security Lab using Python, Streamlit, and local LLMs such as Ollama, Llama 3, and Phi-3. By assuming the roles of both the Attacker (Red Team) and the Defender (Blue Team), you will learn how to identify vulnerabilities, execute exploits, and implement code-level fixes.

What You Will Build and Test:

The course features a modular cyber range with over 15 live labs covering the OWASP Top 10 for LLMs. Key topics include:

Prompt Injection: Learn how to bypass chatbot system instructions and safety constraints.

Remote Code Execution (RCE): Explore how LLMs can be manipulated into executing shell commands on a host
server.

Indirect Injection: Understand how external data sources, such as resumes or emails, can compromise the AI models processing them.

RAG Data Poisoning: Study methods to corrupt corporate knowledge bases to influence AI output.
Model Denial of Service: Identify ways to trap autonomous agents in loops or force unauthorized resource consumption.

Training Data Poisoning: Examine how hidden triggers can be planted within a model’s training set.

Target Audience:

Penetration Testers: Professionals looking to expand their skill set into Generative AI security assessments.

Developers: Software engineers focused on building secure, production-grade LLM applications.

Security Enthusiasts: Individuals interested in running advanced AI hacking labs on local hardware, including resource-efficient setups like an Intel NUC.

Requirements:

A basic understanding of the Python programming language.
No dedicated GPU is required, as the labs are optimized for CPU-based local models.
By the conclusion of this course, you will have developed the technical skills, payloads, and practical experience necessary to exploit and patch AI vulnerabilities in professional environments.


To see this hidden content, you must reply and react with one of the following reactions : Like Like
 
  • Like
Reactions: anarov989 and HackAlice
A

anarov989

Active Member
Joined
December 14, 2025
Messages
61
Reaction score
1
Points
6
you are the best
 
  • Tags
    ai course genai oscp penetration testers: rag data poisoning security
  • Top