protectaccount
Advanced Member
- Joined
- December 27, 2025
- Messages
- 354
- Reaction score
- 276
- Points
- 63
- Thread Author
- #1
You will develop a functional GenAI Security Lab using Python, Streamlit, and local LLMs such as Ollama, Llama 3, and Phi-3. By assuming the roles of both the Attacker (Red Team) and the Defender (Blue Team), you will learn how to identify vulnerabilities, execute exploits, and implement code-level fixes.
What You Will Build and Test:
The course features a modular cyber range with over 15 live labs covering the OWASP Top 10 for LLMs. Key topics include:
Prompt Injection: Learn how to bypass chatbot system instructions and safety constraints.
Remote Code Execution (RCE): Explore how LLMs can be manipulated into executing shell commands on a host
server.
Indirect Injection: Understand how external data sources, such as resumes or emails, can compromise the AI models processing them.
RAG Data Poisoning: Study methods to corrupt corporate knowledge bases to influence AI output.
Model Denial of Service: Identify ways to trap autonomous agents in loops or force unauthorized resource consumption.
Training Data Poisoning: Examine how hidden triggers can be planted within a model’s training set.
Target Audience:
Penetration Testers: Professionals looking to expand their skill set into Generative AI security assessments.
Developers: Software engineers focused on building secure, production-grade LLM applications.
Security Enthusiasts: Individuals interested in running advanced AI hacking labs on local hardware, including resource-efficient setups like an Intel NUC.
Requirements:
A basic understanding of the Python programming language.
No dedicated GPU is required, as the labs are optimized for CPU-based local models.
By the conclusion of this course, you will have developed the technical skills, payloads, and practical experience necessary to exploit and patch AI vulnerabilities in professional environments.
To see this hidden content, you must reply and react with one of the following reactions :
Like