llm

  1. protectaccount

    A Deep Dive into LLM Red Teaming

    Learn prompt injection, jailbreak tactics, indirect attacks, and LLM vulnerability testing from beginner to advanced. What you’ll learn Identify and exploit common LLM vulnerabilities like prompt injection and jailbreaks. Design and execute red teaming scenarios to test AI model behavior...
  2. protectaccount

    LLM Prompt Injection: Attacks and Defenses

    Integrating LLMs into an application can enhance productivity, but without security considerations, there are risks. This course teaches key practices for implementing LLMs securely and demonstrates how to test those implementations for weaknesses. What you’ll learn: LLMs need to be...
  3. M

    A Local GPT

    I'm running a local Llama LLM and was wondering if I could get some good insight as to where I can find dark datasets. I want to fine-tune the model as a mini fraud GPT.