JAILBREAKING on LLMs: Attacks on rule settings and prompt filters

On 29 October 2025, Mirko Ross will provide insights into current attack vectors on large language models (LLMs) during the online deep dive by Minds Mastering Machines and demonstrate how we can effectively protect ourselves against them.

Chatbots and AI agents are actually protected from unethical or illegal actions by rules and prompt filters. However, with the right technical know-how, these protective measures can be attacked and circumvented.

In the presentation Jailbreaking in LLMs, you will learn:

  • which jailbreaking methods are currently in use
  • why such attacks pose a real risk to companies, and
  • which protective measures are useful for making AI systems more resilient to abuse.

👉 Register now and join us: https://www.m3-konferenz.de/llm.php#programm

Minds Masterin Machines - LLMs in Business