Mirko Ross discusses jailbreak attacks on GenAI in an interview with heise online.
- How secure is the Model Context Protocol for AI systems and agents?
- What cybersecurity considerations are there?
The Model Context Protocol (MCP), developed in November 2024, enables the simple and fast integration of applications with generative AI.
While MCP is practical, it comes with numerous security vulnerabilities. Jailbreak attacks can be carried out in various ways, such as through compromised libraries, manipulated AI-generated source code, or hidden prompts in documents.
Real protection requires technical measures and very careful use. Mirko Ross, who has been working on AI security for a long time, highlights the main vulnerabilities of the MCP and provides helpful tips on how to defend against them in an interview with Wolf Hosbach on heise online.
Read the full interview to get an overview of the most common security vulnerabilities:


Konrad Buck
Head of Press and Media Relations
Background & Expert Access for Media
- Product & technology insights – technical context, solution architecture, and real-world use cases for professional and trade media
- Expert commentary & background talks – our CEO is available as an expert source on current cybersecurity developments, threat landscapes, and the impact of AI on security and regulation
I speak openly, fact-based, and without PR spin. I am a former IT journalist with decades of experience in the IT and cybersecurity space, familiar with the highs and lows of the industry. Off-the-record discussions are possible upon request.





