Attacks on LLMs through persistent backdoors
Large language models (LLMs) enable attackers to easily manipulate the supply chain by embedding backdoors during training that trigger malicious code and remain persistent. This puts all AI systems based on them at risk, requiring a gold standard for security, transparency, and trust.
Cybersecurity expert Mirko Ross analyzes this risk in detail in the Heise article.


Konrad Buck
Head of Press and Media Relations
Background & Expert Access for Media
I provide journalists with access to in-depth background information beyond our public materials, including:
- Product & technology insights – technical context, solution architecture, and real-world use cases for professional and trade media
- Expert commentary & background talks – our CEO is available as an expert source on current cybersecurity developments, threat landscapes, and the impact of AI on security and regulation
Media contact
I speak openly, fact-based, and without PR spin. I am a former IT journalist with decades of experience in the IT and cybersecurity space, familiar with the highs and lows of the industry. Off-the-record discussions are possible upon request.
I speak openly, fact-based, and without PR spin. I am a former IT journalist with decades of experience in the IT and cybersecurity space, familiar with the highs and lows of the industry. Off-the-record discussions are possible upon request.





