Prompt Injection: How Cyberattackers Exploit AI Systems
In a talk on Update Wirtschaft today, Mirko Ross discussed with Bettina Seidl how attackers manipulate AI models by embedding hidden instructions in websites, email footers, or PDFs. These “prompt injections” trick AI into performing unintended or even harmful actions — from generating misleading outputs to causing data leaks.
The issue:
AI follows what it interprets as a command, even when it’s cleverly disguised. Current protection mechanisms are still in their infancy, since traditional IT security strategies often fail when applied to AI models.
The key now is prevention.
Never grant AI applications unrestricted access rights, and limit their use to verified, trustworthy platforms. Companies should establish clear policies for secure AI usage — to avoid discovering too late that stolen data has already surfaced on the dark web.
https://www.ardmediathek.de/video/update-wirtschaft/update-wirtschaft-vom-31-10-2025/tagesschau24/


Konrad Buck
Head of Press and Media Relations
Background & Expert Access for Media
- Product & technology insights – technical context, solution architecture, and real-world use cases for professional and trade media
- Expert commentary & background talks – our CEO is available as an expert source on current cybersecurity developments, threat landscapes, and the impact of AI on security and regulation
I speak openly, fact-based, and without PR spin. I am a former IT journalist with decades of experience in the IT and cybersecurity space, familiar with the highs and lows of the industry. Off-the-record discussions are possible upon request.





