Prompt Injection: How Cyberattackers Exploit AI Systems

In a talk on Update Wirtschaft today, Mirko Ross discussed with Bettina Seidl how attackers manipulate AI models by embedding hidden instructions in websites, email footers, or PDFs. These “prompt injections” trick AI into performing unintended or even harmful actions — from generating misleading outputs to causing data leaks.

The issue:
AI follows what it interprets as a command, even when it’s cleverly disguised. Current protection mechanisms are still in their infancy, since traditional IT security strategies often fail when applied to AI models.

The key now is prevention.
Never grant AI applications unrestricted access rights, and limit their use to verified, trustworthy platforms. Companies should establish clear policies for secure AI usage — to avoid discovering too late that stolen data has already surfaced on the dark web.

https://www.ardmediathek.de/video/update-wirtschaft/update-wirtschaft-vom-31-10-2025/tagesschau24/

Mirko Ross in ARD on prompt injection