
Effective protection against extraction of training data from industrial AI models.
STUTTGART, 3 June 2025 – In the KI-FOGGER research project, which has been awarded over €2 million in funding, asvin is working with TRUMPF, Aimino, Fraunhofer IPA and DFKI to develop effective steganographic and cryptographic methods for protecting AI models. The aim of the three-year project, which has now been launched by the BMBF, is to protect AI models against the distillation of training data. The effectiveness of the protection methods will be tested directly in test setups in mechanical and plant engineering.
“The financial damage caused by unauthorised data extraction from AI models now runs into the millions,” says asvin CEO Mirko Ross, emphasising the urgency of the research project and giving a direct example: “The Chinese AI provider Deepseek was recently accused of building its own model on training data extracted from its US competitors. Distillation allows companies to train their own models on the knowledge base of their competitors. This saves time and money in collecting and qualifying training data. I consider this to be an unfair practice that leads to a distortion of market shares.”
AI in mechanical and plant engineering is a key factor in the competitiveness of German industry. A great deal of expertise is contained in the training data for AI models. If training data can be extracted from AI models by competitors, the expertise advantage and thus competitive advantages are lost. To retain both, AI models must be protected against extraction. A promising method for this is provided by steganographic techniques, in which meaningful training data is mixed with meaningless data. Only with the appropriate cryptographic key can the user distinguish between meaningful and meaningless data. This makes it impossible to distil training data without the appropriate key.
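For illustration only, the following Python sketch shows one simple way such keyed mixing of meaningful and meaningless data could look. The sample data, the HMAC-based tagging and all names are assumptions made for this example; they are not the methods under development in KI-FOGGER.

```python
# Illustrative sketch (not the KI-FOGGER method): real training samples are mixed
# with decoys, and only the holder of a secret key can tell them apart.
import hashlib
import hmac
import os
import random

SECRET_KEY = os.urandom(32)  # hypothetical key held only by the model owner


def keyed_tag(sample: str, key: bytes) -> str:
    """Deterministic keyed tag; without the key, real and decoy tags look alike."""
    return hmac.new(key, sample.encode(), hashlib.sha256).hexdigest()[:16]


real_samples = ["laser power 4.2 kW, feed 1.8 m/min", "sheet thickness 3 mm, N2 assist gas"]
decoy_samples = ["laser power 9.9 kW, feed 0.1 m/min", "sheet thickness 40 mm, air assist"]

# Published pool: real samples carry keyed tags, decoys carry random tags of the same length.
pool = [(s, keyed_tag(s, SECRET_KEY)) for s in real_samples] + \
       [(s, os.urandom(8).hex()) for s in decoy_samples]
random.shuffle(pool)

# Only the key holder can recover the meaningful subset; to anyone else the pool is just noise.
recovered = [s for s, t in pool if hmac.compare_digest(t, keyed_tag(s, SECRET_KEY))]
print(recovered)
```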
The focus of KI-FOGGER's research is on training AI models: How can training data be artificially blurred in models, and how must cryptographic markers be trained so that the artificial blurring can be neutralised? Researchers at asvin Labs are answering these and other questions in several steps:
- Creating threat analyses for AI systems in industrial plants.
- Developing attack tools for distilling training data from the AI models under investigation.
- Carrying out attacks to distil AI models.
- Developing defence methods against the distillation of training data from AI models, using steganographic methods and cryptographic keys and markers.
In the project, the asvin AI experts are comprehensively investigating steganographic methods for their potential application in the model training process. The result should be that meaningful model outputs can be hidden in a steganographically ‘noisy’ set of results. The results will be made available to the project partners and then tested in an industrial demonstrator at TRUMPF. The focus here is on AI development processes in mechanical and plant engineering. According to Ross, those who take precautions here will be able to capture and maintain new market shares with AI. In addition, the project partners are making their cryptographic and steganographic methods available for peer review in the cryptographic and AI security community.
If you have any questions about AI training methods, steganography and extraction incidents, we would be happy to arrange a background discussion with asvin CEO Mirko Ross.
About asvin:
asvin GmbH, based in Stuttgart, offers solutions and services for cybersecurity analysis with a focus on automotive, IT/OT and critical infrastructures. asvin combines cybersecurity with knowledge graphs and AI-based approaches for risk-by-context management and risk prioritisation. Further information is available at https://asvin.io/
Press contact: