
asvin labs is involved in two research consortia in Germany and Europe working on securing AI in distributed systems. AI plays a crucial role in the traffic safety of tomorrow, particularly in autonomous driving, where AI systems will be rolled out in large numbers on vehicles (at the edge) and operated in the cloud. The complex application architectures that result greatly increase the attack surface. The results from EdgeAI-Trust will enable these systems to be operated securely and efficiently.
Cyber security experts at asvin labs are working with international researchers and developers to find solutions for greater security and trust for AI systems at the network edge. The challenges to be solved are manifold. The security of an AI system must be ensured across the entire supply chain, from model training and model distribution to operation in a vehicle. This is the only way to ensure that the behaviour of an AI-controlled vehicle system in road traffic meets a defined quality standard and that its decisions are traceable in a forensic context.
AI security requires security throughout the entire application lifecycle
asvin labs’ research and development project focuses on verifying the integrity of AI models in the supply chain. The aim is to enable the training status of a model to be compared with the active model on a vehicle.

“In the future, AI models will have a decisive influence on the behaviour of vehicles and thus also on the safety of road users. An AI model is not static; it will change over the course of vehicle use. This also potentially changes the decision-making scope and behaviour of the model over its operating life. As vehicle systems become increasingly complex, it will be extremely important to be able to reconcile changes in AI models and their revisions when controlling a vehicle. This is the only way to detect manipulation by cyber attacks or quality problems during operation.”
Rohit Bohara, Research Director at asvin labs
As part of the project, asvin is developing tools that record and track the integrity of an AI model throughout its lifecycle in the vehicle. To do this, the metrics of a ‘trustworthy AI’ are recorded during the training process and stored cryptographically in an unalterable form, similar to a fingerprint. This fingerprint of an AI model can then be compared at any time with the model installed on a vehicle. Since the fingerprint is cryptographically secured, a manipulated AI model on a vehicle can be identified unambiguously. The time and place of a manipulation, for example the replacement of a model with a malicious one, also become visible and can therefore be remedied.
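The idea can be sketched in a few lines: hash the serialized model weights together with the training metadata into a single digest, then recompute that digest from the model read back off the vehicle. This is a minimal illustration, not asvin’s actual tooling; the metadata fields and the plain SHA-256 scheme are assumptions (a production system would additionally sign the fingerprint so it cannot simply be recomputed by an attacker).

```python
import hashlib
import json

def model_fingerprint(weights: bytes, metadata: dict) -> str:
    """Combine a hash of the serialized weights with the training
    metadata and hash the result into one fingerprint."""
    weights_digest = hashlib.sha256(weights).hexdigest()
    payload = json.dumps({"weights": weights_digest, **metadata}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Fingerprint recorded at training time (metadata fields are illustrative)
reference = model_fingerprint(b"\x01\x02\x03", {"dataset": "fleet-v1", "epochs": 12})

# Later, the model read back from the vehicle is fingerprinted the same way
deployed = model_fingerprint(b"\x01\x02\x03", {"dataset": "fleet-v1", "epochs": 12})
print(deployed == reference)   # True: the deployed model matches training

tampered = model_fingerprint(b"\xff\x02\x03", {"dataset": "fleet-v1", "epochs": 12})
print(tampered == reference)   # False: any change to the weights breaks the match
```

Canonicalizing the metadata with `sort_keys=True` matters: the same training run must always produce byte-identical input to the hash, or legitimate models would fail verification.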
Focus on learning and distributed autonomous AI systems
AI models for learning autonomous systems pose a particular challenge here. The models do not remain static, but continuously learn new traffic situations in the vehicle and in the swarm from data from other vehicles in a manufacturer’s fleet. Integrity must be measured over time and across a swarm of many distributed AI systems and recorded in a cryptographic fingerprint.
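One way to make integrity measurable over time is to chain the fingerprints of successive model revisions, so that a later audit can pinpoint exactly where a vehicle’s model history diverges from the recorded one. The sketch below is a hypothetical illustration of that idea; the revision data and the ledger layout are assumptions, not the project’s actual design.

```python
import hashlib

def chain_revision(prev_digest: str, model_bytes: bytes) -> str:
    """Each revision's digest covers the previous digest plus the new
    weights, so changing any past revision invalidates all later ones."""
    h = hashlib.sha256()
    h.update(prev_digest.encode())
    h.update(model_bytes)
    return h.hexdigest()

def build_ledger(revisions: list[bytes]) -> list[str]:
    """Record a tamper-evident digest for every model revision."""
    ledger = ["genesis"]
    for rev in revisions:
        ledger.append(chain_revision(ledger[-1], rev))
    return ledger

def first_divergence(ledger: list[str], revisions: list[bytes]):
    """Return the index of the first revision whose digest no longer
    matches the recorded ledger, or None if the history is intact."""
    digest = "genesis"
    for i, rev in enumerate(revisions):
        digest = chain_revision(digest, rev)
        if digest != ledger[i + 1]:
            return i
    return None

# Illustrative fleet history: three over-the-air model revisions
ledger = build_ledger([b"weights-v1", b"weights-v2", b"weights-v3"])

# Auditing a vehicle on which the second revision was swapped out
print(first_divergence(ledger, [b"weights-v1", b"tampered", b"weights-v3"]))  # -> 1
```

Because each digest depends on its predecessor, the audit not only detects that manipulation happened but also narrows down *which* revision, and therefore roughly when and where in the fleet, the chain was broken.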

“With EdgeAI-Trust, we are developing a system that will deliver added value, particularly for the forensic analysis of autonomous systems. With the help of cryptographic containers, it will be possible to obtain reliable values for trustworthy system behaviour. These values can then be compared with the system under investigation during testing. This allows deviations to be identified and the type and time of manipulation to be determined. This opens up new possibilities for quality assurance, both for vehicle manufacturers and suppliers, as well as for technical services and authorities in testing procedures.”
Mirko Ross, CEO at asvin
International consortium for AI security in software-defined vehicles (SDV)
With EdgeAI-Trust, AI-based autonomous systems in software-defined vehicles will gain the necessary transparency across their lifecycle.
In EdgeAI-Trust, asvin labs is working with suppliers and research institutions.
Funding
EdgeAI-Trust is supported by the German Federal Ministry of Education and Research and the European Union.
