Assume breach - AI Security

AI makes assume breach the new reality

Malicious AI code evolves too fast for patches to protect critical infrastructure

When AI generates malicious code in a fraction of a second, patches and updates are no longer sufficient to effectively protect our critical infrastructure.


In its latest report, “THE STATE OF IT SECURITY IN GERMANY 2025,” the German Federal Office for Information Security (BSI), Germany’s top cybersecurity authority, concludes:

 “The worse an attack surface is protected, the more likely a successful attack becomes. In contrast, consistent attack surface management – such as restrictive access management, timely updates, or minimizing publicly accessible systems – directly reduces the risk of successful attacks.”

Given the unchecked spread of AI, this statement falls far short of the mark. Attackers are either already inside or in the process of bypassing even the best defenses (assume breach). “Timely updates” will lose relevance as a protective measure in the coming weeks and months, because AI automation shrinks the window between the discovery of a vulnerability and its exploitation to a few minutes or even seconds. This does not mean we should abandon system security maintenance through patches and updates, only that we should harbor no illusions that such practices alone can secure a system.

Why AI forces us to assume that systems are already compromised

The landscape of cybersecurity is changing at an alarming rate due to AI.
In the era before LLMs, writing malicious code for a vulnerability, testing it, and deploying it was a time-consuming, largely manual task performed by highly specialized red teams. Today, powerful AI models can automate and scale much of the manual work involved in pentesting. In particular, the AI-based generation of malicious code – from polymorphic malware to the automated discovery of vulnerabilities – exacerbates a fundamental problem: an ever narrower window of opportunity to close vulnerabilities through updates and patches.

The new cybersecurity reality: AI-generated malicious code as a weapon

Traditional cybersecurity has long relied on signature-based detection: antivirus programs and intrusion detection systems search files and memory for known patterns (“signatures”) of malicious software. Against the new generation of AI-generated threats, this method is insufficient or outright ineffective.

Here are a few examples:


Polymorphic malware on the assembly line thanks to generative AI:

AI models, especially large language models (LLMs), can not only generate code but also mutate it continuously.
Polymorphic malware constantly changes its signature and appearance without losing its core function, which makes it invisible to classic, signature-based security tools. A pattern that was detected yesterday is an unknown variant today. An LLM can mutate malicious code at almost arbitrary speed and churn out new variants with new signatures. As a result, the signature databases of antivirus software and intrusion detection systems are flooded with new variants faster than detections can be trained; by the time updated defenses are trained and delivered, their signature base is already outdated.
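To make this concrete, here is a minimal and deliberately benign Python sketch (the “payload” is an inert placeholder string, not malware): a classic signature is an exact fingerprint of a known sample, so even a trivial, function-preserving mutation yields a variant the signature no longer matches.

```python
import hashlib

# Benign stand-in for a payload; in reality this would be executable code.
payload = b"example-payload: connect(); fetch(); persist();"

# A classic "signature": an exact fingerprint of the known sample.
known_signature = hashlib.sha256(payload).hexdigest()

def signature_match(sample: bytes) -> bool:
    """Signature-based detection: exact fingerprint comparison."""
    return hashlib.sha256(sample).hexdigest() == known_signature

# A trivial "mutation": functionally irrelevant padding changes the bytes.
variant = payload + b" ;; nop ;;"

print(signature_match(payload))  # True  -> the known sample is detected
print(signature_match(variant))  # False -> the new variant slips through
```

Real-world signatures are more sophisticated than plain hashes, but the underlying weakness is the same: they recognize only what has been seen before, and an LLM can produce unseen variants faster than signature updates can be distributed.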

Automated hunt for zero-day vulnerabilities:

LLMs in particular can analyze huge amounts of code and find vulnerabilities based on learned error patterns. As Anthropic’s models have already successfully demonstrated, AI can automate this process and search specifically for zero-day vulnerabilities – and, in doing so, surface security flaws that are still unknown to the developers themselves and for which no patch yet exists. Taken to its logical conclusion, this automation points toward an “automated zero-day attack factory.”
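As a greatly simplified stand-in for LLM-based code analysis, the sketch below scans a code base for fixed insecure patterns. A real AI-assisted hunt generalizes far beyond such hard-coded rules, but the pipeline shape is the same: scan at scale, flag candidates, iterate. The pattern list and the “src” directory are illustrative assumptions.

```python
import re
from pathlib import Path

# Hard-coded "error patterns" as a crude stand-in for what an LLM learns.
VULN_PATTERNS = {
    "strcpy (no bounds check)": re.compile(r"\bstrcpy\s*\("),
    "gets (unbounded read)":    re.compile(r"\bgets\s*\("),
    "sprintf (overflow-prone)": re.compile(r"\bsprintf\s*\("),
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line number, pattern name) for every suspicious line."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in VULN_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# Automation changes the scale: every file, every commit, around the clock.
for source in Path("src").rglob("*.c"):
    for lineno, name in scan_file(source):
        print(f"{source}:{lineno}: possible {name}")
```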

The AI dilemma of zero-day exploits:

By definition, a zero-day exploit is an attack that takes place before a patch or update is available.
This means:

  • No warning: Security teams have no time to prepare.

  • Long incubation period: It can take days, months, or even years for such a vulnerability to be discovered and fixed. During this entire time, the system may already be compromised without anyone knowing.

  • Shrinking exploitation window: AI now dramatically shortens the time between the discovery of a zero-day vulnerability and its exploitation. Where attackers used to need weeks or months to develop a working exploit, AI can now produce one in hours, minutes, or even seconds through sheer “brute force” across dozens of iterations of automatically generated malicious code variants (the toy calculation after this list illustrates the shift).
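The durations below are illustrative assumptions, not measurements; the point is only the order-of-magnitude shift in the defender’s exposure when exploit development collapses from weeks to minutes while the patch cycle stays the same.

```python
# Toy model: defender exposure = patch cycle minus time-to-exploit.
# All durations are illustrative assumptions, expressed in hours.
PATCH_CYCLE_H = 14 * 24  # assumed patch-and-deploy cycle: ~2 weeks

scenarios = {
    "manual exploit development": 6 * 7 * 24,  # ~6 weeks
    "AI-assisted development":    24,          # ~1 day
    "AI-automated generation":    0.05,        # ~3 minutes
}

for name, time_to_exploit_h in scenarios.items():
    # Exposure: how long a working exploit precedes the patch (0 = patch wins).
    exposure_h = max(0.0, PATCH_CYCLE_H - time_to_exploit_h)
    print(f"{name:<28} exploit ready after {time_to_exploit_h:>7.2f} h, "
          f"exposed for {exposure_h:>6.2f} h")
```

Under these assumptions, manual exploit development loses the race against a two-week patch cycle, while AI-automated generation leaves the system exposed for essentially the entire cycle.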

The new security premise must be “assume breach.”

Given this reality, the “patch and update” paradigm of cybersecurity must change. We can no longer assume that our systems are secure just because all patches have been installed.

Instead, we must internalize the principle of “assume breach”: always assume that your systems have already been compromised.

The consequences for cybersecurity:

The focus must shift from signature detection to behavioral analysis. Systems must be able to detect whether processes are behaving atypically, even if the code is unknown. Interestingly, AI is a helpful tool here for defending against attacks by AI.
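As a minimal sketch of behavior-based detection (the telemetry features, values, and thresholds are all illustrative assumptions), one can baseline per-process behavior and flag statistical outliers without knowing the attacking code at all. Here, scikit-learn’s IsolationForest stands in for a production anomaly engine.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-process telemetry:
# [syscalls per second, outbound connections per minute, MB written per minute]
rng = np.random.default_rng(42)
baseline = rng.normal(loc=[120.0, 2.0, 5.0], scale=[15.0, 1.0, 2.0], size=(500, 3))

# Train on normal behavior only - no signature of any attack is required.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

observed = np.array([
    [118.0,  2.0,   4.0],  # consistent with the baseline
    [130.0, 40.0, 250.0],  # a process suddenly exfiltrating data
])

for sample, verdict in zip(observed, detector.predict(observed)):
    label = "ANOMALY" if verdict == -1 else "normal"
    print(f"{sample} -> {label}")
```

The attacking code never appears in this picture; only its behavior does, and behavior is far harder for AI-mutated variants to disguise than their signature.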

In the age of AI, no user, device, or application can be trusted by default. Continuous monitoring and strict micro-segmentation are necessary to limit the spread of an attack in the event of a compromise, thereby increasing the resilience of the infrastructure (a minimal sketch of the segmentation logic follows below). The ability to quickly detect, contain, and recover from an attack will matter more than pure prevention through updates and patches.
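Conceptually, micro-segmentation reduces to a default-deny rule set between workload segments. In the sketch below, segment names, ports, and rules are invented for illustration; the core logic is that anything not explicitly allowed is dropped, so a compromised workload cannot move laterally.

```python
# Default-deny micro-segmentation: traffic is dropped unless an explicit
# (source segment, destination segment, port) rule allows it.
# Segment names, ports, and rules are illustrative assumptions.
ALLOW_RULES = {
    ("web", "app", 8443),  # web tier may call the app tier's API
    ("app", "db", 5432),   # app tier may reach the database
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Zero-trust stance: deny by default, allow only explicit rules."""
    return (src, dst, port) in ALLOW_RULES

print(is_allowed("web", "app", 8443))  # True  - explicitly allowed
print(is_allowed("web", "db", 5432))   # False - lateral movement blocked
print(is_allowed("app", "web", 8443))  # False - rules are directional
```

In production this policy would live in firewalls, service meshes, or network policies rather than application code, but the deny-by-default principle is identical.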

The age of AI-automated malware means that we must abandon any hope of complete protection. Instead, we must invest in infrastructure that is designed for resilience and rapid response. The “enemy” is faster, and we need to adapt our defense strategies accordingly. Therefore, we must constantly assume that the attack has already taken place before we know about it. Again, assume breach must be internalized.


At asvin Labs, our cybersecurity research has demonstrated how AI can generate malicious code for a current vulnerability within seconds and deploy it via AI agents. Managing cybersecurity risks and using analysis tools to plan effective defensive measures is therefore essential.

With Risk-By-Context and our tools for context-optimized cyber threat intelligence (CTI), we are making an important contribution to overcoming the challenges posed by AI-based attacks.