How AI sovereignty security exposes the risks of legacy infrastructures
- Marc Griffith


Summary
European investments in digital sovereignty and AI conceal a concrete problem: legacy systems supporting critical infrastructure remain exposed. A new AI-native vulnerability research framework has identified a critical zero-day (CVE-2026-32746) in GNU Inetutils telnetd, showing that protecting historic infrastructure is crucial for national security and AI sovereignty strategies.
Introduction
AI sovereignty security must start with the ability to identify and fix vulnerabilities in legacy systems that support critical infrastructures. As Europe accelerates investments in artificial intelligence, cloud, and data systems to reduce dependence on external suppliers, there is also a growing need to consider the security of less visible but pervasive technologies that govern energy, transportation, healthcare, and public services.
The problem hidden under the word 'sovereignty'
Many public infrastructures are built on hybrid technology stacks that combine modern components with legacy software and devices that are hard to update. These layers create blind spots that traditional cybersecurity approaches struggle to map and protect in a systematic way.
An AI-native framework for vulnerability research
DREAM's contribution is an AI-native security research framework that uses multiple cooperating agents to analyze source code, binaries, network protocols, and system behavior at scale. The multi-agent approach enables both broad and deep analysis across fragmented, heterogeneous attack surfaces, overcoming the limits of manual analysis conducted by small teams.
How it works, in practice
The framework coordinates specialized modules that simulate, extract, and evaluate network and process behaviors to trace conditions that can be exploited by advanced attackers. This approach enables the automation of dangerous pattern discovery in both modern code and components no longer receiving regular updates.
The multi-agent design enables horizontal analysis of entire infrastructures, integrating outputs from static and dynamic analyses to identify vulnerabilities that would otherwise be invisible.
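DREAM has not published its internals, so the following is only a minimal sketch of the general idea of fusing static and dynamic analysis outputs. In this hypothetical example, findings that both analysis modes report against the same code location receive a corroboration bonus and rise to the top of the triage queue; all names, fields, and weights are illustrative assumptions, not DREAM's actual design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    component: str   # e.g. "telnetd"
    location: str    # function name or code offset
    source: str      # "static" or "dynamic"
    score: float     # analyzer-local confidence, 0..1

def merge_findings(findings):
    """Group findings by (component, location); findings corroborated
    by both static and dynamic agents rank higher."""
    grouped = {}
    for f in findings:
        grouped.setdefault((f.component, f.location), []).append(f)
    ranked = []
    for (component, location), fs in grouped.items():
        sources = {f.source for f in fs}
        base = max(f.score for f in fs)
        # Corroboration bonus when multiple analysis modes agree.
        confidence = min(1.0, base + (0.3 if len(sources) > 1 else 0.0))
        ranked.append((confidence, component, location))
    return sorted(ranked, reverse=True)

# A location flagged by both modes outranks a lone static hit.
static_hit  = Finding("telnetd", "option_negotiation", "static", 0.6)
dynamic_hit = Finding("telnetd", "option_negotiation", "dynamic", 0.5)
lone_hit    = Finding("ftpd", "auth_parse", "static", 0.7)

for conf, comp, loc in merge_findings([static_hit, dynamic_hit, lone_hit]):
    print(f"{conf:.2f}  {comp}:{loc}")
```

The point of the sketch is the ranking step: agreement between independent analysis modes is a cheap, automatable signal for triage across a large estate.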
The demonstration: CVE-2026-32746 in GNU Inetutils telnetd
As proof of the system's potential, DREAM revealed a critical zero-day (CVE-2026-32746) in GNU Inetutils telnetd, rated 9.8 in severity and allowing unauthenticated remote code execution. The flaw is triggered during the Telnet handshake, before authentication, so a single connection can compromise a system in the many deployments where telnetd runs with elevated privileges.
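The technical details of CVE-2026-32746 are not public here, but the pre-authentication nature of the bug follows from the protocol itself: a Telnet server negotiates options via IAC command sequences (RFC 854/855) before it ever presents a login prompt. The minimal parser below is illustrative only, not related to the actual flaw; it shows the kind of attacker-controlled input telnetd must already process during that unauthenticated handshake:

```python
# Telnet option negotiation (RFC 854/855): each command is IAC (0xFF),
# then a verb (DO/DONT/WILL/WONT), then an option byte. All of this is
# handled by telnetd before any login prompt, which is why a
# handshake-stage flaw needs no credentials.
IAC, DONT, DO, WONT, WILL = 255, 254, 253, 252, 251
VERBS = {DO: "DO", DONT: "DONT", WILL: "WILL", WONT: "WONT"}

def parse_negotiation(data: bytes):
    """Return the (verb, option) pairs found in a raw Telnet stream."""
    out, i = [], 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data) and data[i + 1] in VERBS:
            out.append((VERBS[data[i + 1]], data[i + 2]))
            i += 3  # consume IAC + verb + option
        else:
            i += 1  # ordinary data byte
    return out

# A typical server opening: DO TERMINAL-TYPE (24), WILL ECHO (1).
handshake = bytes([IAC, DO, 24, IAC, WILL, 1])
print(parse_negotiation(handshake))
```

Every byte in that stream arrives from an unauthenticated peer, which is why handshake-stage parsing code is such a high-value target for vulnerability research.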
Why this matters for technology builders
Telnet is an aging protocol, yet it remains present in network equipment, embedded systems, and operational technology; its continued exposure shows how even obsolete services can pose a systemic risk. Public measurements show hundreds of thousands of Telnet services still reachable from the Internet, many of them difficult to update or replace quickly.
A single flaw in a legacy service can compromise entire operational chains; for digital sovereignty, this represents a strategic weak point.
Implications for technological sovereignty
AI sovereignty plans will not work if the underlying infrastructure remains vulnerable and cannot be updated. Even when the EU and national governments invest in controlled platforms and trusted environments for AI, these systems will operate on infrastructures that often include legacy components and neglected services.
What this means for policymakers and businesses
The priority must be a strategy that combines AI capability development with operational planning for inventory, monitoring, and patching of legacy components. Without targeted actions, the attack surface remains broad and efforts to build sovereign AI ecosystems risk being undermined.
Key quotes from researchers
"This framework allows us to investigate complex technologies in a fundamentally different way," said Kfir Fleischer, VP of Research & Product at DREAM. Fleischer emphasized that much of government infrastructure still relies on legacy and neglected technologies, despite being a target for advanced actors.
"We're entering a world where AI systems will become part of the national infrastructure," added Amir Becker, Chief Business and Strategy Officer of DREAM. Becker highlighted that understanding vulnerabilities through these systems will be one of the dominant security challenges of the next decade.
The critical debate
The discussion around AI sovereignty security cannot be limited to building national models and data centers: a holistic approach is required, one that incorporates governance, maintenance resources, and mitigation plans for legacy components. On one hand, digital sovereignty initiatives are inevitable and strategic: controlling critical technology stacks offers independence, reduces exposure to external vendors, and improves compliance guarantees. On the other hand, focusing on new stacks may divert resources from managing the most fragile elements of the IT ecosystem, such as embedded devices, outdated firmware, and obsolete protocols.
An exclusively technocratic approach that centers only on building national AI infrastructures risks creating a false sense of security unless concrete lifecycle management programs for legacy components are put in place: certified inventories, temporary mitigation measures (compensating controls), network segmentation, and automated, large-scale penetration testing.
There is also a practical funding dilemma: states can fund AI centers, but the day-to-day maintenance of millions of devices across critical sectors often falls to local authorities or third parties with limited budgets. This operational gap opens opportunities for specialized private services and public-private partnerships, but it imposes stringent accountability and transparency requirements.
Finally, dependence on AI automation for vulnerability discovery raises ethical and legal questions about disclosure and responsibility when exploits become public: how should the need to fix flaws quickly be balanced against the risk of exposing sensitive information? In short, the ideal strategy combines investment in sovereign AI capabilities with practical hardening and distributed governance programs, engaging public and private actors in a shared, fundable roadmap.
Practical actions for founders and CTOs
Build comprehensive inventories of your dependencies, including legacy components and obsolete protocols, and integrate automated attack-surface tests into development and deployment pipelines. Concrete steps: map exposed services (including those that are hard to update), apply network segmentation, introduce compensating controls, and define update and rollback plans for embedded systems.
Quick checklist
1) Identify exposed legacy services.
2) Prioritize patching based on operational risk.
3) Automate scans and threat-hunting.
4) Define disclosure and recovery policies.
These activities reduce exposure windows and improve the overall resilience of the corporate IT ecosystem.
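Step 2 of the checklist, prioritizing patching by operational risk, can be made concrete with even a crude scoring function. The fields and weights below are illustrative assumptions, not any standard scheme; the point is simply that an internet-exposed, privileged legacy service with no patch available should sort to the top of the queue:

```python
# Hypothetical risk scoring for inventory triage. Weights are
# illustrative assumptions, not a standardized metric.
LEGACY_PROTOCOLS = {"telnet", "ftp", "rsh", "snmpv1"}

def operational_risk(service: dict) -> int:
    score = 0
    if service["protocol"] in LEGACY_PROTOCOLS:
        score += 3   # obsolete protocol, often hard to replace
    if service["internet_exposed"]:
        score += 4   # reachable by any remote attacker
    if service["privileged"]:
        score += 2   # compromise yields elevated privileges
    if not service["patch_available"]:
        score += 1   # only compensating controls remain
    return score

inventory = [
    {"name": "edge-telnetd", "protocol": "telnet",
     "internet_exposed": True, "privileged": True, "patch_available": False},
    {"name": "intranet-http", "protocol": "http",
     "internet_exposed": False, "privileged": False, "patch_available": True},
]

for svc in sorted(inventory, key=operational_risk, reverse=True):
    print(svc["name"], operational_risk(svc))
```

In practice the inputs would come from automated discovery (checklist step 3) rather than a hand-written list, and the weights would be tuned to the organization's own threat model.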
Conclusion: putting defense at the heart of sovereignty
AI sovereignty security requires that Europe’s digital sovereignty strategy include operational measures to update, monitor, and mitigate legacy components in critical infrastructures. Investing in AI-native capabilities for vulnerability research is essential, but it must be paired with governance policies, funding, and public-private collaboration to turn discovery into effective action.
For those building AI products and services, the message is clear: integrating legacy management from the earliest design stages is a competitive lever and a security necessity.




