Overview
Anthropic’s decision to restrict access to its latest model, Claude Mythos, has exposed a profound structural weakness in Europe’s AI safety apparatus. The company is limiting the powerful system to a select group of technology partners, citing the unprecedented cybersecurity risks it could pose, above all its potential to enable large-scale, AI-powered cyberattacks. This move creates an immediate governance crisis: the very tools needed to assess frontier AI risk are being withheld from the regulatory bodies meant to oversee them.
The scope of the exclusion is stark. Under a program called "Project Glasswing," Anthropic granted preview access to 12 major US tech firms, including Apple, Microsoft, and Amazon. An additional 40 organizations received access, but European regulatory agencies remain largely outside this inner circle. While the UK is already conducting advanced testing, Europe’s key cyber agencies report minimal visibility into the model's capabilities, leaving national and continental security planning dangerously uninformed.
The situation is more than a regulatory oversight issue. It is a fundamental lack of technical access: European authorities cannot properly assess the risks or prioritize defenses against a capability that has moved beyond academic concern and become an immediate national security threat.
The Governance Gap Between Tech and Regulators
The disparity in access between the private sector and European government bodies is glaring. While the German BSI has initiated talks with Anthropic regarding Mythos, these discussions have provided only theoretical insight into the model’s functionality, falling short of direct testing access. The EU’s cybersecurity agency, ENISA, has declined to comment on its engagement with Anthropic, and the EU Commission’s AI Office maintains a dialogue through the Code of Practice, but whether this dialogue includes technical access to Mythos remains unconfirmed.
This limited engagement contrasts sharply with the UK's position. The British AI Security Institute (AISI) recently conducted testing on Mythos Preview and published an assessment detailing its significant leap over previous frontier models. The AISI’s proactive testing and subsequent findings demonstrate a model of rapid, actionable risk assessment that Europe currently lacks.
The inability to test the model directly means that European regulators are forced to rely on corporate assurances rather than empirical data. As BSI chief Claudia Plattner noted, the question of whether such powerful tools will eventually be available on the open market is an urgent one, highlighting the immediate implications for national and European sovereignty.

The Sovereignty Challenge of Frontier AI
The control of models like Mythos by private entities raises deep questions about technological sovereignty. AI pioneer Yoshua Bengio articulated the concern that tech companies, rather than governmental bodies, are dictating the pace and parameters of risk management. This structure centralizes critical security capabilities within the hands of a few private actors, creating a dependency that is inherently risky for national security.
Former European Parliament member Marietje Schaake, who helped shape the EU Code of Practice, echoed this concern, stressing that models with far-reaching societal impacts should not be controlled by a single private company. She argued that the current moment demands an immediate agreement on robust disclosure rules and comprehensive oversight mechanisms that transcend corporate control.
The lack of a unified European plan for securing access to frontier AI capabilities is particularly alarming. Daniel Privitera, founder of the Berlin-based AI nonprofit KIRA, pointed out that the current landscape lacks a defined pathway for governments or third parties to review these capabilities, leaving the continent technologically exposed.
The Race for Technical Visibility
The technical gap is the most immediate threat. Without solid, hands-on details, regulatory bodies are relegated to a purely theoretical risk assessment. The potential for large-scale, AI-powered cyberattacks is no longer a future threat; it is a capability being developed and restricted by private enterprise.
The AISI’s testing of Mythos suggests that the model represents a substantial advancement in cyber capabilities. While the AISI cautioned that it could not definitively prove the model’s ability to attack highly defended systems without deeper access, the leap in capability is clear. This technical acceleration demands a corresponding acceleration in regulatory and governmental technical capacity.
The current structure forces Europe into a reactive posture, attempting to catch up to a technological curve that is being set and controlled by the companies developing the models. The priority must shift from merely drafting guidelines to establishing verifiable, technical access points for independent security review.