The European Union unit tasked with scrutinizing Anthropic’s new elite hacking AI model lacks access to the technology and the experts needed to stave off a looming cybersecurity crisis.
AI safety advocates and people in regular discussions with the EU’s AI Office warned that the division urgently needs extra staff and sits too low in the EU executive’s hierarchy to respond forcefully to a major crisis.
“An appropriately resourced regulator is needed to address” the threats of new hacking AI, a coalition of eight AI safety groups said in a letter to the European Commission first reported by POLITICO on Thursday.
The issues undermine the EU’s ambitions to be the world’s top tech regulator. The bloc’s officials have so far failed to gain full access to Anthropic’s Mythos model, which the firm is shielding from the public out of cybersecurity concerns and sharing it only with a limited group of tech firms and unnamed organizations.
“The Commission is currently not one of the 40 unnamed organizations that have access,” Commission spokesperson Thomas Regnier said Thursday.
Mythos is feared to outperform humans at finding and exploiting software vulnerabilities, raising concerns that the model, once public, would let malicious hackers break into all sorts of critical IT infrastructure.
Anthropic said it had worked with the U.S. government to prevent a major cyber crisis. The United Kingdom’s AI Security Institute also gained access and released a detailed technical analysis within a week of Anthropic’s announcement, which was widely praised for shedding light on the risks of the model.
The U.K. achievement shows “it’s possible,” said Jimmy Farrell, EU AI policy lead at think tank Pour Demain, one of the signatories of the joint letter. “Europe can do the same, and Mythos makes this all the more urgent.”
The EU’s AI Office has struggled to step up. The bureau is less than two years old and has around 140 staffers. Its safety unit, which is primarily responsible for handling the most complex and advanced models, has 36 staff members.
The EU’s response to these threats hinges on its ability to hire knowledgeable people or to foster ties with leading AI companies — something the U.K. has done a better job of than Brussels. The EU executive has to compete with exorbitant industry salaries and more competitive AI hubs, including London.
It’s “clearly visible” that the U.K.’s AI Security Institute has the capability to “push at the very frontier of what’s possible in the scientific field,” said Stanislav Fort, chief scientific officer at European AI security firm Aisle.
“I think this capability is not present right now at the EU AI Office, and it would be amazing to have,” he added.
According to Regnier, the Commission spokesperson, “the AI Office has built state-of-the-art model evaluation capacity.”
But critics argued that the AI Office, and especially its safety unit, needs more staff, particularly gifted coders, as the number of advanced AI models is set to explode and the models themselves become ever more capable.
Brando Benifei, an Italian social-democrat lawmaker with the European Parliament, called on the Commission to give “the AI Office more staff, deeper technical expertise, and a real budget to match the scale and speed of frontier AI.”
The eight AI safety groups pushed for the safety unit to grow to 160 members by 2030, matching the current size of the teams working on platform enforcement.
Too ‘detached’ from power
Another challenge is the AI Office’s location within the Commission’s tech department.
Officials must navigate multiple layers of management before they can reach the political operatives. That’s not ideal, said one person who works closely with the AI Office’s safety unit, and who was granted anonymity because they still meet regularly with the Office.
The safety unit has “excellent people,” the person argued, but they are “too detached from where the power lies” and have to go “through different steps of hierarchy.”
That differs from the U.K., where British Prime Minister Keir Starmer has his own artificial intelligence adviser: Jade Leung, a former OpenAI lobbyist.
The U.K.’s AI Security Institute may also have an advantage over the EU’s AI Office because it doesn’t have the power to issue fines and other regulatory punishments, which could scare off companies.
The AI Office, together with national authorities, is in charge of enforcing the EU’s AI Act, which foresees fines of up to €35 million.
“Having a part of the state that isn’t regulatory to engage [with] can be very useful,” said Ciaran Martin, former head of the U.K.’s cyber agency.
The Commission has already launched a push to boost the capacity of the AI Office.
Under the EU’s efforts to simplify its AI law, launched in November last year, the office would need to hire 38 additional staff members. But those efforts are still being negotiated.
In the short term the AI Office plans to hire half a dozen more staff by the end of June, the “vast majority” of whom will be assigned to units responsible for safety, regulation and compliance, Regnier said.
Originally written by: Pieter Haeck and Sam Clark
Source: Politico
Published on: 16 April 2026
Link to original article: Anthropic’s hacking tech exposes EU AI Office weaknesses