Seriously Risky Business Newsletter
May 07, 2026
Srsly Risky Biz: After Mythos, US Government Weighs AI Model Regulation
Written by Tom Uren
Policy & Intelligence
Your weekly dose of Seriously Risky Business news is written by Tom Uren and edited by Amberleigh Jack. This week's edition is sponsored by PortSwigger.
You can hear a podcast discussion of this newsletter by searching for "Risky Business News" in your podcatcher or subscribing via this RSS feed.


The Trump administration is considering applying stricter oversight to American AI models due to their cyber security impact. However, before pulling the trigger on strict and inflexible regulation, we believe the government should spend a little time watching and learning.
This apparent shift away from the administration's light-touch approach to AI regulation has reportedly been driven by concern about the hacking capabilities of frontier models.
According to the New York Times, the administration wants to establish a group made up of tech executives and government officials to propose oversight procedures for the rollout of all new AI models. The group is likely to consider a range of options, including a formal government review process.
Globally, cyber security authorities are already bracing for the impact of increasingly capable models. Last week, the UK's NCSC CTO Ollie Whitehouse warned of an impending "vulnerability patch wave" as AI finds and fixes vulnerabilities that have accrued over decades.
CISA, meanwhile, is considering imposing shorter patch deadlines for US government systems. The default patch deadline for bugs that are being actively exploited could be reduced from three weeks to as little as three days. For what it's worth, we think patching faster is a fine idea, but we also think organisations will get more bang for their buck by focusing on security fundamentals that help to mitigate all bugs. You can't patch your way out of "the bugpocalypse".
While Whitehouse describes a "wave" of patches, the reality is that it will be waves, plural. Every time a new model is released, it will quickly be used, with even basic prompts, to discover more bugs in commonly used software. If that capability is available to all and sundry, it will cause problems. Anthropic and OpenAI are seemingly aware of this and have taken different approaches to mitigating these risks.
Anthropic released its latest model, Mythos Preview, to a limited number of trusted organisations under its Project Glasswing initiative. The idea was to give these organisations a head start in finding and patching vulnerabilities before the model was rolled out more broadly. Mozilla reported that it had fixed 271 Firefox vulnerabilities after being granted access.
Unlike Anthropic, which restricted access to Mythos to a select few, OpenAI released its latest model GPT5.5 to all customers who wanted it and instead relied on model safeguards to prevent it from dangerously spilling 0day exploits. Users in cybersecurity roles who ran into these safeguards could verify their identity with OpenAI to get them dialled back.
This Trusted Access for Cyber program gives users "reduced friction around safeguards" once individuals and enterprises satisfy its Know-Your-Customer and trust requirements. Customers who prove that they are legitimate cyber defenders can also get access to more niche versions of OpenAI's models that have more advanced cyber capabilities coupled with fewer restrictions.
These are very different release approaches for models with similar capabilities. Anthropic's approach is very cautious, while OpenAI's is more open.
The White House response so far has been to crack down on the cautious one. The Wall Street Journal reported late last week that the White House was opposing Anthropic's plan to expand access to Mythos to another 70 companies. There have been no such pronouncements about OpenAI's GPT5.5, which is funny when you consider that it is widely available and the UK's AI Security Institute (AISI) found that it might actually be better than Mythos at cyber security tasks.
We expect that the administration will eventually zero in on a position where AI companies have consistent release policies, rather than allowing each to make up their own rules.
It’s not just frontier models that present risks here, though. Last week Niels Provos, a former Google Distinguished Engineer, wrote a blog post titled "Finding Zero-Days With Any Model". He used an orchestration harness and older commercial and open weight models to independently rediscover bugs found by Mythos.
Provos showed that the gap between a novice driving Mythos or GPT5.5 and an expert driving older models is not as big as it seems. Thus, holding back Mythos or GPT5.5 to a White House-approved circle of trust for 90 days probably won't achieve as much as we'd like.
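If "orchestration harness" sounds exotic, it isn't: the pattern is roughly a loop that chunks source code, prompts whatever model is available to flag suspicious spots, and collects candidate findings for a human to triage. The minimal Python sketch below is our own illustration of that pattern, not Provos' actual tooling; the prompt wording and the ask_model placeholder are assumptions.

```python
# Minimal sketch of an orchestration harness for model-assisted bug hunting.
# Hypothetical illustration only, not Niels Provos' tooling.
import json
import pathlib

PROMPT = (
    "You are auditing C code for memory-safety bugs. "
    "List suspicious lines as a JSON array of objects with 'line' and 'reason'. "
    "Reply with [] if nothing stands out.\n\n{code}"
)

def ask_model(prompt: str) -> str:
    """Placeholder: wire this up to whatever model is available.
    The pattern works with older commercial or open-weight models."""
    return "[]"  # dry-run default so the sketch executes end to end

def iter_chunks(path: pathlib.Path, lines_per_chunk: int = 200):
    """Split a source file into chunks small enough for a model's context window."""
    lines = path.read_text(errors="replace").splitlines()
    for start in range(0, len(lines), lines_per_chunk):
        yield start + 1, "\n".join(lines[start:start + lines_per_chunk])

def audit(repo: pathlib.Path) -> list[dict]:
    """Feed every chunk of every C file to the model and collect candidate findings."""
    findings = []
    for src in repo.rglob("*.c"):
        for first_line, chunk in iter_chunks(src):
            raw = ask_model(PROMPT.format(code=chunk))
            try:
                hits = json.loads(raw)
            except json.JSONDecodeError:
                continue  # models produce noise; skip unparseable replies
            for hit in hits:
                findings.append({
                    "file": str(src),
                    "line": first_line + int(hit.get("line", 0)),
                    "reason": hit.get("reason", ""),
                })
    return findings  # candidates only: every entry still needs human triage

if __name__ == "__main__":
    print(json.dumps(audit(pathlib.Path(".")), indent=2))
```

The point of the sketch is that nothing in it depends on a frontier model; the harness does the scaling, and the model just has to be good enough to spot candidates worth triaging.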
A better approach may be for the government to wait and see so that it can understand what is happening and can make informed decisions further down the track. How many 0day vulnerabilities does each new model release shake out? What are their CVSS scores? Is that number trending up or down over time? Are vendors actually patching the bugs? What is the state of vulnerability discovery using older and open weight models? Are the frontier labs acting responsibly?
The broader point here is that it is pretty clear that the rise of powerful AI cyber capabilities is a generational shift that policymakers don't yet understand how to respond to. Attempting to stagger access to the technology's bleeding edge is intuitively attractive, but there are good reasons to believe it will have little impact.
Australia Launches Hamstrung Cyber Review Board
This week Australia's Minister for Cyber Security, Tony Burke, announced the establishment of a Cyber Incident Review Board. This is timely, especially given the rise of AI, but the board's impact will be limited by its approach to liability.
The Minister's press release says the board will conduct no-fault, post-incident reviews of significant cyber security incidents in Australia. The intent is to "deliver actionable recommendations to government and industry to help prevent, detect, respond to, and minimise the impact of similar incidents in the future".
The implementing legislation, the Cyber Security Act 2024, says that one criterion for reviewing an incident is whether it "involved novel or complex methods or technologies, an understanding of which will significantly improve Australia’s preparedness, resilience, or response to cyber security incidents of a similar nature".
Timely, given that the rise of AI will undoubtedly result in novel types of attacks.
Unfortunately, the legislation states that the board must not apportion blame or provide the means to determine liability. This blunts the board's ability to point out that a serious security incident may have been downstream of a victim organisation making poor decisions and failing to prioritise security.
See, for example, the US Cyber Safety Review Board excoriating Microsoft for a "cascade of security failures". That report didn't pull its punches, and shortly after it was released, Microsoft CEO Satya Nadella issued an all-hands memo declaring that security was the company's top priority. It also served as a warning to other companies that security was something they should care about.
The Australian version of that CSRB report? It can't apportion blame, so we’re guessing it would say something like, “Someone, somewhere made mistakes”.
By contrast, the legislation covering Australia's transport safety investigations allows reports to apportion blame while still maintaining liability protections: "A report… is not admissible in evidence in any civil or criminal proceedings."
This small change makes a huge difference to the board's ability to drive change and actually achieve its stated goals. We're pretty sure there will be a spate of upcoming incidents where the root cause is less technical and more greedy-executives-doing-dumb-stuff-with-AI without giving security a passing thought. It would be a shame if the board is not able to state that plainly.
Watch James Wilson and Tom Uren discuss this edition of the newsletter:
Three Reasons to Be Cheerful This Week:
- US and China collaborate on Dubai scam center takedown: The coordinated takedown led to more than 270 arrests and dismantled nine scam centers that were being used in cryptocurrency investment scams. The operation involved what the Department of Justice called "unprecedented cooperation" between the FBI, the Chinese Ministry of Public Security and the Dubai Police. The Record has further coverage.
- Elections Canada watermarks electoral list releases: Ars Technica covers how Elections Canada was able to trace the provenance of a leaked electoral list because it inserts bogus data as a form of fingerprinting into the lists it distributes to legitimate recipients. We'd call this a type of watermark, but Ars uses the term 'canary trap' and we like the clever use of a simple technique (a toy sketch of the idea follows this list).
- FTC bans data broker sensitive data sales: The data broker Kochava and the US Federal Trade Commission have agreed to settle a complaint in which the FTC alleged that Kochava was selling the precise geolocation of consumers without consent. This included sensitive locations such as places of worship and health care clinics. There was no fine, but Kochava has agreed to stop sharing or selling sensitive location data. The Record has further coverage.
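As promised above, here's a toy Python sketch of the canary-trap idea: each recipient gets a copy of the dataset salted with a unique bogus record, and a leaked copy can be matched back to whichever recipient's canary it contains. This is purely our own illustration, not Elections Canada's actual method; the record fields and naming are made up.

```python
# Toy sketch of a 'canary trap': salt each recipient's copy of a dataset with
# a unique bogus record so a leaked copy can be traced back to its source.
# Illustrative only, not Elections Canada's actual process.
import hashlib

def canary_record(dataset_id: str, recipient: str) -> dict:
    """Derive a plausible-looking but bogus entry that is unique per recipient."""
    digest = hashlib.sha256(f"{dataset_id}:{recipient}".encode()).hexdigest()
    return {"name": f"Entry {digest[:8].upper()}",
            "address": f"{int(digest[8:12], 16) % 9000 + 1000} Example St"}

def distribute(records: list[dict], dataset_id: str, recipients: list[str]) -> dict[str, list[dict]]:
    """Give each recipient the real records plus their own canary."""
    return {r: records + [canary_record(dataset_id, r)] for r in recipients}

def trace_leak(leaked: list[dict], dataset_id: str, recipients: list[str]) -> list[str]:
    """Check which recipients' canaries appear in a leaked copy."""
    leaked_names = {rec["name"] for rec in leaked}
    return [r for r in recipients
            if canary_record(dataset_id, r)["name"] in leaked_names]

if __name__ == "__main__":
    real = [{"name": "Jane Voter", "address": "1 Main St"}]
    copies = distribute(real, "electoral-list-2026", ["party_a", "party_b"])
    # Pretend party_b's copy ends up on a leak site:
    print(trace_leak(copies["party_b"], "electoral-list-2026", ["party_a", "party_b"]))
    # -> ['party_b']
```

The technique works because the bogus records are indistinguishable from real ones to the recipient but deterministic to the distributor, so no extra bookkeeping is needed beyond the recipient list itself.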
Sponsor Section
In this Risky Business sponsor interview, James Wilson talks with James Kettle and Daf Stuttard from PortSwigger about the incredible research James will unveil at Black Hat US this July, and how that research will be productised into Burp Suite.

Shorts
Google's Vulnerability Reward Programs Change Because AI
Google has announced changes to its vulnerability reward programs that reflect the impact AI is having on bug discovery and remediation. The program now rewards reports that are impactful and also hard for automated AI tooling to find. For example, a full chain Pixel Titan M2 compromise with persistence pays up to USD 1.5 million. But other rewards are going to be reduced. Bug reports also now need less in the way of prose and more in the way of "concrete proof of exploitability" and, ideally, proposed patches.
Risky Biz Talks
You can find the audio edition of this newsletter and other fine podcasts and interviews in the Risky Biz News feed (RSS, iTunes or Spotify).
In our last "Between Two Nerds" discussion Tom Uren and The Grugq discuss the breakdown of cyber norms. What would have been an unthinkable cyber operation just a few years ago is now a regular occurrence.

Or watch it on YouTube!
From Risky Bulletin:
Extremely targeted supply chain attack hits DAEMON Tools: A supply chain attack is ongoing on the website of DAEMON Tools, a popular app for burning CDs and DVDs and for creating bootable USB drives.
DAEMON Tools installers have been shipping with a backdoor since at least April 8. The installers were signed with the vendor's legitimate certificate, suggesting deep access to AVB Disc Soft's internal network and processes.
The backdoor triggers every time the user boots their PC, collects data about the host, and uploads it to a remote server. Collected data includes the machine's MAC address, hostname, system locale, DNS domain name, and a list of active processes and installed software.
[more on Risky Bulletin]
DigiCert hacked with a malicious screensaver file: A threat actor gained access to DigiCert's backend and stole 27 code signing certificates they later used to sign malware.
The incident took place last month and was traced back to a social engineering attack that successfully compromised two employees of DigiCert's tech support team.
According to DigiCert's post-mortem, the attacker posed as a customer and tricked the tech support staff into running an SCR file, the executable format Windows uses for screensavers.
[more on Risky Bulletin]
The mysterious hack of Moldova's healthcare database: A mysterious hacking group has stolen the personal and financial information of Moldovan citizens from the country's national healthcare database.
Moldova's national health insurance agency, CNAM, confirmed that data was stolen but denied initial news reports that almost a third of the database had been destroyed in the attack.
Ion Vintilă, a deputy director at Moldova's Cybersecurity Agency, had told reporters in a taped interview that almost 30% of the agency's data was impacted in the incident, but didn't specify in what manner.
[more on Risky Bulletin]