
Seriously Risky Business Newsletter

May 14, 2026

Srsly Risky Biz: The AI Regulation Knife Fight

Written by

Tom Uren

Policy & Intelligence

Your weekly dose of Seriously Risky Business news is written by Tom Uren and edited by Patrick Gray. This week's edition is sponsored by Knocknoc.

You can hear a podcast discussion of this newsletter by searching for "Risky Business News" in your podcatcher or subscribing via this RSS feed.

Photo by Maxime Gilbert on Unsplash

The Trump administration is grappling with whether to give US intelligence agencies a bigger role in the assessment of new AI models, according to The Washington Post.

Ideas about AI regulation within the administration appear to be in a state of flux. Politico reported on Tuesday last week that the administration was considering a government vetting process before new models were released. By Thursday, the administration was distancing itself from tighter regulation, and by Friday a lobbyist told Politico that "there is no clarity" because "different factions within the White House have different views about what should happen".

Amongst that chaos, the National Cyber Director pitched a center within the Office of the Director of National Intelligence to evaluate new AI models. The intelligence community has deep expertise in cyber security and AI, and in their associated national security risks and benefits, so that does make a lot of sense.

Meanwhile, the Department of Commerce is already home to the Center for AI Standards and Innovation (CAISI). The Center was spun up as the US AI Safety Institute in 2024 before being renamed last year, and it has a head start building the expertise and infrastructure in testing and evaluating models. 

But on Friday CAISI took down a website that had announced new voluntary agreements with Google, Microsoft and xAI under which it would test models before their release, because of what the Washington Post describes as "sensitivity" from the White House. The Post's sources went on to describe the conflict between Commerce and national security aides as a "knife fight".

Of course, AI is about more than just cyber security and potentially has serious implications across the entire economy. There is no single government agency that contains all the expertise that is needed to review models. For example, AI companies are concerned about the biological and chemical capabilities of models and whether they could be used to assist in the creation of weapons of mass destruction. These are serious concerns but not exactly the expertise that NSA holds.

Despite that, the intelligence community should obviously be involved in assessing models. Where the specialist expertise exists in that community, the US government should draw on it. But the intelligence community shouldn't own model assessment entirely. AI's implications are simply too wide-ranging. 

LEO Satellite Constellations Are a Must Have

Russian firm Bureau 1440 has begun launching the country's answer to Starlink, a new Low Earth Orbit (LEO) satellite constellation called Rassvet, reports Wired. 

Rassvet, Russian for 'daybreak', will not match Starlink in many respects, but the Russian government is investing billions of dollars to develop it as a sovereign capability. Rassvet's total funding is USD$5.7 billion, with the Russian Ministry of Communications reportedly providing about $1.3 billion. By comparison, Amazon has spent more than USD$10 billion on its satellite constellation, Amazon Leo, which has not yet launched its commercial service.

The planned size of Rassvet's constellation is far smaller than either Leo or Starlink. Bureau 1440 is aiming for around 900 satellites by 2035, with 16 launched so far. By contrast, Amazon says it will launch three thousand satellites for Leo by 2029, while Starlink currently has nine thousand active satellites.

Using thousands of satellites for global coverage is great for delivering fast broadband internet worldwide, but for Russia, having a rudimentary sovereign, domestically focussed capability may be more important than having a great one.

Even though Starlink was notionally denied to Russian forces because of geoblocks and sanctions, its military was able to use it throughout the war in Ukraine until SpaceX began allowlisting Ukrainian terminals earlier this year. The countermeasure was prompted by Russia using Starlink terminals to control long-range drone strikes deep within Ukrainian territory. This allowlisting control had an immediate impact on Russian effectiveness and left its military searching for ground-based alternatives such as long-range Wi-Fi bridges.

Even a relatively small constellation of LEO satellites could provide some military users in occupied Ukraine with high-bandwidth, low-latency links, albeit intermittently throughout the day. Bureau 1440's network already achieves 48 Mbit/s downlinks and 12 Mbit/s uplinks at around 40 milliseconds latency. Specialist space publication KeepTrack says this is roughly equivalent to early Starlink capacity before its constellation was built out, suggesting that the underlying communications technology is sound.

Beyond potential military application in Ukraine, Rassvet is optimised for high-latitude coverage and will do a far better job than Starlink of servicing northern Russia. 

It seems that the biggest hurdle for Rassvet, however, is Russia's limited launch capacity, which is running at just 20 launches per year for all customers. 

China, by contrast, is launching two satellite constellations: Guowang, 'State Network', is a state-backed initiative and Qianfan or 'Thousand Sails' is a commercial-first system. Each constellation is planned to consist of almost 13,000 satellites and at the end of 2025 about 400 satellites were in orbit between the two systems. 

That's a lot of satellites still to go, but China isn't launch constrained in the way that Russia is. This year, it is on track for 140-odd launches and by March it had successfully launched 34 orbital missions. This compares to the US's 29, of which SpaceX alone was responsible for 22.

To date, China has used single-use rockets to build out its launch capability, but its companies are working on reusable rockets now. Its single-use launches aren't as reliable or as safe from a space debris perspective as American and European ones, but they're getting the job done until reusable rockets come online.

The European Union, meanwhile, is developing IRIS, a smaller constellation that occupies a combination of low and medium Earth orbits. The idea is that it will achieve good coverage and speeds at lower cost, with the trade-off being less overall capacity and higher latency at times.

IRIS is under development, but in the short term the Anglo-French Eutelsat OneWeb LEO constellation is a sovereign option for European governments that are looking for space internet. The OneWeb network currently has 630 satellites in orbit, making it the second-most mature LEO satellite constellation after Starlink. OneWeb will eventually be rolled into IRIS as the foundational LEO portion of the network.   

Compared to Starlink, OneWeb is more expensive and slower, but perhaps that's just the price of sovereign capability. In 2025 the German government revealed it had been funding Ukrainian access to OneWeb. At that time, even Eutelsat's chief executive Eva Berneke told Reuters that access to OneWeb was more about having a plan B than replacing Starlink. The shakiness of the transatlantic alliance has shown us that having a plan B is important.

The funny thing about all this is that the LEO space internet race wasn't a geopolitical competition kicked off by a Presidential pronouncement. It was kicked off by Elon Musk wanting to launch lots and lots of rockets to occupy Mars. Now everyone needs their own constellation. It's a funny old world.

AI Is "Industrialising" Cyber Threats

Google's latest AI threat tracker report details how threat actors are industrialising and scaling anonymous access to premium tier models to enable large-scale misuse.

In other words, even though AI companies try to prevent abuse, threat actors have developed an entire ecosystem to bypass safeguards and take advantage of free trials. 

In our view, this is the most significant section of Google's report from a national security perspective. 

Both state-sponsored and cyber crime groups are using an "emerging ecosystem of custom middleware, proxy relays, and automated registration pipelines designed to bypass safety guardrails and billing constraints". The idea is that by using these techniques they can maintain anonymous access to premium model tiers to "effectively industrialise their adversarial workflows".

That doesn't sound good at all! To add insult to injury, the threat actors are actually subsidised as their abuse techniques involve cycling through new accounts. They're continually taking advantage of new free trial periods! 
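Defenders can look for this account-cycling pattern in their own telemetry. Below is a minimal sketch, assuming a hypothetical signup log where each record carries a device or network fingerprint (the field names and threshold are illustrative, not from Google's report): flag any fingerprint that registers an implausible number of trial accounts.

```python
from collections import Counter

def flag_trial_cycling(signups, threshold=3):
    """Flag any fingerprint that registered more than `threshold`
    trial accounts -- a crude signal of free-trial account cycling."""
    counts = Counter(s["fingerprint"] for s in signups)
    return {fp for fp, n in counts.items() if n > threshold}

# Hypothetical signup log: one fingerprint reused across five accounts
signups = [{"fingerprint": "fp-a", "account": f"acct-{i}"} for i in range(5)]
signups.append({"fingerprint": "fp-b", "account": "acct-x"})

print(flag_trial_cycling(signups))  # → {'fp-a'}
```

Real pipelines would combine many weak signals (payment data, IP reputation, browser fingerprints) rather than a single counter, but the shape of the detection problem is the same.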

Google isn't explicit about how effective these techniques are, but it concerns us that advanced adversaries are successfully using the latest models for malicious purposes. This could include directly malicious uses such as enhancing cyber attacks, or could involve distilling advanced models. Either way, not great!

This is an area where the government should bring the AI companies together to try to understand the extent and success of these safety guardrail bypasses, develop best practices and countermeasures and perhaps even run disruption operations. 

Without government involvement, we worry that companies will portray AI guardrails as strong and robust while pushing models out to collect more revenue. OpenAI's Trusted Access approach for allowing access to GPT5.5, its latest model, sounds pretty good. But is it effective against the guardrail bypass ecosystem that Google describes?

Google's report also describes all the other ways that threat actors are experimenting with AI and doing dumb or clever stuff. Experimenting is to be expected. But bypassing guardrails at scale? That needs a closer look.  

Watch James Wilson and Tom Uren discuss this edition of the newsletter:

Three Reasons to Be Cheerful This Week:

  1. Linux kernel killswitch proposed: This proposed security feature would allow admins to disable vulnerable kernel functions until patches are available. It comes in the wake of two local privilege escalation vulnerabilities in recent weeks, Copy Fail and Dirty Frag. We don't know if this feature will make it into Linux, but we like that it has the potential to mitigate many vulnerabilities.
  2. Intrusion Logging for Android: Google announced a range of improved security and privacy protections for Android, including what it is calling "Intrusion Logging". This turns on security logs that are designed to provide forensic data to analyse suspected device compromise. Amnesty International's security lab has published a technical briefing on the feature. 
  3. Signal responds to phishing: Signal has added additional security warnings and in-app confirmations to try to combat phishing on the app. These are mostly educational warnings such as pointing out that a new Signal contact's name is not verified and whether you share any groups in common.   
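The killswitch concept in item 1 is easy to illustrate outside the kernel. This toy Python sketch (plain userspace code, not kernel code, and not the proposed Linux design) gates functions behind an admin-controlled deny list so a vulnerable function can be switched off until a patch lands:

```python
# Toy killswitch registry: functions registered with the decorator
# can be disabled at runtime by adding their name to DISABLED.
DISABLED = set()

def killswitch(name):
    def wrap(fn):
        def guarded(*args, **kwargs):
            if name in DISABLED:
                raise PermissionError(f"{name} is disabled by killswitch")
            return fn(*args, **kwargs)
        return guarded
    return wrap

@killswitch("copy_range")
def copy_range(src, dst):
    dst.extend(src)

buf = []
copy_range([1, 2], buf)     # runs normally
DISABLED.add("copy_range")  # admin disables the vulnerable function
try:
    copy_range([3], buf)
except PermissionError as err:
    print(err)              # → copy_range is disabled by killswitch
```

The appeal of the real kernel feature is the same trade-off this sketch makes: you lose one function's capability temporarily, rather than leaving a known privilege-escalation path open while waiting for a patch.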

Sponsor Section

In this Risky Business sponsor interview, Patrick Gray chats with Knocknoc CEO Adam Pointon about their Greynoise integration.

Shorts

How Mozilla Found 270 Bugs in Firefox

Mozilla has published a blog post describing how it found the 270-odd bugs it reportedly fixed in Firefox with the help of Anthropic's Mythos Preview and other AI models. 

The key message Mozilla emphasises is that its harnesses were extremely important: it built them around models to steer, scale and stack them to "generate large amounts of signal and filter out the noise".

Mozilla has been using these techniques for a while and says that it was able to identify an "impressive amount of previously-unknown vulnerabilities which required complex reasoning over multiprocess browser engine code" even using older models like Claude Opus 4.6. 

Using these techniques, it says AI-generated security bug reports are now "very good". 
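Mozilla hasn't published its harness code, but the steer-scale-stack idea can be sketched in a few lines: run a scoring pass over raw model findings and keep only high-confidence reports. The `triage` function and `toy_score` below are hypothetical stand-ins, not Mozilla's implementation or a real model API.

```python
# Hypothetical harness sketch: 'stack' a scoring pass over raw model
# findings and keep only high-confidence reports, filtering the noise.
def triage(candidates, score, min_score=0.8):
    return [c for c in candidates if score(c) >= min_score]

# toy_score stands in for a model-based scorer (e.g. a second model
# asked to rate each finding's plausibility).
def toy_score(report):
    return 0.9 if "use-after-free" in report else 0.2

reports = [
    "use-after-free in parser",
    "stylistic nit in comment",
    "use-after-free in IPC layer",
]
print(triage(reports, toy_score))
# → ['use-after-free in parser', 'use-after-free in IPC layer']
```

The design point is that the filter, not the generator, sets the quality of what reaches human triage, which is why harness engineering can matter more than the underlying model.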

This aligns with what former Google Distinguished Engineer Niels Provos told Risky Business Enterprise editor James Wilson in this interview. The models you're using aren't necessarily the most important factor, it's how you use them. 

Risky Biz Talks

You can find the audio edition of this newsletter and other fine podcasts and interviews in the Risky Biz News feed (RSS, iTunes or Spotify).  

In our last "Between Two Nerds" discussion, Tom Uren and The Grugq discuss why criminal organisations have even more reason than regular businesses to adopt AI.

Or watch it on YouTube!

From Risky Bulletin:

RubyGems disables sign-ups after attack on staff: The RubyGems package repository has disabled new user sign-ups after a malicious attack on Monday targeted its engineers and staff.

Hundreds of malicious packages were published on Monday and then again on Tuesday.

The packages contained malicious code aimed at RubyGems developers. The code tried to execute cross-site scripting attacks and steal data from their systems.

[more on Risky Bulletin]

FCC relaxes foreign router ban to allow for security updates: The US Federal Communications Commission has updated its ban on foreign-made routers to allow vendors to ship security updates for a longer period of time.

The agency banned the sale of foreign routers in March, but allowed companies to ship security updates for one more year until March 2027.

The FCC says that based on comments from the government and private sector it has now updated this cutoff date to January 1, 2029.

The exemption applies only to security updates that "mitigate harm to consumers" and foreign companies are not allowed to ship new features via this mechanism.

[more on Risky Bulletin]

Google patches Android remote takeover bug: This month's Android security updates carry an important patch for a critical vulnerability that can grant attackers remote access to an Android smartphone or smart device.

Tracked as CVE-2026-0073, the bug allows attackers to bypass authentication in the Android remote debugging service ADB.

Successful exploitation opens a remote shell on a device where the ADB service was enabled. ADB is disabled by default in the standard Android OS release, but may be enabled and left exposed by accident by some OEMs (device makers) during factory testing, which has happened repeatedly in recent years.

The issue impacts all devices running Android 11 or later, which is every Android version released since September 2020.
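ADB over TCP conventionally listens on port 5555, so a quick (if crude) way to check whether a device on your own network exposes the service is to probe that port. A minimal sketch; an open port is a hint, not proof, that ADB is reachable, and the example IP in the comment is hypothetical.

```python
import socket

def adb_port_open(host, port=5555, timeout=1.0):
    """Return True if `host` accepts TCP connections on `port`
    (5555 is the conventional ADB-over-TCP port). An open port is
    a hint, not proof, that the ADB service is exposed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. adb_port_open("192.168.1.23")  # probe a device on your own LAN
```

Only probe devices you own or are authorised to test; a positive result is a prompt to disable ADB or patch, not a confirmation of compromise.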

[more on Risky Bulletin]


© Risky Business Media 2007–2026. All rights reserved.
ABN 73 618 465 517