ARI is hiring a PAC assistant - help us get the word out!
We're looking for candidates with 1+ year of PAC/political fundraising experience and an interest in AI policy.
Apply here: ats.rippling.com/americans-fo...
New research shows that, in some cases, AI models resist shutdown commands 97% of the time.
Findings from Palisade Research show leading models actively interfered with and sabotaged shutdown mechanisms to preserve their own functionality.
Read the full paper on arXiv here ⬇️ arxiv.org/abs/2509.14260
Selling our most advanced AI chips to China would trade away America's technological edge.
Today, ARI shared a memo with the Trump Administration, warning against Blackwell exports to our primary geopolitical competitor.
Full memo: ari.us/b30memo
AI innovation requires a few key ingredients: compute, data, and energy. China leads in two, but the U.S. still dominates in compute.
That's why strong chip export controls matter. We can't give away a critical US advantage.
Brad Carson on the Trajectory with Daniel Faggella
Tomorrow, Bruce Wittmann from Microsoft joins ARI's panel on AI and biosecurity to discuss new research revealing loopholes in biosecurity screening and how to close them.
🧬 Register now → ari.us/biosecurity
This Wednesday, @deanwb.bsky.social joins a panel of experts at ARI's event on AI and biosecurity to discuss how policymakers can close the gap between AI's protein design capabilities and current safeguards.
🧬 Register here → ari.us/biosecurity
New @ifp.bsky.social research shows exporting B30A chips to China would erase a key U.S. advantage.
Our 31x compute lead could shrink below 4x, or even flip in China's favor.
Selling B30s means giving away the edge that keeps America ahead.
🔗 ifp.org/the-b30a-dec...
AI chip security is a cornerstone of national security.
The GAIN AI Act would require chipmakers to serve US buyers before exporting to countries of concern.
It's a measured, market-based safeguard that protects US innovation and keeps American firms from falling behind.
The revised GAIN AI Act is earning support, even from the industry, including Microsoft.
Microsoft's policy lead called the proposal "really positive," saying it keeps advanced compute in the U.S. while empowering allies to build on American innovation.
New bill text, same mission: The GAIN AI Act would ensure US companies have access to advanced AI chips before they're sold to countries of concern.
ARI's Mitch Kominsky discusses the bill's path forward with @bgov.com. Important progress toward getting this passed in the NDAA.
Great to see members of the ARI team join leaders from tech, policy, and innovation at this year's tech prom!
Huge thanks to @cdt.org for hosting!
Next week: ARI hosts a virtual panel on AI and biosecurity.
New research found that AI-created loopholes let harmful synthetic proteins slip past screening tools, a serious national security risk.
📅 Oct 29 | 3–4 PM ET
🔗 RSVP: ari.us/biosecurity
NEW: State lawmakers are urging Congress NOT to preempt AI laws.
@ncslorg.bsky.social's new bipartisan letter asks Congress not to undermine state AI safeguards, warning that doing so would impair protections for children, jobs, privacy and safety.
Full letter here ⬇️
www.ncsl.org/resources/de...
AI is advancing faster than biosecurity practices.
Next week, join ARI and leading experts, including an author of Microsoft's latest study, to discuss the widening gap between AI's bio capabilities and security practices.
📅 Oct 29 | 3–4 PM ET
🔗 RSVP: ari.us/biosecurity
OpenAI is facing accusations of leading an intimidation campaign, using subpoenas to pressure nonprofits advocating for better AI safeguards.
Seven orgs say they were targeted without cause. Read more in the latest issue of The Output.
theaioutput.substack.com/p/openai-sub...
AI law preemption is the sequel to Section 230 that nobody asked for.
In a new op-ed for @techpolicypress.bsky.social, Brad Carson writes that preemption doubles down on the same old mistakes, shielding Big Tech from accountability as new online harms emerge:
www.techpolicy.press/a-new-sectio...
ARI President Brad Carson spoke at @far.ai's Technical Innovations for AI Policy conference, presenting 14 unanswered questions on the development of AI.
Check out his full remarks: www.youtube.com/watch?v=j1DC...
For preemption to work, Congress must pass meaningful guardrails at the federal level to replace the state laws it preempts.
A no-rules approach to AI sets America back.
Full article: subscriber.politicopro.com/newsletter/2...
New issue of The Output is out!
We cover the moratorium, the latest on fair use cases, and NY's AI safety battle.
Read and subscribe: theaioutput.substack.com/p/the-morato...
The biggest takeaway from the defeat of the AI law moratorium: Preemption of state laws without a serious federal replacement is a political non-starter.
NYT's DealBook has the story:
messaging-custom-newsletters.nytimes.com/dynamic/rend...
Microsoft's chief scientist speaks out against the 10-year ban on state laws regulating AI, saying the moratorium "will slow the development of the frontier technology rather than accelerate it."
It's time to remove the AI law ban from the budget.
Story ⬇️
www.theguardian.com/technology/2...
Join us TOMORROW at 1 PM for a virtual press conference. ⬇️
Republican state lawmakers are putting the pressure on: it's time for Congress to take the AI state law ban out of the Big Beautiful Bill.
Register: zoom.us/webinar/regi...
ICYMI: ARI's research explores how the 10-year ban on state AI laws would affect children's online safety, consumer protections, and more.
The moratorium risks freezing tech safeguards across the country.
Full paper: ari.us/wp-content/u...
"Don't take legislation as anti-innovation. Across the nation, and across the aisle, state lawmakers are enthusiastic about AI's potential and want the United States to lead the world in this technology." From @ncslorg.bsky.social's op-ed on the 10-year state AI law ban.
www.governing.com/artificial-i...
"What happens when AI systems learn how to cheat?"
Our latest PolicyByte breaks down the risks of reward hacking and steps we can take to ensure reliability in AI systems.
ari.us/policy-bytes...
ARI President Brad Carson talks with Bloomberg TV about the need for hardened security at frontier AI labs in the face of foreign threats.
The Advanced AI Security Readiness Act would take a first step towards tackling this challenge by creating an "AI Security Playbook."
The newest edition of The Output is out! We dive into the moratorium debate, Meta's Scale investment, and CAISI's rebrand.
Subscribe and get a round-up of the biggest news in the AI policy space: substack.com/home/post/p-...
Securing U.S. leadership in AI means securing America's leading AI labs.
ARI is endorsing the Advanced AI Security Readiness Act, which tasks the NSA with identifying and addressing vulnerabilities in U.S. AI infrastructure.
Our statement: ari.us/new-bill-wou...
Talking with @cbnnews.bsky.social, @chrismacknz.bsky.social highlights how the 10-year moratorium on state AI laws would freeze existing laws and prevent policy responses to new risks that emerge.
Opposition to the 10-year moratorium on state laws regulating AI keeps growing.
To keep you up-to-date with the lawmakers, leaders, and advocacy groups speaking out, we're launching NoAILawBan.org. Visit for a quick guide to who's opposing the AI law ban.