How worried should we be about AI biorisk?
The barriers to bioattacks are hard to identify – and it's even harder to know whether AI is reducing them
This is an excellent piece discussing AI's impact on biorisk, offering a great distillation of key topics. Definitely worth a read!
(And I'm not surprised it's great, given that @csetgeorgetown.bsky.social's own @stephbatalis.bsky.social is quoted!)
www.transformernews.ai/p/ai-biorisk...
27.02.2026 22:33
Furthermore, state-level AI laws can themselves support and enhance innovation (as we have argued previously)!
thehill.com/opinion/tech...
30.01.2026 15:53
The new order could similarly face major challenges: risking legal backlash, bipartisan resistance, and public distrust. Each of these factors could make AI innovation harder.
30.01.2026 15:53
This order was the culmination of multiple attempts to impose a moratorium on state-level AI regulation. Prior efforts have faced surprising hurdles.
30.01.2026 15:53
Late last year, the Trump administration put out an Executive Order aiming to preempt states' ability to regulate AI systems and set the stage to challenge the constitutionality of state AI laws.
30.01.2026 15:53
The Complicated Politics of Trump's New AI Executive Order
The administration's attempt to suppress state AI regulation risks legal backlash, bipartisan resistance, and public distrust, undermining the innovation it seeks to advance. The Trump administration...
Excited to share a new op-ed by @minanrn.bsky.social, @jessicaji.bsky.social, and myself for the National Interest!
The administration's new AI Executive Order, aiming to suppress state-level AI regulation, risks undermining the innovation it seeks to advance.
nationalinterest.org/blog/techlan...
30.01.2026 15:53
With the right mix of evidence-based tools working together, we can create a flexible, layered, and effective safety net for biosecurity governance.
10.12.2025 19:48
Waiting for the perfect policy before taking action would take too long.
Instead, we should implement good safeguards as we go, taking care not to let the pursuit of perfect interventions prevent the adoption of well-designed, practical ones.
10.12.2025 19:48
In it, we argue that the right way forward for biosecurity will involve using multiple tools from our toolbox of policy levers in tandem with one another.
Each biosecurity intervention targets a specific risk, and they're often most effective when narrowly scoped.
10.12.2025 19:48
After analyzing these proposals, we argue:
1. Policymakers can use this approach to more precisely understand where proposal creators disagree and where they agree.
2. They can take action in an uncertain and rapidly changing environment by addressing common assumptions across governance proposals.
12.11.2025 20:34
Policymakers can use these assumptions, some unique and some shared, to better understand what's possible and more effectively build AI governance infrastructure.
To show this in action, our report analyzes five AI governance proposals, from different kinds of organizations, as case studies.
12.11.2025 20:34
We suggest breaking down AI governance proposals into their component parts. What do they aim to govern, and why? Who should do the work, and how?
Answering these questions will surface the foundational assumptions that make the proposals tick.
12.11.2025 20:34
With AI tech continuing to develop, many relevant organizations have written proposals about how to govern AI.
With so many out there, how should policymakers and other interested parties understand and evaluate them?
This report proposes an analytical method to achieve that.
12.11.2025 20:34
How to stop bioterrorists from buying dangerous DNA
The companies that sell synthesized DNA to scientists need to screen their customers, lest dangerous sequences for pathogens or toxins fall into the wrong hands.
Focusing on bio, one provision is a federal funding requirement for DNA synthesis screening – a useful tool in the toolbox for limiting biological risk.
Check out the piece @stephbatalis.bsky.social and I wrote breaking down the kinds of decisions screeners have to make: thebulletin.org/2025/04/how-...
25.07.2025 14:26
More on the recent AI Action Plan! @csetgeorgetown.bsky.social work is very relevant.
25.07.2025 14:26
Ultimately, though, a chilling effect on state-driven AI legislation could severely harm innovation by eroding foundational AI governance infrastructure.
The Action Plan's implementation and approach remain to be seen, but it should be careful not to nip useful state regulation in the bud.
24.07.2025 18:55
The plan does clarify that restrictions shouldn't interfere with prudent state laws that don't harm innovation.
And it's true that a complex thicket of onerous state laws governing AI could make it harder for AI companies to comply, harming innovation.
24.07.2025 18:55
States are better-positioned to pass these laws than the federal government in the current environment.
They can also serve as a sandbox for experimentation and debate, allowing for innovation in governance approaches. The best governance approaches can inspire other states to follow suit.
24.07.2025 18:55
State laws provide a critical avenue for building governance infrastructure: things like workforce capacity, information-sharing regimes, standardized protocols, incident reporting, etc.
These help provide clarity for companies and are crucial for innovation.
24.07.2025 18:55
A recent @thehill.com piece by @minanrn.bsky.social, @jessicaji.bsky.social, and myself introduces the topic of governance infrastructure.
It discusses the recently proposed ban on state AI regulation – which would have gone much further and, thankfully, did not pass.
thehill.com/opinion/tech...
24.07.2025 18:55
Yesterday's new AI Action Plan has a lot worth discussing!
One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."
This could be cause for concern.
24.07.2025 18:55
Really timely breakdown of today's big AI Action Plan release, by @csetgeorgetown.bsky.social's own @alexfriedland.bsky.social! Give it a read, I think it's really useful.
23.07.2025 21:11
Factors like robust third-party auditing, strong information-sharing incentives, and shared resources and workforce development enhance, rather than reduce, innovation.
As such, we argue that the proposed moratorium would be counterproductive, undermining the very goals it aims to achieve.
18.06.2025 18:52
These debates are worth having, but miss a crucial factor: AI governance infrastructure, which states are best-positioned to build.
This infrastructure helps achieve the moratorium's stated goals. It helps developers innovate, strengthens consumer trust, and preserves U.S. national security.
18.06.2025 18:52
Proponents of this plan argue that reducing burdensome regulations will speed up innovation, and that the federal government should lead in regulating AI anyway.
Opponents cite congressional gridlock, partisanship, and the lack of meaningful tech regulation as proof that state laws are needed.
18.06.2025 18:52
The recent reconciliation bill, which passed the House and will face a Senate vote soon, would place a 10-year moratorium on state-level AI regulation.
Whether this is a good idea has been hotly debated.
18.06.2025 18:52
Banning state-level AI regulation is a bad idea!
One crucial reason is that states play a critical role in building AI governance infrastructure.
Check out this new op-ed by @jessicaji.bsky.social, myself, and @minanrn.bsky.social on this topic!
thehill.com/opinion/tech...
18.06.2025 18:52