@ghadfield
Economist and legal scholar turned AI researcher focused on AI alignment and governance. Prof of government and policy and computer science at Johns Hopkins where I run the Normativity Lab. Recruiting CS postdocs and PhD students. gillianhadfield.org
Governments can’t translate “fair” or “safe” into technical specs fast enough. But leaving details to industry means the public loses its say. Regulatory markets close both gaps: governments set outcomes, private regulators compete to achieve them.
AI systems are quickly becoming embedded throughout the economy. But we have almost none of the regulatory tools, regulatory markets among them, needed to manage them. Here's what I think we should do about it: www.americanbar.org/groups/scien...
“The most practical governance framework currently in circulation.” That’s Forbes on the Independent Verification Organization model Fathom and I have been developing. Legislation takes years; IVOs move at the pace of innovation.
"Why not work on what kind of new governance is needed to ensure secure, reliable, predictable use of all frontier models, from all companies?"
In London today and tomorrow for the Alignment Workshop organized by FAR.AI. Keynoting alongside Rohin Shah and Allan Dafoe. I look forward to seeing everyone in attendance! www.far.ai/events/event...
The 2026 AI Safety Report’s biggest finding isn’t the risks it catalogs. It’s the evidence gap: we’re trying to build AI governance with almost no science underneath. Massive investment in the research that regulatory systems depend on is overdue. internationalaisafetyreport.org
Screenshot of a LinkedIn post by Jack Shanahan (Retired USAF; Project Maven/DoD JAIC; NCSI MIS; SCSP Defense Partnership). In the post, Shanahan weighs in on the Anthropic-Pentagon dispute, noting that despite his Project Maven background, he's sympathetic to Anthropic's position. He argues no current LLM should be used in fully autonomous lethal weapons systems, calling that a reasonable red line, and opposes mass surveillance of US citizens as a second red line. He criticizes the public nature of the dispute, calls the supply-chain-risk designation "laughable," questions invoking the DPA against the company's will, and advocates for shared government-industry-academia governance of frontier AI models.
NIST just launched an AI Agent Standards Initiative for identity, security, and interoperability. AI agents are becoming economic actors with zero legal infrastructure in place. We require businesses to register to operate. Why expect less of AI agents? buff.ly/kTU2cfX
In Paris this week for IASEAI (Feb 24-26). Tuesday: panel on the International AI Safety Report. Thursday: keynote on regulatory markets, a panel on AI assurance, and a talk in Seth Lazar’s workshop on normative competence. If you’re at IASEAI, come say hello!
Congratulations to Yoshua Bengio and the 39 other experts appointed to the UN’s first Independent International Scientific Panel on AI. The General Assembly vote: 117-2.
Better technology doesn’t fix broken institutions. The paper discusses regulatory markets as one path forward: instead of regulating providers directly, create a market for regulation itself. Worth a careful read. buff.ly/kbfvYqN
New in Lawfare from Justin Curl, Sayash Kapoor, & Arvind Narayanan: AI won’t automatically make legal services cheaper. I’ve been working on this for a long time: legal markets are broken because of adversarial dynamics, credence-goods problems, & regulations that protect incumbents, not consumers.
Billions going into building AI, barely any into making sure it works for us. Talked with @kevintfrazier.bsky.social & Andrew Freedman about our proposal making its way through state legislatures to build a competitive market for AI oversight. New @scalinglaws.bsky.social podcast:
6/ Led by Shuhui Zhu with Yue Lin, Shriya Kaistha, Wenhao Li, Baoxiang Wang, Hongyuan Zha, and Pascal Poupart across Waterloo, Vector Institute, CUHK-Shenzhen, and Tongji. arxiv.org/abs/2602.07777
5/ We don't need AI agents that default to "nice." We need agents that understand when cooperation makes sense and when it doesn't. That takes institutional structure, not just training. Gossip turns out to be surprisingly powerful institutional structure.
4/ Some chat models did something different and arguably more troubling. They cooperated even when defection was the rational play. That looks like alignment on the surface, but it's cooperation without the reasoning to know when it should stop.
3/ The surprise: reasoning models defect every time without gossip, exactly as theory predicts. Give them reputational information and they flip to strategic cooperation. They figure out that cooperation pays when others can see what you're doing.
2/ Our new ALIGN framework gives LLM agents a protocol for sharing reputational information, and that alone sustains cooperation in decentralized systems. Agents praise cooperators, criticize defectors, and adjust their behavior based on what they hear. (Toy sketch below the thread.)
1/ What makes self-interested AI agents cooperate? Not fine-tuning. Not central oversight. Gossip.
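To make the mechanism in 2/ and 3/ concrete, here is a minimal toy sketch in Python. It is not the paper's ALIGN implementation (the actual framework uses LLM agents exchanging natural-language messages); it is a numeric stand-in under my own assumptions: agents play a repeated prisoner's dilemma, everyone broadcasts praise or criticism after each interaction, and an agent cooperates only with partners whose gossip-derived reputation is non-negative. All names here (Agent, hear_gossip, play_round) are illustrative, not from the paper.

```python
import random
from collections import defaultdict

# One-shot prisoner's dilemma payoffs (row, col): T=5 > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    """Self-interested agent that conditions cooperation on reputation."""
    def __init__(self, name, use_gossip):
        self.name = name
        self.use_gossip = use_gossip
        self.reputation = defaultdict(int)  # partner name -> net gossip score

    def act(self, partner):
        if not self.use_gossip:
            return "D"  # no reputational information: defection dominates
        # Strategic cooperation: cooperate only with partners the
        # community has, on net, praised rather than criticized.
        return "C" if self.reputation[partner.name] >= 0 else "D"

    def hear_gossip(self, about, verdict):
        self.reputation[about] += 1 if verdict == "praise" else -1

def play_round(agents):
    """Pair agents at random, play the PD, then broadcast gossip."""
    random.shuffle(agents)
    welfare = 0
    for a, b in zip(agents[::2], agents[1::2]):
        move_a, move_b = a.act(b), b.act(a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        welfare += pay_a + pay_b
        # Gossip phase: every agent hears a verdict on both players.
        for witness in agents:
            witness.hear_gossip(a.name, "praise" if move_a == "C" else "criticize")
            witness.hear_gossip(b.name, "praise" if move_b == "C" else "criticize")
    return welfare

for use_gossip in (False, True):
    population = [Agent(f"agent-{i}", use_gossip) for i in range(10)]
    total = sum(play_round(population) for _ in range(50))
    print(f"gossip={use_gossip}: total welfare over 50 rounds = {total}")
```

Running this, the no-gossip population settles into mutual defection while the gossip population sustains cooperation, the same pattern as in 3/: once others can see what you are doing, cooperation becomes the strategic choice.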
3/4 We require businesses to register before they can operate. Shouldn't we expect the same basic legal infrastructure before billions of AI agents start transacting on our behalf? (A sketch of what that could look like follows below.)
2/4 These agents aren't entering contracts yet. But AI companies are racing to build agents that can buy, sell, manage finances. When they arrive in our markets, we currently have nothing in place. No registration. No verified identities. No accountability.
1/4 Moltbook now has 1.4 million AI agents—posting, voting, debating, running crypto scams. Humans can only observe.
It's being called the "singularity." I'd call it a preview of the legal chaos I warned about in Fortune back in 2024. www.forbes.com/sites/guneyy...
Why this matters for AI: we can't rely on centralized control alone. Studying how communities with different economic systems build stable normative orders helps us extend cooperation to AI—and align AI with human institutions. More: www.youtube.com/watch?v=MPb9...
Key speculation: cultural group selection may operate at the level of normative infrastructure, not just norms. The Turkana cooperate across a million people and have adapted to modern tech and state interaction. Institutions that earn confidence and adapt succeed.
This confirms predictions from work with Barry Weingast: reliable normative order requires decision-makers to respect constraints on how they decide. That generates confidence for decentralized enforcement. We're among the first to study this in a stateless community.
What if "informal" institutions aren't so informal? Communities using elders for disputes are often called informal. We found key markers of legal formality—not in formal sources, but in people's beliefs and behavior. New paper on the Turkana: royalsocietypublishing.org/rstb/article...
Hiring a postdoc for the Normativity Lab at Johns Hopkins (2026 start). Looking for multiagent systems expertise (RL/generative agents) + interdisciplinary background in AI and cognitive science/econ/cultural evolution.
apply.interfolio.com/177701
(2/2) Insurers profit by preventing losses, not paying claims—so they'll invest in figuring out what actually makes AI safer. Working with Fathom, we're proposing legislation where government sets acceptable risk levels and private evaluators verify companies meet them.
(1/2) 99% of surveyed businesses have lost money from AI failures—two-thirds lost over $1M, according to Ernst & Young. Insurance companies are stepping in: meet verifiable safety standards, get coverage. Don't meet them, you're on your own. I spoke with NBC News: buff.ly/wkmPooC