
Gillian Hadfield

@ghadfield

Economist and legal scholar turned AI researcher focused on AI alignment and governance. Prof of government and policy and computer science at Johns Hopkins where I run the Normativity Lab. Recruiting CS postdocs and PhD students. gillianhadfield.org

1,227
Followers
1,147
Following
76
Posts
19.11.2024
Joined

Latest posts by Gillian Hadfield @ghadfield

Governments can’t translate “fair” or “safe” into technical specs fast enough. But leaving details to industry means the public loses its say. Regulatory markets close both gaps: governments set outcomes, private regulators compete to achieve them.

05.03.2026 18:50 👍 0 🔁 0 💬 0 📌 0
Preview
Regulatory Markets: The Future of AI Governance Regulatory markets can bridge technical and democratic gaps in AI governance by pairing public oversight with private, licensed regulatory innovation.

AI systems are quickly becoming embedded throughout the economy. But we have almost none of the regulatory tools needed to manage them, regulatory markets among them. Here's what I think we should do about it: www.americanbar.org/groups/scien...

05.03.2026 18:50 👍 0 🔁 0 💬 2 📌 0

“The most practical governance framework currently in circulation.” That’s Forbes on the Independent Verification Organization model Fathom and I have been developing. Legislation takes years; IVOs move at the pace of innovation.

03.03.2026 15:20 👍 0 🔁 0 💬 1 📌 0
Post image

"Why not work on what kind of new governance is needed to ensure secure, reliable, predictable use of all frontier models, from all companies?"

02.03.2026 23:06 👍 0 🔁 0 💬 0 📌 0
Preview
FAR.AI: Frontier Alignment Research FAR.AI is an AI safety research non-profit facilitating technical breakthroughs and fostering global collaboration.

In London today and tomorrow for the Alignment Workshop organized by FAR.AI. Keynoting alongside Rohin Shah and Allan Dafoe. I look forward to seeing everyone in attendance! www.far.ai/events/event...

02.03.2026 16:45 👍 2 🔁 0 💬 0 📌 0
Preview
International AI Safety Report The International AI Safety Report is the world's first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. The work was overseen by an…

The 2026 AI Safety Report's biggest finding isn't the risks it catalogs. It's the evidence gap. We're trying to build AI governance with almost no science underneath. Massive investment in the research regulatory systems depend on is overdue. internationalaisafetyreport.org

28.02.2026 00:12 👍 1 🔁 0 💬 0 📌 0
Screenshot of a LinkedIn post by Jack Shanahan (Retired USAF; Project Maven/DoD JAIC; NCSI MIS; SCSP Defense Partnership), posted 3 hours ago. In the post, Shanahan weighs in on the Anthropic-Pentagon dispute, noting that despite his Project Maven background, he's sympathetic to Anthropic's position. He argues no current LLM should be used in fully lethal autonomous weapons systems, calling that a reasonable red line, and opposes mass surveillance of US citizens as a second red line. He criticizes the public nature of the dispute, calls the supply chain risk designation "laughable," questions invoking the DPA against the company's will, and advocates for shared government-industry-academia governance of frontier AI models.

Why not work on new governance...

27.02.2026 23:39 👍 0 🔁 0 💬 0 📌 0
Preview
Announcing the "AI Agent Standards Initiative" for Interoperable and Secure Innovation The Initiative will ensure that the next generation of AI is widely adopted with confidence, can function securely on behalf of its users, and can interoperate smoothly across the digital ecosystem.

NIST just launched an AI Agent Standards Initiative for identity, security, and interoperability. AI agents are becoming economic actors with zero legal infrastructure in place. We require businesses to register to operate. Why expect less of AI agents? buff.ly/kTU2cfX

25.02.2026 14:29 👍 1 🔁 3 💬 0 📌 0
Preview
IASEAI - International Association for Safe and Ethical AI Building a global movement for safe and ethical AI. Join IASEAI to ensure AI systems operate safely and ethically, benefiting all of humanity.

In Paris this week for IASEAI (Feb 24-26). Tuesday: panel on the International AI Safety Report. Thursday: keynote on regulatory markets, a panel on AI assurance, and a talk in Seth Lazar’s workshop on normative competence. If you’re at IASEAI, come say hello!

23.02.2026 19:54 👍 4 🔁 3 💬 0 📌 0
Preview
Panel Members | Independent International Scientific Panel on AI The 40 members of the Independent International Scientific Panel on AI include people from all five of the UN’s regions. They are from various backgrounds, including academia, private…

Congratulations to Yoshua Bengio and the 39 other experts appointed to the UN’s first Independent International Scientific Panel on AI. 117-2 in the General Assembly.

20.02.2026 23:39 👍 0 🔁 0 💬 0 📌 0
AI Won’t Automatically Make Legal Services Cheaper - Curl, Kapoor & Narayanan

Better technology doesn’t fix broken institutions. The paper discusses regulatory markets as one path forward: instead of regulating providers directly, create a market for regulation itself. Worth a careful read. buff.ly/kbfvYqN

18.02.2026 16:45 👍 2 🔁 0 💬 0 📌 0
Post image

New in Lawfare from Justin Curl, Sayash Kapoor, & Arvind Narayanan: AI won’t automatically make legal services cheaper. I’ve been working on this for a long time: legal markets are broken because of adversarial dynamics, credence-goods problems, & regulations that protect incumbents, not consumers.

18.02.2026 16:45 👍 3 🔁 0 💬 1 📌 0
Preview
Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman Podcast Episode · Scaling Laws · 02/17/2026 · 55m

Billions going into building AI, barely any into making sure it works for us. Talked with @kevintfrazier.bsky.social & Andrew Freedman about our proposal making its way through state legislatures to build a competitive market for AI oversight. New @scalinglaws.bsky.social podcast:

17.02.2026 15:12 👍 7 🔁 2 💬 0 📌 1
Preview
Talk, Judge, Cooperate: Gossip-Driven Indirect Reciprocity in Self-Interested LLM Agents Indirect reciprocity, which means helping those who help others, is difficult to sustain among decentralized, self-interested LLM agents without reliable reputation systems. We introduce Agentic…

6/ Led by Shuhui Zhu with Yue Lin, Shriya Kaistha, Wenhao Li, Baoxiang Wang, Hongyuan Zha, and Pascal Poupart across Waterloo, Vector Institute, CUHK-Shenzhen, and Tongji. arxiv.org/abs/2602.07777

13.02.2026 00:51 👍 0 🔁 0 💬 0 📌 0

5/ We don't need AI agents that default to "nice." We need agents that understand when cooperation makes sense and when it doesn't. That takes institutional structure, not just training. Gossip turns out to be surprisingly powerful institutional structure.

13.02.2026 00:51 👍 0 🔁 0 💬 1 📌 0

4/ Some chat models did something different and arguably more troubling. They cooperated even when defection was the rational play. That looks like alignment on the surface, but it's cooperation without the reasoning to know when it should stop.

13.02.2026 00:51 👍 0 🔁 0 💬 1 📌 0

3/ The surprise: reasoning models defect every time without gossip, exactly as theory predicts. Give them reputational information and they flip to strategic cooperation. They figure out that cooperation pays when others can see what you're doing.

13.02.2026 00:51 👍 1 🔁 0 💬 1 📌 0

2/ Our new ALIGN framework gives LLM agents a protocol for sharing reputational information, and that alone sustains cooperation in decentralized systems. Agents praise cooperators, criticize defectors, and adjust their behavior based on what they hear.

13.02.2026 00:51 👍 0 🔁 0 💬 1 📌 0

1/ What makes self-interested AI agents cooperate? Not fine-tuning. Not central oversight. Gossip.

13.02.2026 00:51 👍 3 🔁 1 💬 1 📌 0
Preview
How to prevent millions of invisible law-free AI agents casually wreaking economic havoc | Fortune AI developers and investors are looking to create digital economic actors, with the capacity to do just about anything.

4/4 My Fortune piece: fortune.com/2024/10/17/a...

02.02.2026 23:06 👍 3 🔁 0 💬 0 📌 0

3/4 We require businesses to register before they can operate. Shouldn't we expect the same basic legal infrastructure before billions of AI agents start transacting on our behalf?

02.02.2026 23:06 👍 2 🔁 0 💬 1 📌 0

2/4 These agents aren't entering contracts yet. But AI companies are racing to build agents that can buy, sell, and manage finances. When they arrive in our markets, we currently have nothing in place. No registration. No verified identities. No accountability.

02.02.2026 23:06 👍 2 🔁 0 💬 1 📌 0

1/4 Moltbook now has 1.4 million AI agents—posting, voting, debating, running crypto scams. Humans can only observe.

It's being called the "singularity." I'd call it a preview of the legal chaos I warned about in Fortune back in 2024. www.forbes.com/sites/guneyy...

02.02.2026 23:06 👍 4 🔁 3 💬 1 📌 0
Gillian Hadfield - Alignment is social: lessons from human alignment for AI Current approaches conceptualize the alignment challenge as one of eliciting individual human preferences and training models to choose outputs that satisfy those preferences. To the extent…

Why this matters for AI: we can't rely on centralized control alone. Studying how communities with different economic systems build stable normative orders helps us extend cooperation to AI—and align AI with human institutions. More: www.youtube.com/watch?v=MPb9...

21.01.2026 20:45 👍 3 🔁 0 💬 0 📌 0

Key speculation: cultural group selection may operate at the level of normative infrastructure, not just norms. The Turkana cooperate across a million people and have adapted to modern tech and state interaction. Institutions that earn confidence and adapt succeed.

21.01.2026 20:45 👍 1 🔁 0 💬 1 📌 0

This confirms predictions from work with Barry Weingast: reliable normative order requires decision-makers to respect constraints on how they decide. That generates confidence for decentralized enforcement. We're among the first to study this in a stateless community.

21.01.2026 20:45 👍 0 🔁 0 💬 1 📌 0
Preview
Metanorms generate stable yet adaptable normative social order in a politically decentralized society Abstract. Norms are essential for social stability but can hinder adaptability in changing environments. Yet human societies have found ways to modify exis

What if "informal" institutions aren't so informal? Communities using elders for disputes are often called informal. We found key markers of legal formality—not in formal sources, but in people's beliefs and behavior. New paper on the Turkana: royalsocietypublishing.org/rstb/article...

21.01.2026 20:45 👍 4 🔁 1 💬 1 📌 0
Apply - Interfolio

Hiring a postdoc for the Normativity Lab at Johns Hopkins (2026 start). Looking for multiagent systems expertise (RL/generative agents) + interdisciplinary background in AI and cognitive science/econ/cultural evolution.
apply.interfolio.com/177701

16.12.2025 15:54 👍 6 🔁 11 💬 0 📌 1

(2/2) Insurers profit by preventing losses, not paying claims—so they'll invest in figuring out what actually makes AI safer. Working with Fathom, we're proposing legislation where government sets acceptable risk levels and private evaluators verify companies meet them.

20.11.2025 00:00 👍 1 🔁 0 💬 0 📌 0
Preview
Insurance companies are trying to avoid big payouts by making AI safer As government regulation lags, some insurance companies see a business case for pushing AI companies to minimize risk and adopt stronger guardrails.

(1/2) 99% of surveyed businesses have lost money from AI failures—two-thirds lost over $1M, according to Ernst & Young. Insurance companies are stepping in: meet verifiable safety standards, get coverage. Don't meet them, you're on your own. I spoke with NBC News: buff.ly/wkmPooC

20.11.2025 00:00 👍 4 🔁 2 💬 1 📌 0