
SeΓ‘n Γ“ hΓ‰igeartaigh

@sean-o-h

Academic, AI nerd and science nerd more broadly. Currently obsessed with Stravinsky (not sure how that happened).

3,829 Followers Β· 278 Following Β· 303 Posts Β· Joined 07.09.2024

Latest posts by SeΓ‘n Γ“ hΓ‰igeartaigh @sean-o-h

issuing correction on a previous post of mine, regarding the group effective altruism. you perhaps occasionally, under certain circumstances, "gotta hand it to them"

27.02.2026 22:42 πŸ‘ 142 πŸ” 14 πŸ’¬ 2 πŸ“Œ 1
Preview
The Administration's move to cut all federal government use of Anthropic and designate it a Supply Chain Risk sets a dangerous precedent. There's a reason that national security leaders and libertaria...

Excellent statement from the Center for Democracy and Technology’s @alexreevegivens.bsky.social, who notes that concern about today’s action is wide-ranging and bipartisan www.linkedin.com/posts/alexan...

28.02.2026 00:11 πŸ‘ 38 πŸ” 14 πŸ’¬ 1 πŸ“Œ 0

RT from Jonathan Birch on Twitter. "Say what you like about Effective Altruism, Pete Hegseth is not ranting about Kantians or virtue theorists trying to thwart his mass surveillance/autonomous weapons programs."

28.02.2026 11:07 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
The Pentagon brands Anthropic CEO a β€˜liar’ with a β€˜God complex’ as deadline looms | Fortune The Department of War has given Anthropic until 5 p.m. Friday to remove restrictions on how the military can use its AI, or face being labeled a national security threat.

We are in a new world now for frontier AI governance. It is not a better world. Companies and their employees need to think very carefully indeed about what they do next. 2/2
fortune.com/2026/02/27/p...

28.02.2026 10:33 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
The Pentagon brands Anthropic CEO a β€˜liar’ with a β€˜God complex’ as deadline looms | Fortune The Department of War has given Anthropic until 5 p.m. Friday to remove restrictions on how the military can use its AI, or face being labeled a national security threat.

"Γ“ hΓ‰igeartaigh said that the outcome of the dispute could extend well beyond Anthropic itself. β€œIf the Pentagon comes out on top of this,” he said, β€œit will establish precedents that will not be good for the independence of these companies, or their ability to hold to ethical standards.”" 1/2

28.02.2026 10:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Preview
The Pentagon brands Anthropic's CEO a 'liar' with a 'God complex' as deadline looms | Fortune The Department of War has given Anthropic until 5 p.m. Friday to remove restrictions on how the military can use its AI, or face being labeled a national security threat.

I'm quoted in this rather good Fortune article about the Anthropic/Pentagon situation. They got my role wrong (the CSER role was 3 years ago at this point; apologies to present CSER Director Sonja Amadae), but I'm happy with the quotes, so I'll take that.

fortune.com/2026/02/27/p...

27.02.2026 18:23 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Can India become the third pole in global AI dominance? Experts say it's hard to gauge how much the summit advanced India's aim to become a major third force in the global AI competition, Stuti Mishra reports

It was nice to discover I'm quoted in this Independent article on the India AI Summit:
www.independent.co.uk/asia/india/a...

27.02.2026 12:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
We Will Not Be Divided Employees of Google and OpenAI stand together to refuse the Department of War's demands to use AI models for domestic mass surveillance and autonomous killing without human oversight.

Google DeepMind and OpenAI colleagues: you know what to do.
notdivided.org

27.02.2026 10:21 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Intelligence Rising | CPH:DOX 2026 YouTube video by CPH:DOX

I was pleased to participate in this great wargame. No idea if I show up in the doc, which premieres next month. Incredible to see how far the collab between Intelligence Rising, i3 Gen and Faculty has come. Carefully thinking through these possibilities has never felt so necessary. www.youtube.com/watch?v=vVsn...

25.02.2026 11:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
The left is missing out on AI As a movement, it has largely refused to engage seriously with AI, ceding debate about a threat and opportunity to the right

I was pleased to contribute to this article, which I hear is now quite controversial on this site ;)
www.transformernews.ai/p/the-left-i...

18.02.2026 04:36 πŸ‘ 5 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
Post image (x3)

This is incredibly good:

www.transformernews.ai/p/the-left-i...

16.02.2026 21:45 πŸ‘ 13 πŸ” 3 πŸ’¬ 2 πŸ“Œ 2
Post image

Nice to see our paper debunking claims of a link between autism and the microbiome among Neuron's "Most Read" articles... 😊 www.cell.com/neuron/fullt...

17.02.2026 07:58 πŸ‘ 80 πŸ” 20 πŸ’¬ 2 πŸ“Œ 1

Can't get my mind off this stuff, I guess. Anyway, recommended: nobody did it quite like Le Guin. 2/2

08.02.2026 14:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

This weekend's break reading was Ursula K. Le Guin's The Lathe of Heaven. Not at all about AI, but it could not be more apt: a cautionary tale about the risks of putting unimaginable power in the hands of a centrally planned world / authoritarian government mindset, even with the best of intentions. 1/2

08.02.2026 14:16 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Preview
Predictable artificial intelligence

This paper wins my 'longest in the pipeline' award; the first preprint version hails from the misty forgotten era of 2023. I think it still has something to offer: a vision and characterisation of predictable AI. Privilege to be a small part of this great team.

www.sciencedirect.com/science/arti...

05.02.2026 18:21 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Preview
Stop admiring the problem: let’s get to work on reducing catastrophic risk A response to the Bulletin of the Atomic Scientists 2026 Doomsday clock announcement from the Centre for the Study of Existential Risk at the University of Cambridge In a year when the Doomsday Clock ...

The Doomsday Clock is now 85 seconds to midnight.

In our response to the @thebulletin.org announcement, we reflect on how new forms of global cooperation are essential for navigating today’s risks and how meaningful progress is still possible.
Read the full response below ⬇️
bit.ly/49KSNOC

27.01.2026 19:40 πŸ‘ 4 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1

(4) What they're flagging quite clearly is either (i) that necessary steps won't be taken in time in the absence of external pressure from governance, or (ii) that every frontier company would need to agree voluntarily on these steps. Your pick re: which of these is the heavier lift.

Discuss. 5/5

21.01.2026 11:29 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

(2) If we assign even 20% likelihood, then taking the possibility seriously makes this one of the world's top priorities, if not the top priority.
(3) Even if they're out by a factor of 2, 10 years is very little time to prepare for what they're envisaging. 4/5

21.01.2026 11:29 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

(1) It's worth society assigning at least a 20% chance to the possibility that these leading experts are right on the scientific possibility of near-term AGI and the need for more time to do it right. Are you >80% confident they're talking out of their hats, or running a bizarre marketing/regulatory capture strategy? 3/5

21.01.2026 11:29 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Both basically making clear that they don't feel they are able to do so voluntarily as companies within a competitive situation.

My claims: 2/5

21.01.2026 11:29 πŸ‘ 6 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

CEOs of Anthropic and DeepMind (both AI scientists by background) this week predicting AGI in 2 and 5 years respectively. Both stating clearly that they would prefer a slowdown or pause in progress, to address safety issues and to allow society and governance to catch up. 1/5

21.01.2026 11:29 πŸ‘ 16 πŸ” 3 πŸ’¬ 2 πŸ“Œ 1

rather than have these regions all be bystanders / collateral damage in an 'AI race' between the two superpowers. 2/2

20.01.2026 20:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It will be interesting to see if the shift currently happening fosters appetite for cooperation between middle powers on AI - e.g. Europe, Canada, Brazil and others. Because there are real opportunities there. And it could be very good for the world to have another pole. 1/2

20.01.2026 20:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Thank you John! A privilege to provide a piece for the Bulletin.

07.01.2026 12:45 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

A snowy start to the New Year in Cambridge!❄️

05.01.2026 14:37 πŸ‘ 3 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Preview
ILINA in 2026 Happy new year!

I'm so excited to see ILINA's upcoming year

cecilyongo.medium.com/ilina-in-202...

05.01.2026 13:06 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

You can write the sentence confidently? I can only aspire

17.12.2025 20:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

As this year draws near its close, my greatest hope is for a kinder 2026.

15.12.2025 16:29 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Stopping the Clock on catastrophic AI risk AI is already sufficiently robust that it introduces new global risks and exacerbates existing threats. Its development is being driven by some of the most powerful companies on Earth, and the technol...

An honour to provide a paper for the Bulletin's 80th anniversary, on the challenges AI poses for catastrophic risk in the coming decade. An exercise in navigating uncertainty and the Collingridge dilemma, with the stakes likely rising with each passing year.
thebulletin.org/premium/2025...

10.12.2025 16:43 πŸ‘ 4 πŸ” 4 πŸ’¬ 0 πŸ“Œ 1
Preview
Is China really racing for AGI? with SeÑn Ó hÉigeartaigh The Rhetoric and Reality of the AI Race

I really enjoyed chatting to Philip Bell about 'AGI race' claims and the AI race narrative overall. Link to podcast below

techfuturesproj.substack.com/p/is-china-r...

10.12.2025 09:44 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0