issuing correction on a previous post of mine, regarding the group effective altruism. you perhaps occasionally, under certain circumstances, "gotta hand it to them"
Excellent statement from the Center for Democracy and Technology's @alexreevegivens.bsky.social, who notes that concern about today's action is wide-ranging and bipartisan www.linkedin.com/posts/alexan...
RT from Jonathan Birch on twitter. "Say what you like about Effective Altruism, Pete Hegseth is not ranting about Kantians or virtue theorists trying to thwart his mass surveillance/autonomous weapons programs."
We are in a new world now for frontier AI governance. It is not a better world. Companies and their employees need to think very carefully indeed about what they do next. 2/2
fortune.com/2026/02/27/p...
"Ó hÉigeartaigh said that the outcome of the dispute could extend well beyond Anthropic itself. 'If the Pentagon comes out on top of this,' he said, 'it will establish precedents that will not be good for the independence of these companies, or their ability to hold to ethical standards.'" 1/2
I'm quoted in this rather good Fortune article about the Anthropic/Pentagon situation. They got my role wrong (my CSER role ended 3 years ago at this point; apologies to present CSER Director Sonja Amadae), but I'm happy with the quotes, so I'll take that.
fortune.com/2026/02/27/p...
It was nice to discover I'm quoted in this Independent article on the India AI Summit:
www.independent.co.uk/asia/india/a...
Google DeepMind and OpenAI colleagues: you know what to do.
notdivided.org
I was pleased to participate in this great wargame. No idea if I show up in the doc, which premieres next month. Incredible to see how far the collaboration between Intelligence Rising, i3 Gen and Faculty has come. Carefully thinking through these possibilities has never felt so necessary. www.youtube.com/watch?v=vVsn...
I was pleased to contribute to this article, which I hear is now quite controversial on this site ;)
www.transformernews.ai/p/the-left-i...
This is incredibly good:
www.transformernews.ai/p/the-left-i...
Nice to see our paper debunking claims of a link between autism and the microbiome among Neuron's "Most Read" articles... www.cell.com/neuron/fullt...
Can't get my mind off this stuff, I guess. Anyway, recommended: nobody did it quite like Le Guin. 2/2
This weekend's break reading was Ursula Le Guin's The Lathe of Heaven. Not at all about AI, but it could not be more apt to it: a cautionary tale about the risks of putting unimaginable power in the hands of a centrally planned, authoritarian government mindset, even with the best of intentions. 1/2
This paper wins my 'longest in the pipeline' award; the first preprint version hails from the misty forgotten era of 2023. I think it still has something to offer: a vision and characterisation of predictable AI. Privilege to be a small part of this great team.
www.sciencedirect.com/science/arti...
The Doomsday Clock is now 85 seconds to midnight.
In our response to the @thebulletin.org announcement, we reflect on how new forms of global cooperation are essential for navigating today's risks, and how meaningful progress is still possible.
Read the full response below ⬇️
bit.ly/49KSNOC
(4) What they're flagging quite clearly is either (i) that necessary steps won't be taken in time in absence of external pressure from governance or (ii) that the need is for every frontier company to agree voluntarily on these steps. Your pick re: which of these is the heavier lift.
Discuss. 5/5
(2) If we assign even a 20% likelihood, then taking the possibility seriously makes this one of the world's top priorities, if not the top priority.
(3) Even if they're out by a factor of 2, 10 years is very little time to prepare for what they're envisaging. 4/5
(1) It's worth society assigning at least a 20% chance to the possibility that these leading experts are right on the scientific possibility of near-term AGI and the need for more time to do it right. Are you >80% confident they're talking out of their hats, or running a bizarre marketing/regulatory capture strategy? 3/5
Both basically making clear that they don't feel they are able to do this voluntarily as companies within a competitive situation.
My claims: 2/5
The CEOs of Anthropic and DeepMind (both AI scientists by background) this week predicted AGI in 2 and 5 years respectively. Both stated clearly that they would prefer a slowdown or pause in progress, to address safety issues and to allow society and governance to catch up. 1/5
rather than have these regions all be bystanders / collateral damage in an 'AI race' between the two superpowers. 2/2
It will be interesting to see if the shift currently happening fosters appetite for cooperation between middle powers on AI - e.g. Europe, Canada, Brazil and others. Because there are real opportunities there. And it could be very good for the world to have another pole, 1/2
Thank you John! A privilege to provide a piece for the Bulletin.
A snowy start to the New Year in Cambridge! ❄️
I'm so excited to see ILINA's upcoming year
cecilyongo.medium.com/ilina-in-202...
You can write the sentence confidently? I can only aspire
As this year draws near its close, my greatest hope is for a kinder 2026.
An honour to provide a paper for the Bulletin's 80th anniversary, on the challenges AI poses for catastrophic risk in the coming decade. An exercise in navigating uncertainty and the Collingridge dilemma, with the stakes likely rising with each passing year.
thebulletin.org/premium/2025...
I really enjoyed chatting to Philip Bell about 'AGI race' claims and the AI race narrative overall. Link to podcast below
techfuturesproj.substack.com/p/is-china-r...