MIRI

@intelligence.org

For over two decades, the Machine Intelligence Research Institute (MIRI) has worked to understand and prepare for the critical challenges that humanity will face as it transitions to a world with artificial superintelligence.

67 Followers · 5 Following · 64 Posts · Joined 25.11.2024

Latest posts by MIRI @intelligence.org

To learn more, read Eliezer and Nate's recent NYT bestseller: ifanyonebuildsit.com

04.03.2026 23:57 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Senator Sanders met with Eliezer Yudkowsky, Nate Soares, Daniel Kokotajlo, and Jeffrey Ladish to discuss the extinction threat posed by the race to build superhuman AI systems.

04.03.2026 23:57 πŸ‘ 6 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Si alguien la crea, todos moriremos - Eliezer Yudkowsky, Nate Soares | PlanetadeLibros | Si alguien la crea, todos moriremos, by Eliezer Yudkowsky and Nate Soares. An urgent call to halt the race toward superintelligence.

Get your copy here: www.planetadelibros.us/libro-si-alg...

04.03.2026 20:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Now available in Spanish: "Si alguien la crea, todos moriremos" by Eliezer Yudkowsky and @so8res.bsky.social.

The New York Times bestseller on why artificial superintelligence is a threat to humanity and why the race to create it must stop.

04.03.2026 20:34 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

On BBC, MIRI CEO @malo.online discusses the dispute between Anthropic and the DoW:

"I really worry about the big questions of how we'll coordinate to set regulation, and potentially coordinate internationally[...] This is not a good first test."

04.03.2026 02:03 πŸ‘ 6 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence | MIRI TGT Nov 18, 2025 - We at the MIRI Technical Governance Team have released a report describing an example international agreement to halt the advancement towards artificial superintelligence. The agreement...

Learn more about the proposal here: techgov.intelligence.org/blog/new-rep...

28.02.2026 02:30 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

This week at the IASEAI conference, MIRI researcher @pbarnett.bsky.social discusses how and why the Technical Governance Team's proposed international agreement could halt the development of superintelligence.

28.02.2026 02:30 πŸ‘ 9 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
AI technologies ‘often behave in ways their creators don’t want,’ warns expert | CNN | Paula Newton speaks with Nate Soares, co-author of “If Anyone Builds It, Everyone Dies,” about the tense standoff between the artificial intelligence company Anthropic and the Pentagon and where the t...

Check out the full interview here:
www.cnn.com/2026/02/26/t...

27.02.2026 08:27 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

CNN's Paula Newton asks: Which is the greater threat? Governments racing to militarize AI, or companies racing to monetize it?

MIRI President Nate Soares says both pose dangers, and points to a third: AI systems themselves could become smart enough to defy control.

27.02.2026 08:27 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 1
Top AI researcher warns 'world is in peril' | YouTube video by ABC News

I was on ABC News Live last week!

My first live interview. I think it went pretty well, especially considering I only had ~20 mins notice :p

16.02.2026 18:39 πŸ‘ 5 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
01.01.2026 04:09 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Final Update: From ~$450k earlier today, we’re now down to just over $250k left in unclaimed matching funds!

4 hours left to go, and by golly it looks like we’ve got a real shot at securing all the matching.

Thanks everyone! Happy New Year πŸŽ‰

01.01.2026 04:06 πŸ‘ 1 πŸ” 2 πŸ’¬ 0 πŸ“Œ 2

With just under 7 hours to go, we’re now down below $300k of unclaimed matching funds!

01.01.2026 01:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
31.12.2025 19:49 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Update 2: We’re down to ~$450k left of unclaimed matching funds, with just over 12 hours to go!

Thanks to all those who stepped up in the last couple of days to close the gap by ~$500k. ❀️

31.12.2025 19:48 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 1
31.12.2025 19:47 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

PSA: It’s worth reaching out to old donors, because sometimes this happens πŸ™‚

30.12.2025 18:44 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Update: We’ve received over $250k since this was posted.

~$700k in matching funds remaining.

30.12.2025 18:41 πŸ‘ 3 πŸ” 2 πŸ’¬ 0 πŸ“Œ 1

And of course, to everyone who’s already donated, including the >100 first-time MIRI donors who gave during the fundraiser, thank you!

If you’re looking for other ways to help, sharing this thread or quote-posting it with why you chose to support us would mean a lot.

29.12.2025 22:55 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Donate - Machine Intelligence Research Institute | Support MIRI’s research. Find out if your employer will match donations! Donate using ACH, PayPal, digital currency. MIRI is a 501(c)(3) nonprofit.

If you’d like to help us secure as much of the remaining matching funds as we can, we’d be grateful for your support.

(If you don’t see your preferred donation method, including cryptocurrencies, reach out to us at development@intelligence.org.)

29.12.2025 22:55 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Matching Pledges | Survival and Flourishing Fund

Why is this real counterfactual matching?

The funds come from a Survival and Flourishing Fund matching pledgeβ€”not a traditional grant.

You can learn more about SFF’s matching pledges here:

29.12.2025 22:55 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
MIRI's 2025 Fundraiser - Machine Intelligence Research Institute | MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to impro...

Donations to MIRI before Jan 1 are high-leverage. We’ve got ~$1.6M in 1:1 matching from SFF, over half of which has yet to be claimed!

This is real counterfactual matching: whatever doesn’t get matched by the end of Dec 31, we don’t get. 🧡

29.12.2025 22:55 πŸ‘ 3 πŸ” 2 πŸ’¬ 1 πŸ“Œ 5

“If Anyone Builds It, Everyone Dies” was recently added to the New Yorker's “The Best Books of the Year So Far” list!

newyorker.com/best-books-2...

31.10.2025 02:30 πŸ‘ 5 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
New book argues superhuman AI puts humans on path to extinction | Nate Soares, the co-author of "If Anyone Builds It, Everyone Dies," argues in his new book that if any company builds an artificial superintelligence, it would end in human extinction. He joins "The…

β€œIf Anyone Builds It, Everyone Dies” coauthor Nate Soares recently chatted with Major Garrett on @cbsnews.com.

31.10.2025 01:29 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

@hankgreen.bsky.social rarely does interviews or 30+ minute videos.

His latest video, an hour-plus interview with Nate Soares about “If Anyone Builds It, Everyone Dies,” is a banger. My new favorite!

www.youtube.com/watch?v=5CKu...

30.10.2025 20:52 πŸ‘ 6 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Nate Soares - If Anyone Builds It, Everyone Dies | Nate Soares discusses the scramble to create superhuman AI that has us on a path to extinction. But it’s not too late to change course.

In the Bay Area? Come join Nate Soares, in conversation with Semafor Tech Editor Reed Albergotti, discussing Nate's NYT bestselling book “If Anyone Builds It, Everyone Dies.”

πŸ—“οΈ Tuesday Oct 28 @ 7:30pm at Manny’s in SF.

Get your tickets:

24.10.2025 22:09 πŸ‘ 5 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

Academy Award-winning director Kathryn Bigelow is reading “If Anyone Builds It, Everyone Dies.”

From an interview in The Guardian by Danny Leigh: www.theguardian.com/film/2025/oc...

18.10.2025 15:12 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

β€œThe book uses parables, very well told, to argue that evolutionary processes are not predictable, at least not easily. [...] I came away far more concerned than I had been before opening the book.”

www.forbes.com/sites/billco...

18.10.2025 01:18 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
How Afraid of the AI Apocalypse Should We Be? | The Ezra Klein Show | YouTube video by The Ezra Klein Show

Today’s episode of The Ezra Klein Show.

The researcher Eliezer Yudkowsky argues that we should be very afraid of artificial intelligence’s existential risks.
www.nytimes.com/2025/10/15/o...

youtu.be/2Nn0-kAE5c0?...

15.10.2025 13:40 πŸ‘ 58 πŸ” 14 πŸ’¬ 15 πŸ“Œ 10

Michael talks with Nate Soares, co-author of "If Anyone Builds It, Everyone Dies", on the risks of advanced artificial intelligence. Soares argues that humanity must treat AI risk as seriously as pandemics or nuclear war.
Hear the #bookclub #podcast πŸŽ§πŸ“– https://loom.ly/w1hBbWM

15.10.2025 13:30 πŸ‘ 7 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0