
@wdmacaskill

643
Followers
227
Following
232
Posts
14.11.2024
Joined

Latest posts by @wdmacaskill

Preview
Doing Good Better | Linktree: Effective Altruism and a Radical New Way to Make a Difference, from William MacAskill

linktr.ee/DoingGoodBe...

25.02.2026 16:40 👍 2 🔁 0 💬 0 📌 0

It’s been an honour to have been a part of it all.

25.02.2026 16:40 👍 1 🔁 0 💬 1 📌 0

- Corporate cage-free campaigns have spared billions of hens from caged confinement.
- AI safety has gone from a fringe concern to a thriving field.

And in the last year alone, money moved to effective charities was up by around 40%, now closing in on $2B per year.

25.02.2026 16:40 👍 3 🔁 0 💬 1 📌 0

It feels crazy that 10 years have passed, but a lot has happened since then:
- The number of people taking Giving What We Can’s 10% pledge has grown tenfold.
- GiveWell has moved over $2 billion to highly effective global health and development charities, saving over 300,000 lives.

25.02.2026 16:40 👍 4 🔁 0 💬 1 📌 0

The core of the book is the same - explaining some principles for how we can have a bigger positive impact in our lives, whether through our donations, our career choice, or what we buy.

Link to buy the book in the comments!

25.02.2026 16:40 👍 2 🔁 0 💬 1 📌 0
Post image

I’m excited to say that the revised 10-yr anniversary edition of Doing Good Better is out now!

It’s got updated statistics and a new foreword, reflecting on the last ten years and responding to some key criticisms.

25.02.2026 16:40 👍 9 🔁 0 💬 1 📌 2
Preview
The International AGI Project Series | Government Collaboration: a research series on international AGI collaboration. Explores the "Intelsat for AGI" model for government cooperation on artificial general intelligence.

The full series discusses why this might be desirable, what the AGI project should focus on, and how to make it more likely.

Link: www.forethought.org/research/th...

27.01.2026 19:59 👍 0 🔁 0 💬 0 📌 0

Circumscribing non-US influence in this way, while letting the US call the shots day to day, makes the proposal both more feasible and less likely to get bogged down in bureaucracy.

27.01.2026 19:59 👍 0 🔁 0 💬 2 📌 0

The core idea is that we can get most of the benefits of an international project by giving non-US countries meaningful influence over only a relatively small number of decisions.

27.01.2026 19:59 👍 0 🔁 0 💬 1 📌 0

The main result is a proposal I call "Intelsat for AGI" — modelled on the international project that developed the first global satellite communications network.

27.01.2026 19:59 👍 1 🔁 0 💬 1 📌 0

Today I’m publishing a series of research notes on the idea of an international AGI project.

The aim is to assess how desirable an international AGI project is, and what the best version of such a project is (taking feasibility into account).

27.01.2026 19:59 👍 5 🔁 0 💬 1 📌 0
Preview
Against Maxipok: existential risk isn’t everything — EA Forum. We argue against the view that reducing existential risk should be the sole priority for improving the long-term future: non-catastrophic lock-in events can matter just as much.

EA Forum post here: forum.effectivealtruism.org/posts/qhdk8...

21.01.2026 14:18 👍 1 🔁 0 💬 0 📌 0
Preview
Against Maxipok: Existential risk isn’t everything

Substack here: newsletter.forethought.org/p/against-m...

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0
Preview
Beyond Existential Risk: Bostrom's Maxipok principle suggests reducing existential risk should be the overwhelming focus for improving humanity’s long-term prospects. But this rests on an implicitly dichotomous view of future value, where most outcomes are either near-worthless or near-best. Against Maxipok, we argue it is possible to substantially influence the long-term future through channels other than reducing existential risk — including how values, institutions, and power distributions become locked in.

Full paper here: www.forethought.org/research/be...

21.01.2026 14:18 👍 1 🔁 0 💬 1 📌 0

This paper is aimed at a more academic audience, offers some new arguments and counterarguments, and provides a formal framework for thinking about existential vs. trajectory impact.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

This paper complements my "Better Futures" series from last year. Better Futures argued that Flourishing (the quality of the future given survival) deserves as much attention as Surviving.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

We argue longtermists should focus on the broader category of "grand challenges"—decisions that substantially affect the expected value of Earth-originating life, whether or not they involve existential risk.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

This has practical implications: Maxipok can even recommend actions that are actively harmful, if they reduce x-risk while making the surviving future worse.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

Lock-in events—around AGI, space governance, digital rights—could shape civilization's trajectory without being "existential" in the traditional sense. The value of the future lies on a spectrum, and we can shift where on that spectrum we land.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

Third, that the very best goods are so much better than almost all others that only near-optimal futures matter.

None of these hold up. The future isn't binary. Power could be divided among groups with different values; some might optimize for good, others might not.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

We assess three potential justifications for Dichotomy and reject each. First, that there's a wide basin of attraction, such that any sufficiently good civilization would tend towards the best. Second, that value is bounded above, making "good enough" essentially optimal.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

If Dichotomy holds, then all that matters is shifting probability from the bad cluster to the good cluster—i.e., reducing existential risk.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0
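
To make that reduction explicit, here is a minimal sketch in notation of my own (not taken from the paper): let p be the probability of landing in the good cluster, V_ok its fixed value, and V_cat ≈ 0 the value of the catastrophe cluster. Then

\[
\mathbb{E}[V] = p\,V_{\text{ok}} + (1-p)\,V_{\text{cat}} \approx p\,V_{\text{ok}},
\]

so with V_ok fixed, maximising expected value collapses into maximising p, i.e. minimising existential risk, which is just Maxipok. Rejecting Dichotomy means V_ok itself becomes a variable our actions can move.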

The key assumption behind Maxipok is what we call "Dichotomy": that the future will either have little-to-zero value (catastrophe), or some specific extremely high value (everything else), and our actions can only move probability mass between these two poles.

21.01.2026 14:18 👍 0 🔁 0 💬 1 📌 0

Beyond Existential Risk:

In a new paper, Guive Assadi and I argue against Bostrom's "Maxipok" principle—that altruists should seek to maximize the probability of an "OK outcome," where OK just means avoiding existential catastrophe.

21.01.2026 14:18 👍 4 🔁 0 💬 2 📌 1
Preview
What sort of post-superintelligence society should we aim for? The case for ‘viatopia’: a state of society that is on track for a near-best future, whatever that might look like.

newsletter.forethought.org/p/viatopia

08.01.2026 10:09 👍 2 🔁 0 💬 0 📌 0

My current guesses for what viatopia looks like: material abundance, technological progress, coordination to avoid conflict, low catastrophic risk—plus preserving society-wide optionality, cultivating reflection, and structuring deliberation so better ideas win out.

08.01.2026 10:09 👍 0 🔁 0 💬 1 📌 0

A viatopia is a state of society that is *on track* for a near-best future, whatever that might look like. A teenager might not know what they want to do with their life, but they do know that a good education keeps their options open.

08.01.2026 10:09 👍 0 🔁 0 💬 1 📌 0

The transition to superintelligence will present many problems all at once, and we may need to choose between very different solutions to the same problems. We need a way to prioritise and plan.

So I want to introduce a third framing: viatopia.

08.01.2026 10:08 👍 1 🔁 0 💬 1 📌 0

The main alternative is “protopianism”: solving the most urgent problems one by one, not guided by any big-picture view of society’s long-run course. I prefer protopianism to utopianism, but it gives up too much.

08.01.2026 10:08 👍 0 🔁 0 💬 1 📌 0
Post image

Almost no one has articulated a positive vision for what comes after superintelligence. What should we be trying to aim for?

Utopias from history look clearly dystopian to us, and we should expect the same for our own attempts. We don’t know enough, or have the authority, to decide the details.

08.01.2026 10:08 👍 1 🔁 1 💬 2 📌 0