It’s been an honour to be a part of it all.
- Corporate cage-free campaigns have spared billions of hens from caged confinement.
- AI safety has gone from a fringe concern to a thriving field.
And in the last year alone, money moved to effective charities was up by around 40%, now closing in on $2B per year.
It feels crazy that 10 years have passed, and a lot has happened in that time:
- The number of people taking Giving What We Can’s 10% pledge has grown tenfold.
- GiveWell has moved over $2 billion to highly effective global health and development charities, saving over 300,000 lives.
The core of the book is the same: explaining some principles for how we can have a bigger positive impact in our lives, whether through our donations, our career choices, or what we buy.
Link to buy the book in the comments!
I’m excited to say that the revised 10-year anniversary edition of Doing Good Better is out now!
It’s got updated statistics and a new foreword, reflecting on the last ten years and responding to some key criticisms.
The full series discusses why this might be desirable, what the AGI project should focus on, and how to make this more likely.
Link: www.forethought.org/research/th...
Because non-US influence is circumscribed in this way, with the US calling the shots day to day, the proposal becomes both more feasible and less likely to get bogged down in bureaucracy.
The core idea is that we can get most of the benefits of an international project by giving non-US countries meaningful influence over only a relatively small number of decisions.
The main result is a proposal I call “Intelsat for AGI”, modelled on the international project that developed the first global satellite communications network.
Today I’m publishing a series of research notes on the idea of an international AGI project.
The aim is to assess how desirable an international AGI project is, and what the best version of such a project is (taking feasibility into account).
This paper is aimed at a more academic audience, offers some new arguments and counterarguments, and provides a formal framework for thinking about existential vs. trajectory impact.
This paper complements my "Better Futures" series from last year. Better Futures argued that Flourishing (the quality of the future given survival) deserves as much attention as Surviving.
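Roughly, in symbols (a simplified sketch; the paper’s framework is more careful):

\[
\mathbb{E}[\text{value of the future}] \;=\; \underbrace{\Pr(\text{survival})}_{\text{Surviving}} \times \underbrace{\mathbb{E}[\text{value} \mid \text{survival}]}_{\text{Flourishing}}
\]

Existential impact moves the first factor; trajectory impact moves the second.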
We argue longtermists should focus on the broader category of "grand challenges"—decisions that substantially affect the expected value of Earth-originating life, whether or not they involve existential risk.
This has practical implications: Maxipok can even recommend actions that are actively harmful, if they reduce x-risk while making the surviving future worse.
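A toy illustration, with numbers invented for the example: suppose an action raises the probability of avoiding catastrophe from 0.8 to 0.9 but makes the surviving future worse, dropping its value from 100 to 80. Then

\[
0.8 \times 100 \;=\; 80 \quad\longrightarrow\quad 0.9 \times 80 \;=\; 72,
\]

so expected value falls from 80 to 72, even though Maxipok endorses the action.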
Lock-in events—around AGI, space governance, digital rights—could shape civilization's trajectory without being "existential" in the traditional sense. The value of the future lies on a spectrum, and we can shift where on that spectrum we land.
Third, that the very best goods are so much better than almost all others that only near-optimal futures matter.
None of these hold up. The future isn't binary. Power could be divided among groups with different values; some might optimize for good, others won't.
We assess three potential justifications for Dichotomy and reject each. First, that there's a wide basin of attraction, such that any sufficiently good civilization would tend towards the best. Second, that value is bounded above, making "good enough" essentially optimal.
If Dichotomy holds, then all that matters is shifting probability from the bad cluster to the good cluster—i.e., reducing existential risk.
The key assumption behind Maxipok is what we call "Dichotomy": that the future will either have little-to-zero value (catastrophe), or some specific extremely high value (everything else), and our actions can only move probability mass between these two poles.
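In symbols, as a minimal sketch of the assumption: Dichotomy says the future’s value V is either roughly 0 (catastrophe) or some fixed V* (everything else). Writing p for the probability of avoiding catastrophe,

\[
\mathbb{E}[V] \;=\; p \cdot V^{*} + (1-p) \cdot 0 \;=\; p\,V^{*},
\]

so maximizing expected value collapses into maximizing p, which is exactly Maxipok.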
Beyond Existential Risk:
In a new paper, Guive Assadi and I argue against Bostrom's "Maxipok" principle—that altruists should seek to maximize the probability of an "OK outcome," where OK just means avoiding existential catastrophe.
My current guesses for what viatopia looks like: material abundance, technological progress, coordination to avoid conflict, low catastrophic risk—plus preserving society-wide optionality, cultivating reflection, and structuring deliberation so better ideas win out.
A viatopia is a state of society that is *on track* for a near-best future, whatever that might look like. A teenager might not know what they want to do with their life, but know that a good education keeps their options open.
The transition to superintelligence will present many problems all at once, and we may need to choose between very different solutions to the same problems. We need a way to prioritise and plan.
So I want to introduce a third framing: viatopia.
The main alternative is “protopianism”: solving the most urgent problems one by one, not guided by any big-picture view of society’s long-run course. I prefer protopianism to utopianism, but it gives up too much.
Almost no one has articulated a positive vision for what comes after superintelligence. What should we be trying to aim for?
Utopias from history look clearly dystopian to us, and we should expect the same of our own attempts. We neither know enough, nor have the authority, to decide the details.