
Rob Bensinger

@robbensinger

594 Followers · 25 Following · 56 Posts · Joined 15.11.2024

Latest posts by Rob Bensinger @robbensinger

AI Expert Tells Bernie: “The Humans will be Discarded” (YouTube video by Senator Bernie Sanders)

Will AI become smarter than humans?

If so, is humanity in danger?

I went to Silicon Valley to ask some of the leading AI experts that question.

Here’s what they had to say:

04.03.2026 22:21 👍 581 🔁 163 💬 95 📌 44

A tale of two warning shots, #1: COVID happened. Scientists are divided on whether it was a lab leak. The world did not rally against dangerous viral research in labs. The warning shot was squandered.

19.02.2026 17:07 👍 2 🔁 1 💬 1 📌 0
A Near-Term Policy for Not Getting Killed by AI Hundreds of scientists, including three of the four most cited living AI scientists, have said that AI poses a very real chance of killing us all.

In post form: nothingismere.substack.com/p/a-near-ter...

13.02.2026 04:07 👍 2 🔁 0 💬 0 📌 0

nothingismere.substack.com/p/a-near-ter...

13.02.2026 04:06 👍 3 🔁 0 💬 0 📌 0

bsky.app/profile/robb...

12.02.2026 23:44 👍 1 🔁 0 💬 0 📌 0
What Would It Take to Shut Down Global AI Development? | If Anyone Builds It, Everyone Dies Resources and Q&A for the book If Anyone Builds It, Everyone Dies.

¹¹ ifanyonebuildsit.com/13/what-woul...
¹² forbes.com/sites/federi...
¹³ ifanyonebuildsit.com/treaty (different version at arxiv.org/pdf/2511.10783)
¹⁴ reuters.com/world/china/...
¹⁵ archive.ph/K9mVn

12.02.2026 23:41 👍 1 🔁 0 💬 0 📌 0
How much does it cost to train frontier AI models? The cost of training top AI models has grown 2-3x annually for the past eight years. By 2027, the largest models could cost over a billion dollars.

Sources:
¹ epoch.ai/blog/how-muc...
² epoch.ai/data-insight...
³ arxiv.org/pdf/2408.16074
⁴ datacenterdynamics.com/en/news/tsmc...
⁵ wsj.com/tech/a-criti...
⁶, ⁹ cset.georgetown.edu/publication/...
⁷ intelligence.org/wp-content/u...
⁸ x.com/ESYudkowsky/...
¹⁰ arxiv.org/pdf/2511.10783

12.02.2026 23:41 👍 1 🔁 0 💬 1 📌 1
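The compounding claim in the post above can be checked with quick arithmetic. This is a minimal sketch, assuming a hypothetical $1M starting cost purely for illustration (not a figure from the cited sources):

```python
# Illustrative compounding: a cost that grows 2x-3x per year for eight years.
# The $1M starting point is a placeholder assumption, not data from Epoch AI.
start_cost = 1_000_000

for factor in (2, 3):
    cost = start_cost * factor ** 8
    print(f"{factor}x/year for 8 years: ${cost:,.0f}")
# → $256,000,000 at 2x/year, $6,561,000,000 at 3x/year
```

Even from a modest starting point, eight years of 3x annual growth multiplies cost by 3⁸ ≈ 6,561, which is how billion-dollar training runs become plausible by 2027.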

My hope, in writing this, is to wake people up a bit faster. If you share that hope, maybe share this post, join the conversation about it, or write your own, better version of a "wake-up" warning. Don't give up on the world so easily.

12.02.2026 23:40 👍 1 🔁 0 💬 1 📌 0

Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so clueful people on all sides will endorse those policies.

The question, again, is just whether people will clue in to what's happening soon enough to matter.

12.02.2026 23:39 👍 1 🔁 0 💬 1 📌 0

The CCP is a US adversary. That doesn't mean they're idiots who will destroy their own country in order to thumb their nose at the US. If a policy is Good, that doesn't mean that everyone Bad will automatically oppose it.

12.02.2026 23:39 👍 1 🔁 0 💬 1 📌 0

The pitch "We can't let China beat us at Russian roulette!" is not very compelling. Even if you suspect China might be unwilling to make a deal, there's zero cost to making an attempt. And the US has already expressed some interest in brokering an international agreement as well:

12.02.2026 23:37 👍 1 🔁 0 💬 1 📌 0

And, quoting The Economist:¹⁵

12.02.2026 23:35 👍 1 🔁 0 💬 1 📌 0

A: The CCP has already expressed interest in international coordination and regulation on AI. E.g., Reuters reported that Chinese Premier Li Qiang said, "We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible."¹⁴

12.02.2026 23:29 👍 3 🔁 0 💬 1 📌 0

So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have enormous power.

- Q: But what about China? Surely they’d never agree to an arrangement like this.

12.02.2026 23:29 👍 1 🔁 0 💬 1 📌 0

Going from "zero superintelligences" to "one superintelligence" is already lethally dangerous. The challenge is to block the construction of ASI while there's still time, not to limit proliferation after it already exists, when it's far too late to take the steering wheel.

12.02.2026 23:28 👍 1 🔁 0 💬 1 📌 0

... and were instead facing a world where dozens or hundreds of nations possess nuclear weapons.

When it comes to superintelligence, anyone building "god-like AI" is likely to get us all killed — whether the developer is a military or a company, and whether their intentions are good or ill.

12.02.2026 23:27 👍 1 🔁 0 💬 1 📌 0

By analogy, nuclear nonproliferation efforts haven’t been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9. But this is a much more survivable state of affairs than if we hadn’t tried to limit proliferation at all...

12.02.2026 23:26 👍 1 🔁 0 💬 1 📌 0

If instead a tiny fraction of the world is trying to find sneaky ways to build a small researcher-starved frontier AI project here and there, while dealing with enormous international pressure and censure, then that seems like a much more survivable situation.

12.02.2026 23:26 👍 1 🔁 0 💬 1 📌 0

... (and no, I don't think this is realistic in the current landscape)...

... that chance increasingly goes out the window as the race heats up, because prioritizing safety will mean sacrificing your competitive edge.

12.02.2026 23:25 👍 1 🔁 0 💬 1 📌 0

If the whole world is racing to build superintelligence as fast as possible, then we’re very likely dead. Even if you think there's a chance that cautious devs could stay in control as AI starts to vastly exceed the intelligence of the human race...

12.02.2026 23:25 👍 1 🔁 0 💬 1 📌 0

A: It’s very rare for countries (or companies!) to deliberately violate international law. It’s rare for countries to take actions that are widely seen as serious threats to other nations’ security. (If it weren't rare, it wouldn't be a big news story when it does happen!)

12.02.2026 23:24 👍 1 🔁 0 💬 1 📌 0

- Q: But surely there will be countries that end up defecting from such an agreement. Even if you’re right that it’s in no one’s interest to race once they understand the situation, plenty of people won’t understand the situation, and will just see superintelligent AI as a way to get rich quick.

12.02.2026 23:24 👍 1 🔁 0 💬 1 📌 0

(Some templates of agreements that would do the job have already been drafted.¹³)

Governments can create a deterrence regime by articulating clear limits and enforcement actions. It’s in no country’s interest to race to its own destruction, and a deterrence regime like this provides an alternative path.

12.02.2026 23:23 👍 1 🔁 0 💬 1 📌 0

- Q: But if the US halts, isn’t that just ceding the race to authoritarian regimes?

A: The US shouldn’t halt unilaterally; that would just drive AI research to other countries. Rather, the US should broker an international agreement where everyone agrees to halt simultaneously.

12.02.2026 23:23 👍 1 🔁 0 💬 1 📌 0

What's left is to dial up the volume on that talk, translate that talk into planning and fast action, and recognize that "there's uncertainty how much time we have left" makes this a more urgent problem, not less.

12.02.2026 23:22 👍 1 🔁 0 💬 1 📌 0

At that point, the cat has already firmly left the bag. (And it's not as though there's anything unusual about governments heavily regulating powerful new technologies.)

12.02.2026 23:22 👍 1 🔁 0 💬 1 📌 0

Building superintelligence is unpopular with the voting public,¹² and hundreds of elected officials have already named this issue as a serious priority. The UN Secretary-General and major heads of state are routinely talking about AI loss-of-control scenarios and human extinction.

12.02.2026 23:21 👍 1 🔁 0 💬 1 📌 0

A: It will require science communicators to alert policymakers to the current situation, and it will require policymakers to come together to craft a solution. But it doesn’t seem at all infeasible.

12.02.2026 23:21 👍 1 🔁 0 💬 1 📌 0

The typical consumer wouldn’t even necessarily see any difference, since the typical consumer doesn’t run a data center. They just wouldn’t see dramatic improvements to the chatbots they use.

- Q: But isn’t this politically infeasible?

12.02.2026 23:21 👍 1 🔁 0 💬 1 📌 0

- Q: Isn't this totalitarian?

A: Governments regulate thousands of technologies. Adding one more to the list won’t suddenly tip the world over into a totalitarian dystopia, any more than banning chemical or biological weapons did.

12.02.2026 23:20 👍 1 🔁 0 💬 1 📌 0