Will AI become smarter than humans?
If so, is humanity in danger?
I went to Silicon Valley to ask some of the leading AI experts that question.
Here’s what they had to say:
- Q: Isn't this totalitarian?
A: Governments regulate thousands of technologies. Adding one more to the list won't suddenly tip the world over into a totalitarian dystopia, any more than banning chemical or biological weapons did.
The typical consumer wouldn't even necessarily see any difference, since the typical consumer doesn't run a data center. They just wouldn't see dramatic improvements to the chatbots they use.
- Q: But isn't this politically infeasible?
A: It will require science communicators to alert policymakers to the current situation, and it will require policymakers to come together to craft a solution. But it doesn't seem at all infeasible.
Building superintelligence is unpopular with the voting public,¹² and hundreds of elected officials have already named this issue as a serious priority. The UN Secretary-General and major heads of state are routinely talking about AI loss-of-control scenarios and human extinction.
What's left is to dial up the volume on that talk, translate that talk into planning and fast action, and recognize that "there's uncertainty about how much time we have left" makes this a more urgent problem, not less.
- Q: But if the US halts, isn't that just ceding the race to authoritarian regimes?
A: The US shouldn't halt unilaterally; that would just drive AI research to other countries. Rather, the US should broker an international agreement where everyone agrees to halt simultaneously.
Governments can create a deterrence regime by articulating clear limits and enforcement actions. It's in no country's interest to race to its own destruction, and a deterrence regime like this provides an alternative path.
(Some templates of agreements that would do the job have already been drafted.¹³)
- Q: But surely there will be countries that end up defecting from such an agreement. Even if you're right that it's in no one's interest to race once they understand the situation, plenty of people won't understand the situation, and will just see superintelligent AI as a way to get rich quick.
A: It's very rare for countries (or companies!) to deliberately violate international law. It's rare for countries to take actions that are widely seen as serious threats to other nations' security. (If it weren't rare, it wouldn't be a big news story when it does happen!)
If the whole world is racing to build superintelligence as fast as possible, then we're very likely dead. Even if you think there's a chance that cautious devs could stay in control as AI starts to vastly exceed the intelligence of the human race (and no, I don't think this is realistic in the current landscape), that chance increasingly goes out the window as the race heats up, because prioritizing safety will mean sacrificing your competitive edge.
If instead a tiny fraction of the world is trying to find sneaky ways to build a small, researcher-starved frontier AI project here and there, while dealing with enormous international pressure and censure, then that seems like a much more survivable situation.
By analogy, nuclear nonproliferation efforts haven't been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9. But this is a much more survivable state of affairs than if we hadn't tried to limit proliferation at all, and were instead facing a world where dozens or hundreds of nations possess nuclear weapons.
When it comes to superintelligence, anyone building "god-like AI" is likely to get us all killed — whether the developer is a military or a company, and whether their intentions are good or ill.
Going from "zero superintelligences" to "one superintelligence" is already lethally dangerous. The challenge is to block the construction of ASI while there's still time, not to limit proliferation after it already exists, when it's far too late to take the steering wheel. At that point, the cat has already firmly left the bag. (And it's not as though there's anything unusual about governments heavily regulating powerful new technologies.)
So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have enormous power.
- Q: But what about China? Surely they'd never agree to an arrangement like this.
A: The CCP has already expressed interest in international coordination and regulation on AI. E.g., Reuters reported that Chinese Premier Li Qiang said, "We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible."¹⁴
And, quoting The Economist:¹⁵
The pitch "We can't let China beat us at Russian roulette!" is not very compelling. Even if you suspect China might be unwilling to make a deal, there's zero cost to making an attempt. And the US has already expressed some interest in brokering an international agreement as well.
The CCP is a US adversary. That doesn't mean they're idiots who will destroy their own country in order to thumb their nose at the US. If a policy is Good, that doesn't mean that everyone Bad will automatically oppose it.
Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so clueful people on all sides will endorse those policies.
The question, again, is just whether people will clue in to what's happening soon enough to matter.
My hope, in writing this, is to wake people up a bit faster. If you share that hope, maybe share this post, or join the conversation about it; or write your own, better version of a "wake-up" warning. Don't give up on the world so easily.
Sources:
¹ epoch.ai/blog/how-muc...
² epoch.ai/data-insight...
³ arxiv.org/pdf/2408.16074
⁴ datacenterdynamics.com/en/news/tsmc...
⁵ wsj.com/tech/a-criti...
⁶, ⁹ cset.georgetown.edu/publication/...
⁷ intelligence.org/wp-content/u...
⁸ x.com/ESYudkowsky/...
¹⁰ arxiv.org/pdf/2511.10783
¹¹ ifanyonebuildsit.com/13/what-woul...
¹² forbes.com/sites/federi...
¹³ ifanyonebuildsit.com/treaty (a different version is at arxiv.org/pdf/2511.10783)
¹⁴ reuters.com/world/china/...
¹⁵ archive.ph/K9mVn
A tale of two warning shots, #1: COVID happened. Scientists are divided on whether it was a lab leak. The world did not rally against dangerous viral research in labs. The warning shot was squandered.
bsky.app/profile/robb...