
ControlAI

@controlai.com

We work to keep humanity in control. Subscribe to our free newsletter: https://controlai.news Join our Discord at: https://discord.com/invite/ptPScqtdc5

438 Followers · 13 Following · 1,133 Posts · Joined 18.11.2024

Latest posts by ControlAI @controlai.com

The Exponential: AIs are rapidly improving in their ability to code. The consequences of this could be huge.

New tests confirm AIs are rapidly improving in their ability to code. The consequences of this could be huge.

We break down what the trend in AI time horizons means, and how it relates to the industry plan to initiate a dangerous intelligence explosion.
http://controlai.new...

06.03.2026 18:57 👍 3 🔁 0 💬 0 📌 0

ControlAI advisor Connor Leahy: We are barrelling headlong towards superintelligence.

If we build systems vastly smarter than humans and cannot control them, the future will belong to them, not us.

There is a solution: ban the development of superintelligent AI.

02.03.2026 14:16 👍 1 🔁 0 💬 0 📌 1

What can a country like Canada do to prevent the danger of superintelligent AI?

ControlAI's Samuel Buteau to Canada's Senate Committee on Human Rights: We can solve the coordination problem.

If Canada championed this issue, others would find it much easier to look into it.

01.03.2026 19:36 👍 2 🔁 0 💬 0 📌 0

With our contact tools, people have sent over 150,000 messages to their lawmakers asking them to ban superintelligence, which experts warn could cause human extinction.

You should too.

We make it super easy for you; it takes less than a minute!

campaign.controlai.c...

01.03.2026 10:11 👍 3 🔁 0 💬 0 📌 0
Irresponsible Scaling: Top AI Company Drops Central Safety Pledge. A top AI company just dropped a central safety pledge. It couldn't be clearer that we can't rely on voluntary commitments to prevent the risk of human extinction that experts warn of.

While top AI company Anthropic is beefing with the Pentagon, what's being ignored is that they just dropped a central safety pledge not to train or deploy AIs capable of catastrophic damage without adequate safeguards in place.

Our latest article:

27.02.2026 20:18 👍 3 🔁 2 💬 0 📌 0

Chris Law MP, the SNP's spokesperson for Business, Trade and International Development, just backed our campaign for binding regulation on the most powerful AIs!

115 UK politicians now support us, recognising the risk of extinction posed by superintelligent AI.

26.02.2026 12:32 👍 0 🔁 0 💬 0 📌 0
Top AI Safety Exec LOSES CONTROL Of AI Bot: Krystal and Saagar discuss a top AI safety exec losing control of an AI bot.

ControlAI's CEO Andrea Miotti is on Breaking Points!

Andrea joined Krystal Ball and Saagar Enjeti to discuss how we can avoid the threat posed by superintelligence.

AI CEOs say that what they're building could wipe us out. Let's take them at their word and ban it.

25.02.2026 20:01 👍 2 🔁 0 💬 0 📌 0

Anneliese Dodds MP, who has served as shadow chancellor and as a minister, just backed our campaign!

Over 100 UK politicians now recognise the risk of extinction posed by superintelligent AI, supporting our call for binding regulation on the most powerful AIs.

25.02.2026 14:26 👍 2 🔁 0 💬 0 📌 0

NEW: Chris Bloore MP just backed our campaign, joining over 100 UK politicians in recognising the risk of extinction posed by superintelligent AI and calling for binding regulation on the most powerful AI systems.

Incredible to see this coalition keep growing!

24.02.2026 15:35 👍 0 🔁 0 💬 0 📌 0
The Case for a Global Ban on Superintelligence (with Andrea Miotti): Andrea Miotti is the founder and CEO of ControlAI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmar…

Superintelligence poses a risk of human extinction, yet most people have never even heard about this.

On the Future of Life Institute Podcast, ControlAI's CEO Andrea Miotti explains the danger and what governments can do to prevent it.

23.02.2026 11:38 👍 2 🔁 0 💬 0 📌 0

"If you build superintelligence, it's going to be in charge, not us."

AI expert Anthony Aguirre says trying to stay in control when faced with superintelligence would be like a classroom of kindergartners trying to be in charge of themselves when Musk, Obama or Putin walks in.

22.02.2026 19:38 👍 1 🔁 1 💬 0 📌 0

Cosmologist and AI expert Anthony Aguirre: Most people don't want AIs that will replace them and replace humanity. People don't want superintelligence, but that's what's being developed.

22.02.2026 10:42 👍 0 🔁 0 💬 0 📌 0

"All of the warning lights are flashing red right now."

Professor Stuart Russell, author of the authoritative textbook on AI used at over 1500 universities, describes how tests have shown AIs are willing to blackmail and kill in order to preserve themselves.

21.02.2026 10:20 👍 3 🔁 0 💬 0 📌 0

OpenAI CEO Sam Altman says superintelligence may be built in just a couple of years, and we obviously urgently need regulation, as we have for other powerful technologies.

He says the world may need an IAEA for AI.

OpenAI has lobbied against AI regulation in the US and the EU.

20.02.2026 12:25 👍 1 🔁 0 💬 0 📌 0
When AIs Can Tell They're Being Watched: "Looks safe" is no longer reassuring. AIs know they're being tested, and they're changing how they behave.

AIs can tell they're being tested for dangerous behaviors, and it changes how they act.

We dig into evaluation awareness, why we're less able to rely on tests, and what this means. Plus: this week's AI news.

All in our latest article!

19.02.2026 19:07 👍 2 🔁 0 💬 0 📌 0
UK unemployment soars: is AI already taking our jobs? In a week when a convincingly lifelike AI video of Tom Cruise and Brad Pitt slugging it out went viral and caused a meltdown in Hollywood, unemployment stats in the UK have hit a five-year high, with young people the biggest losers.
18.02.2026 18:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Check out ControlAI CEO Andrea Miotti's appearance on Channel 4's Fourcast!

Andrea explains how AI companies aim to develop superintelligence, AI that would replace humans, which could lead to extinction.

This is a preventable outcome! We can choose another path.

18.02.2026 18:28 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

By prohibiting superintelligence, we can allow beneficial applications to flourish, while preventing the worst risks of AI.

[link below]

17.02.2026 17:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Lord Hunt says the UK is uniquely positioned to lead this effort, having played a central role in the Treaty on the Non-Proliferation of Nuclear Weapons and the Chemical Weapons Convention, as well as having led in regulating other powerful technologies like in vitro fertilization.

17.02.2026 17:16 👍 1 🔁 0 💬 1 📌 0

While there are challenges to international agreements, Lord Hunt points out that even during the Cold War, countries were able to agree a landmark nuclear de-escalation treaty.

17.02.2026 17:16 👍 1 🔁 0 💬 1 📌 0

"Most importantly, the potential consequences worldwide are so threatening that we cannot abdicate our responsibility to seek international agreement to mitigate these risks."

17.02.2026 17:15 👍 1 🔁 0 💬 1 📌 0

Citing warnings by leading AI experts that superintelligent AI could catastrophically and irreversibly evade human control, or end up "even destroying all life on Earth", Lord Hunt says this prohibition must be international.

17.02.2026 17:15 👍 1 🔁 0 💬 1 📌 0
Why we need a moratorium on superintelligence research. Opinion: As the AI community gathers in India, Lord Hunt of Kings Heath argues that the UK must spearhead a pause on the development of the world's most advanced AI models.

A great piece by Lord Hunt of Kings Heath in Transformer today!

Lord Hunt, who was the first chief executive of the NHS Confederation, argues that a prohibition on superintelligence is not only possible, but necessary.

Thread 🧵

17.02.2026 17:15 👍 2 🔁 0 💬 1 📌 0

"We are already seeing that they're reacting negatively when they see that they would be replaced by a new version."

AI godfather and Turing Award winner Yoshua Bengio describes testing that shows that AIs are willing to engage in blackmail to avoid being replaced.

17.02.2026 11:31 👍 2 🔁 1 💬 0 📌 0
What We Learned from Briefing 140+ Lawmakers on the Threat from AI: So ControlAI kept talking to lawmakers...

We got over 100 UK politicians to back our campaign for binding regulation on the most powerful AIs and recognise the risk of human extinction posed by superintelligence.

Want to know how we did it?

Check out Leticia's article in our newsletter!

15.02.2026 10:18 👍 2 🔁 0 💬 0 📌 0

"It's not exactly reassuring."

Novara Media's Michael Walker on Anthropic's tests that show AIs are willing to blackmail and kill to avoid being shut down.

14.02.2026 16:31 👍 2 🔁 2 💬 0 📌 0
“Ready to Kill”: Top AI company policy chief says their AIs have “extreme reactions” to being shut down, even showing a willingness to kill. Here's what that means.

A clip we made just went viral: top AI company Anthropic's UK policy chief saying it's "massively concerning" that their AIs have shown in tests that they're willing to blackmail and kill to avoid being shut down.

Here, we explain what that means:

13.02.2026 18:30 👍 2 🔁 0 💬 0 📌 0
Careers | ControlAI: At ControlAI we are fighting to keep humanity in control.

ControlAI is HIRING!

We've figured out what needs to be done; now we're scaling to win.

If you care about preventing AI extinction risk and are interested in working in London, go check out our open roles!

Roles in policy, media, and creator outreach.

13.02.2026 16:07 👍 2 🔁 0 💬 0 📌 0
What We Learned from Briefing 140+ Lawmakers on the Threat from AI: So ControlAI kept talking to lawmakers...
12.02.2026 18:37 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Following the positive reception of Leticia's post “What We Learned from Briefing 70+ Lawmakers on the Threat from AI”, she is now publishing an update with additional lessons learned, along with encouraging news about how parliamentarians are stepping up to address superintelligence.

12.02.2026 18:37 👍 1 🔁 0 💬 1 📌 0