
Louis Barclay

@louisbarclay

Senior fellow @ Mozilla, founder at https://attention.to and https://pik.top. About: https://louis.work

523
Followers
154
Following
70
Posts
23.01.2024
Joined

Latest posts by Louis Barclay @louisbarclay

Center for the Alignment of AI Alignment Centers: Our pivot to reportless reporting


Condensing the message

PREVIOUSLY:

CAAAC report: “Our expectation is that in 5-7 years, AGI capabilities, backed by 1000x compute vs. present day levels, will be advanced enough to replace most human workers and increasingly be operationalized in civil-control contexts.”

UNDER REPORTLESS REPORTING:

Open the window and scream:
FUUUUUUUUUUUUUUCK.


🚀 Here at the Center for the Alignment of AI Alignment Centers we’re pleased to announce our pivot to reportless reporting

🧠 This is a continuation of our meta-work as an innovator of cutting-edge best practices for all impact-led organizations

Read more:
directing.attention.to/p/the-center...

12.02.2026 14:51 👍 3 🔁 0 💬 0 📌 0

You're not being paranoid for thinking everyone is writing with AI — you're just being realistic.

19.12.2025 11:48 👍 1 🔁 0 💬 0 📌 0
How to Tell If AI is Writing This
Read our humor column on AI writing, in collaboration with The Onion. There’s no definitive test of AI writing, but there are things you can do to investigate, according to our columnist Louis Barclay...

Did I write these words with AI? How would you be able to tell?

I delve into this problem in my new @mozilla.org column (in collab with @theonion.com):
www.mozillafoundation.org/en/nothing-p...

19.12.2025 11:48 👍 1 🔁 0 💬 1 📌 0

Help me out here, but whatever you do, DON'T TELL MARCOS 🤫

The Secret Santa reveal is on Sunday, let's see if we can get data for every single train in the world by then

Should be doable

17.12.2025 12:34 👍 0 🔁 0 💬 0 📌 0
GitHub - louisbarclay/marcos
Contribute to louisbarclay/marcos development by creating an account on GitHub.

I need your help to crowdsource the data, so we can make MARCOS amazing, and give Marcos a nice Secret Santa gift:

github.com/louisbarclay...

17.12.2025 12:34 👍 0 🔁 0 💬 1 📌 0

However, MARCOS has a dirty little secret: it's empty. There's no data yet

17.12.2025 12:34 👍 0 🔁 0 💬 1 📌 0

MARCOS will change the course of humanity, literally - by causing people to go to slightly different parts of trains

17.12.2025 12:34 👍 0 🔁 0 💬 1 📌 0

The database, and a simple tool built on top of it, will save millions of people billions of hours

17.12.2025 12:34 👍 0 🔁 0 💬 1 📌 0

MARCOS is a crowdsourced database of the best doors to exit trains from, to leave stations quickly

17.12.2025 12:34 👍 0 🔁 0 💬 1 📌 0

The guy is called Marcos. He loves trains

The gift is called MARCOS. It's for trains:

Metro
And
Rail
Carriage
Optimization
System

17.12.2025 12:34 👍 0 🔁 0 💬 1 📌 0
Help me save the world, but don't tell Marcos: a train data Secret Santa
Let me explain by breaking down the headline.

🎁 Today, I'm hoping to change the world forever with my Secret Santa gift for a guy I vaguely know from a Discord server:

directing.attention.to/p/help-me-sa...

17.12.2025 12:22 👍 0 🔁 3 💬 1 📌 0

Made a site comparing the sizes of living things :)

The great Julius Csotonyi spent 5 months painting over 60 illustrations for the site, no AI used

> neal.fun/size-of-life/

10.12.2025 16:03 👍 2641 🔁 913 💬 78 📌 87

If this is the first article you ever read that ends with 'I Am The Prophet of Doom', you need to get out more

29.11.2025 20:30 👍 1 🔁 0 💬 0 📌 0
“I’ll never sleep again”
Our intern Clem Park writes about her rewarding summer at CAAAC, spent writing scenarios where an AGI enslaves and tortures humanity forever

From our summer intern at the Center for the Alignment of AI Alignment Centers:

"S-risk is the risk that AGI doesn’t kill us all, but instead enslaves and tortures us for eternity (the ‘S’ stands for suffering). It was awesome to learn about it."

directing.attention.to/p/ill-never-...

28.11.2025 14:13 👍 17 🔁 8 💬 2 📌 4
Superficial Intelligence: Tech Parenting
Read our humor column on tech parenting, in collaboration with The Onion. Here are our columnist's best tips on how to raise a child to be a tech billionaire.

⚠️ Are you concerned about your child's future impact on the world?

✅ Steer them towards becoming a tech billionaire, starting NOW.

My latest advice column for @mozilla.org's Nothing Personal, in collab with @theonion.com:

www.mozillafoundation.org/en/nothing-p...

27.11.2025 12:46 👍 3 🔁 0 💬 0 📌 0
DOCTYPE magazine cover with “10 amazing web pages” over a scene of a spaceship flying over pyramids


Illustrated characters doing the conga at the bottom of a page


Features the work of
@pauljholden.com
@happytoast.co.uk
@nanochess.bsky.social
@edent.tel
@sil.kryogenix.org
@chromakode.com
@louisbarclay.bsky.social
@jimmeruk.bsky.social
@monkeon.co.uk
@gingerbeardman.com
@bagpuss.org

11.11.2025 12:04 👍 13 🔁 1 💬 1 📌 0
Superficial Intelligence - A Humor Column from Mozilla Foundation
Read the advice column from Nothing Personal and The Onion, where we find 100% tech-driven solutions to all of humanity's problems.

My new Mozilla column has a simple, urgent call-to-action tucked inside:

👉 Get your grandma on Signal

CC @signal.org @meredithmeredith.bsky.social

www.mozillafoundation.org/en/nothing-p...

15.10.2025 19:45 👍 24 🔁 4 💬 4 📌 0

Please disseminate widely to raise awareness and safeguard humanity's future

Be aware that failing to repost this critical research will cause the equivalent of 300 sinkings of the Titanic, except with Rose also dying this time round

09.10.2025 09:05 👍 1 🔁 0 💬 0 📌 0
AI researcher burnout: the greatest existential threat to humanity?
Drawing on multiple interviews in the community of AI researchers and bolstered by mathematical modelling, we find that the future of humanity hinges on whether we can take sufficient urgent action to...

Read the conclusions of our report here:

alignmentalignment.ai/caaac/blog/a...

09.10.2025 09:05 👍 1 🔁 0 💬 1 📌 0

Given how many humans we expect to live until the end of the universe, this means:

1,000,000,000,000,000,000,000,000 potential future humans are murdered every time an AI alignment researcher has a bad day

09.10.2025 09:05 👍 2 🔁 0 💬 1 📌 0

Our calculation is simple:

A burnout rate of 0.001% of AI alignment researchers per year...

Leads to a 0.002% increase in the likelihood of AGI being misaligned...

Thereby increasing the existential risk of AGI to humanity by 0.003%

09.10.2025 09:05 👍 2 🔁 0 💬 1 📌 0

Is AI researcher burnout the greatest existential threat to humanity?

Our sobering new report suggests the answer is: yes

09.10.2025 09:05 👍 1 🔁 0 💬 1 📌 1

Such a great point

And by the time you've done The Thing, you're so worn out that it's hard to get any kind of energy up to write The Thing's bastard children

06.10.2025 16:13 👍 3 🔁 0 💬 0 📌 0
To make AI safe, we must develop it as fast as possible without safeguards
Ia Magenius explains why we need to make AI as powerful as possible to ensure it can't have power over us

To stop bad actors developing AGI that could kill us all, we need good actors to develop AGI that could also kill us all

Read more:
alignmentalignment.ai/caaac/blog/a...

29.09.2025 14:43 👍 14 🔁 6 💬 0 📌 2
To make AI safe, we must develop it as fast as possible without safeguards
Ia Magenius explains why we need to make AI as powerful as possible to ensure it can't have power over us

When will the world wake up and realize that you can't make AI safe without first making it as unsafe as possible, to see just how unsafe that is?

Please read and share this urgent call to action

alignmentalignment.ai/caaac/blog/a...

23.09.2025 15:28 👍 6 🔁 0 💬 2 📌 0

If you care at all about the future of our species, the future of your family, the future of that one colleague you’re hoping gets replaced by an AI agent ASAP — join us.

11.09.2025 13:28 👍 56 🔁 5 💬 3 📌 1

Fiercely independent, we are backed by philanthropic funding from the world's biggest AI companies who also form a majority on our board.

11.09.2025 13:28 👍 97 🔁 12 💬 2 📌 0

In a world clouded by AI uncertainty, we’ve arrived to lead the way forward for humanity.

Our goal is simple: subsume all other AI centers into one glorious AI center singularity.

11.09.2025 13:28 👍 59 🔁 7 💬 1 📌 0
Center for the Alignment of AI Alignment Centers
We align the aligners

Q. Who aligns the aligners?
A. alignmentalignment.ai

Today I’m humbled to announce an epoch-defining event: the launch of the 𝗖𝗲𝗻𝘁𝗲𝗿 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗼𝗳 𝗔𝗜 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗖𝗲𝗻𝘁𝗲𝗿𝘀.

11.09.2025 13:17 👍 405 🔁 124 💬 29 📌 44

Sign up to pik.top to follow along, or my blog directingattention.substack.com

25.07.2025 16:02 👍 2 🔁 0 💬 0 📌 0