Palantir CEO Karp thinks his AI tech will lessen the power of “highly educated, often female voters, who vote mostly Democrat” while increasing the power of working-class men. Meanwhile @vonderleyen.ec.europa.eu @eppgroup.bsky.social diligently working to dismantle AI regulation. 🤦♀️
13.03.2026 10:12
👍 27
🔁 14
💬 0
📌 0
Large-scale online deanonymization with LLMs
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at hig...
Delete your socks, people.
"We show that large language models can be used to perform at-scale deanonymization...at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator."
04.03.2026 07:41
👍 255
🔁 107
💬 13
📌 16
Pete Hegseth C*ckblocks Anthropic
And OpenAI collects its winnings.
“Let’s not bury the lede here: The DoD wants to use AI for fully autonomous lethal systems and mass surveillance of American citizens. Anthropic said “no,” and now the US government is retaliating by trying to destroy it.” @lizdye.bsky.social @atrupar.com
open.substack.com/pub/aaronrup...
04.03.2026 18:06
👍 507
🔁 179
💬 8
📌 7
email to me with a title: 2027 MSc in Artificial Intelligence Application – Research Interest in Trustworthy Generative AI & Multi-Agent Safety
email body: I have been deeply inspired by your pioneering work on AI accountability, algorithmic harm governance, and ethical alignment of generative multi-modal systems. As Geoffrey Hinton has repeatedly warned the global community about the existential and structural risks of unregulated AI systems, I have long been searching for actionable, ethical frameworks to translate these high-level warnings into practical, safe AI design — and your research has been the definitive guide for me. In particular, your 2023 paper in Nature Machine Intelligence on the structural risks of large-scale generative models, as well as your AI Accountability Framework developed at the Mozilla Foundation, have fundamentally shaped my core belief: capable AI systems must be built on the premise of safety, transparency, and consistent alignment with human values, rather than pursuing functionality alone.
i never published in Nature Machine Intelligence, nor do i have any work on an "AI Accountability Framework"
i know this is now normal but i want you all to stop & reflect on how much the future is fucked & the only way to mitigate this disaster is to ban/limit this damned technology
04.03.2026 12:11
👍 130
🔁 39
💬 9
📌 3
I've repeatedly said this in the past: the end goal of all AI is surveillance, specifically to influence and control
02.03.2026 19:33
👍 193
🔁 77
💬 3
📌 6
If this goes on long enough we will eventually evolve not to trust statistical autocomplete software with medical decisions.
01.03.2026 18:42
👍 163
🔁 61
💬 12
📌 1
There is absolutely no reason for Sam Altman to say anything but he is currently digging the biggest hole possible, responding to all and sundry on Twitter. This is the worst possible statement he could have given!
01.03.2026 01:42
👍 919
🔁 167
💬 54
📌 31
Sam Altman
@sama• 5m
Three general things from this AMA:
1. There is more open debate than I thought ther ewould be, at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on, but...! don't. This seems like an important area for more discussion.
2. I think the is a question behind a lot of the questions but I haven't seen quite articulated:
What happens if the government tries to nationalize OpenAl or other Al efforts? I obviously don't know; I have thought about it of course (it has seemed to me for a long time it might be be better if building AGI were a government project) but it doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important.
3. People take their safety (in the national security sense) more for granted than I realized, which I think is a good thing on balance but I don't think shows enough respect to the tremendous work it takes for that to happen.
Also, I am on the whole very grateful for the level of reasonable and good-faith engagement here.
It was not what I expected.
Sam Altman just - apropos of nothing - brought up the government nationalizing OpenAI. Multiple typos. Clammy Sammy’s having a big night of posting!
01.03.2026 01:47
👍 621
🔁 62
💬 27
📌 20
OpenAI posted the terms of the deal. Reveals that it absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons.
openai.com/index/our-ag...
01.03.2026 05:20
👍 2640
🔁 1159
💬 30
📌 72
The world could be such a nice place if we allowed it. It's all so goddamn unnecessary. There's no need for any of it. It's so beautiful here. It should be so cool to be alive
28.02.2026 12:42
👍 16437
🔁 5095
💬 128
📌 103
Fun fact: The "AI-boosted growth rate" is basically just the continuation of the usual growth rate. So even if AI doesn't do much and we just get the usual growth, they'll still claim it's the "AI".
26.02.2026 13:44
👍 98
🔁 19
💬 3
📌 0
AIs can’t stop recommending nuclear strikes in war game simulations
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
www.newscientist.com/article/2516...
25.02.2026 12:27
👍 3053
🔁 1297
💬 395
📌 1481
As @abeba.bsky.social was just saying yesterday: they're not actually good at summarizing
24.02.2026 20:09
👍 66
🔁 17
💬 2
📌 0
We got them DENSE BRAINS baby
24.02.2026 11:18
👍 506
🔁 77
💬 13
📌 7
I could write for days, months, years.
I could read decades of sci-fi from the most annoying Golden Age authors.
I could peruse the most cringe things on TikTok.
And still, I could never come up with the deeply embarrassing things Sam Altman says on any given Tuesday
21.02.2026 21:29
👍 250
🔁 54
💬 16
📌 3
Whale 16th–17th century, Chumash. California. (Met Museum)
21.02.2026 10:25
👍 295
🔁 78
💬 1
📌 7
I'm pretty sure I know. They aren't. We're not going to really be able to make a lot of progress in dealing with the implications of this tech unless and until we get rid of all this "woo-woo" talk about LLMs. Anthropic pushing this line is PR, unserious.
13.02.2026 14:15
👍 1872
🔁 334
💬 81
📌 108
That's right! Every single "AI tried to deceive us" story is entirely a result of directly training the models to respond in a certain way to create this specific reaction. This era stinks
www.wheresyoured.at/ai-mythbuste...
13.02.2026 15:15
👍 638
🔁 135
💬 15
📌 2
Happy birthday to one of my favourite haters, Charles Darwin
12.02.2026 16:31
👍 10352
🔁 3081
💬 162
📌 419
This is a really fucking stupid and irresponsible thing for a widely-read magazine to publish. Claude is a large language model with an unusually large context window. It is not self-aware, nor is it a new entity. It's a set of rules which run really fast and produce a predictable outcome. Software.
11.02.2026 07:26
👍 69
🔁 18
💬 5
📌 1
A great blue heron is standing in a marsh reading a book. The book says "They lived at opposite ends of the pond. From afar, she admired his long plumes and excellent fishing technique." The heron gets more flustered as she reads: "One day he came flying toward her. Watching him approach, she wondered: What did he want? And was he... carrying something in his beak?" The heron gets even more flustered as she reads: "Her heart skipped a beat as he landed beside her. Fluffing his neck feathers, he said those three special words:" The heron falls backwards in a swoon as she reads, "Here's a stick."
Since Valentine's Day is approaching, I'm reposting some of my past Valentine's comics.
08.02.2026 14:38
👍 1243
🔁 351
💬 14
📌 10
a round pottery piece with pig eyes and a snout
i think we all need to step back and realize that peak art was made when neolithic pot in the shape of a pig was fired
06.02.2026 19:46
👍 10765
🔁 2688
💬 9
📌 152
Photo by G. Solecki/A. Piętak of a small figurine of a bear carved out of amber between 9600 and 4100 BC. The amber is a deep translucent orange. The display lighting makes it glow in places. The bear's head is carved to show ears, mouth, nostrils and eyes. A hole runs through the bear's torso, suggesting it was threaded onto a cord. Dimensions: Length 10.2 cm, Height 4.2 cm.
It was discovered in Słupsk during peat mining in 1887.
According to the museum catalogue: "Shortly after its discovery, the figure underwent conservation work to restore its original appearance as it was covered with a layer of dull patina from the exposure to the minerals contained in the peat. Already at that time, at the end of the 19th century, it was assumed the restoration had gone too far. The figure was stripped entirely of patina, the anatomical features of the animal were emphasised, the eyes and nostrils were sharply drawn, and the amber was carefully polished."
In 2013, a competition was organised by the Education Department of the National Museum in Szczecin for children to choose a name for the bear. The winning name was 'Słupcio'.
A little bear figurine carved out of amber some 6,000 years ago 🐻❤️
A hole runs through the bear’s torso suggesting it was threaded on a cord, perhaps worn or carried as a protective charm.
Found in a peat bog near Słupsk, Poland, in 1887.
📷 National Museum in Szczecin
#FindsFriday
#Archaeology
06.02.2026 08:04
👍 1465
🔁 329
💬 27
📌 65
We learned so much from @mmelkersen.bsky.social when she was on the pod!
06.02.2026 14:52
👍 46
🔁 13
💬 0
📌 0
For the FES, I wrote a short brief about how mainstream party strategies have fueled far-right success. They move toward more anti-immigration positions to win voters back. This does not work, but shifts public opinion to the right. Parties then react to shifts in public opinion. A vicious cycle.
06.02.2026 08:04
👍 342
🔁 148
💬 3
📌 16
Ever since the Stochastic Parrots paper was published, I've been fielding the question "How do I know that you're not (just) a stochastic parrot?" ... one that I find inherently dehumanizing, as I write about here:
journals-sagepub-com.offcampus.lib.washington.edu/doi/10.1177/...
Short 🧵>>
03.02.2026 00:56
👍 229
🔁 43
💬 12
📌 3
31.01.2026 18:46
👍 916
🔁 288
💬 5
📌 11
This is infuriating. We've known that these datasets are rife with CSAM since at least 2023, and those are the ones that are public.
stacks.stanford.edu/file/druid:k...
30.01.2026 17:31
👍 198
🔁 84
💬 4
📌 3