@mmitchell
Researcher trying to shape AI towards positive outcomes. ML & Ethics +birds. Generally trying to do the right thing. TIME 100 | TED speaker | Senate testimony provider | Navigating public life as a recluse. Former: Google, Microsoft; Current: Hugging Face
No discussion of tech media can get past this basic traffic fact: in the AI world, Google and social no longer refer traffic, which means that the vast majority of readers just never find you in the first place. Analysis: growtika.com/blog/tech-m...
U.S. Military Announces Plan To Consolidate All Wars Into Final, Epic Battle https://theonion.com/u-s-military-announces-plan-to-consolidate-all-wars-in-1824018300/
Demand for Grok within some fed agencies not that high, unless you're trying to act like a bad actor for defensive testing: www.wsj.com/politics/nat...
NEW! Mystery AI Hype Theater 3000: How the War Department Learned to Stop Worrying and Love AI
@naomiaklein.bsky.social joins @emilymbender.bsky.social & @alexhanna.bsky.social to discuss how AI boosters & the US military are engaged in a lethal love affair.
www.buzzsprout.com/2126417/epis...
I talk more about a positive vision for recommendation, and why I think it is a necessary tool for the Internet to truly live up to its potential as an engine for equity, most explicitly here: md.ekstrandom.net/talks/2025/v...
SP was about harms, including harms from data. It was a harms and risks paper about LLMs.
I get that people want to make it about something else, and use that to attack the authors, but it’s already published — you can directly see it…
I dunno how many times someone has told me, “the stochastic parrots paper was wrong,” and I’ve had to stutter, regain my composure, and put in real work to salvage the conversation. Like lol, wrong about what? That coherence is in the eye of the beholder? We see things as we are, not as they are.
AWESOME piece deconstructing some (crummy) ethical AI arguments.
tante.cc/2026/02/20/a...
This is really important/relevant from the perspective of how power is centralized in AI.
@ainowinstitute.bsky.social in particular has had some great scholarship on the role of Cloud infrastructure in AI:
ainowinstitute.org/publications...
Come work with me! We are recruiting for a Head of Comms and Marketing to help us communicate the work of @britishacademy.bsky.social and the power of the humanities and social sciences in making sense of our changing world
app.loxo.co/job/MzM2MzQt...
Annnnd now I can never use a coffee maker in a hotel room again.
The political effects of X's feed algorithm
https://doi.org/10.1038/s41586-026-10098-2
Received: 16 December 2024 | Accepted: 4 January 2026 | Published online: 18 February 2026 | Open access
Germain Gauthier, Roland Hodler, Philine Widmer & Ekaterina Zhuravskaya
Feed algorithms are widely suspected to influence political attitudes. However, previous evidence from switching off the algorithm on Meta platforms found no political effects. Here we present results from a 2023 field experiment on Elon Musk's platform X shedding light on this puzzle. We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour. Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine. In contrast, switching from the algorithmic to the chronological feed had no comparable effects. Neither switching the algorithm on nor switching it off significantly affected affective polarization or self-reported partisanship. To investigate the mechanism, we analysed users' feed content and behaviour. We found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects. These results suggest that initial exposure to X's algorithm has persistent effects on users' current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship.
A new paper shows that less than 2 months of exposure to Twitter’s algorithmic feed significantly shifts people’s political views to the right.
Moving from chronological feed to the algorithmic feed also increases engagement.
This is one of the most concerning papers I’ve read in a while.
The framing of this episode is extremely silly ("Is she a prophet... or is she just wrong?"): That is the oddest dichotomy I've encountered in a while. I'm a scholar, carefully observing what's going on now through the lens of my expertise, and sharing what I see.
>>
This piece is coming out at a time when AI dissatisfaction is rising, not as a partisan issue, but as a social one. LLMs specifically are pissing everyone off because of how they're being rolled out.
I'm saying this as someone who is in apolitical tech spaces like gaming and home networking.
"if human beings are nothing more than machines — stochastic parrots — then it's easy to justify replacing us with other machines that can do certain things better and faster."
I'm not religious, but do appreciate thoughtful insights. Nice article in this vein:
www.ncronline.org/opinion/lent...
BREAKING: The Department of Education has ended its directive that attempted to restrict diversity, equity, and inclusion efforts in schools nationwide.
This is a victory for academic freedom and education equity.
post by Rutger Bregman (@rutgerbregman.com) reads: Absolutely brilliant piece about the Left's TOTAL blindness on AI. Their dismissal of AI risks mirrors how climate deniers treat CO2. Will probably get a lot of nastiness for this on Bluesky, but I guess that's part of the same problem.
i shouldn’t give this piece any more attention than it has already garnered but i feel like it is worth pointing out some flaws in the argument/unquestioned assumptions
thread 1/
Effective Altruism -- not ethical.
Grateful to have been selected as a member of the @adalovelaceinst.bsky.social Oversight Board. Ada plays a unique and critical role in the tech world, championing people-centered approaches to AI development.
Couldn't be more excited to support and contribute to their important work.
Wasn’t me!!
A few people (Emily included) have pointed out that “independent scholar” might be a more appropriate term.
(Not for me, though, unfortunately, I am in tech and am part of tech dev overall, so am not rightly “independent”—I suppose I’m just a non-hegemonic scholar.)
lol, that may indeed be part of the point. =)
I’m a big fan of historical footnotes for those interested in exploring the human side of the larger-than-life tales.
s/dictate/analyze/
And you're right that the term stochastic parrots is not derogatory but you are 100% wrong when you claim that we meant it to be.
>>
Hey Benjamin, you're getting some stuff wrong here, starting with framing me and my colleagues as "AI skeptics". It's true that we call BS on claims of AI, AGI, LLMs understanding etc. But "AI skeptic" is a term that resides within the AI booster's frame of view, not ours.
>>
Yikes, there’s some offensive stuff in here about me and my other co-authors of the SP 🦜 paper. Worth revisiting your assumptions (declarations) about us on that one.
Co-author of the Stochastic Parrots 🦜 paper here. Confirming it is not derogatory. Seconding everything else @emilymbender.bsky.social articulates here.
(If you want to believe it’s derogatory, it’s worth reflecting on how that speaks to your own beliefs.)
Can’t believe we are still doing this
The "(no ext reference)" just gets me. 😆