Netflix is getting into the AI business with Ben Affleck.
"Anthropic has much more in common with the Department of War than we have differences."
The Massachusetts Department of Correction has been fighting me for months, trying to withhold data about its collaboration with ICE.
Yesterday, I won that fight, and obtained records showing that the state has transferred more than 2,000 people into ICE custody since 2009.
A quick thread:
A folk logic about it is starting to develop. People may not understand the monopolies and geopolitics of renewable energy at a granular level. But they can see data centers, hear data centers, and read their electric bill statements. They are angry.
Caesar's Palace says it will introduce a "trusted contact feature" that will alert your loved ones when you run out of money and need a stake just so you can get back to even and then quit, you promise
SCOOP: Anthropic was among the AI companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming, according to people familiar with the matter.
And everyone is running around praising them for being "ethical"
www.bloomberg.com/news/article...
IMO, the majority of pro-AI arguments made by academics are about the economics of working in a neoliberal university. The constant call is to do more with less--less time, less funding, less peace of mind. AI promises to "solve" these problems, but I'd argue it'll actually entrench them further.
anyway, here is one of the founders of Grammarly, telling people that on the one hand, Big Tech cannot be trusted with the safety of users, but that human beings must go all-in on AI nevertheless. I guess it makes sense to him in his head.
www.youtube.com/watch?v=AMea...
There's a lot of stuff that's gross about AI tools, but this is near the top of the fuckin list
At the end of @theguardian.com articles there's a banner explaining how they don't have an owner so they cannot possibly be bought by oligarchs. They forget to mention that a big 'partnership' with OpenAI motivates advertorial articles like this. They HAVE been bought.
It's not even just a single article. It's a whole newsletter series.
Crucially, this lovers' spat between Anthropic and the Pentagon must not trick you into labelling Dario Amodei a righteous purveyor of resistance tech. Do not draw your boundary of good and evil between Anthropic and OpenAI. Get real!
Text that reads, "what if, rather than talking about AI broadly, which includes a wide range of technologies that have existed long before ChatGPT’s 2022 launch and that have a variety of functionalities, purposes, and implications, we use more specific terms like “(text/image/code) generative AI,” “LLMs,” or “chatbots”? What if, rather than talking about “AI” writing, we identified LLM outputs as “synthetic text,” “synthetic media,” or simply “output”? What if we stopped saying that LLMs can “read” or “think”—which they can’t—and instead described what is occurring in these moments as “processing”? What if, rather than “hallucination,” we used “inaccuracy,” “error,” “misinformation,” or even “disinformation”? How might we, as rhetoricians and as computers and writing scholars, use our expertise to more critically study the discourses and rhetorics that are used to discuss these products, in ways that go beyond isolated experiences and single use cases, to analyze the broader social, political, and global contexts in which generative AI is embedded, including how it might function to “reinforce dominant ideologies and power structures”? And how might we then build systems and infrastructures that meaningfully take up what we find from such analyses?"
In this talk, I interrogated the use of the word "critical" in conversations about generative AI in education, and argued for care and precision in how we talk about these products. wacclearinghouse.org/docs/proceed...
Check out the full Proceedings here: wacclearinghouse.org/.../cw2025/p...
Boosters love to say that people need to "understand" LLMs. I don't think people should waste their time learning about specific architectures, or "testing" these garbage products. I *do* think we should understand why LLMs don't "reason", and how we trick ourselves into thinking that they do.
Too many “middle ground” AI arguments—“I have concerns, too, but we have to adapt”—proceed from what is to me a peculiar embrace of “inevitability” which seems to be magical thinking, a way of depoliticizing the political, of self-soothing in the face of an overwhelming challenge.
We are really going to regret the technology we have built.
Cover of the 2025 Proceedings of the Annual Computers and Writing Conference
What's "Critical" about "Critical AI"? A Recommitment to Humanistic Inquiry in the Ostensible March to Hyper-Automation
The #cwcon25 Proceedings are here, and I'm grateful for the inclusion of my keynote, "What's 'Critical' about 'Critical AI'? A Recommitment to Humanistic Inquiry in the Ostensible March to Hyper-Automation."
Thank you to the editors and peer reviewers for making this Proceedings possible!
I have nothing good to say about Anthropic, just like I have nothing good to say about Muskrat during his spat with the orange man, or any of the orange man's associates when he turns on them. Anthropic knowingly partnered with the Pentagon. Their other partner, Palantir,...
is probably the most evil tech company in the world, which, among other things, powers the ICE abductions of members of our communities. I can go on and on. I have zero good things to say about them. Just as I hate OpenAI more than Google, I hate Anthropic more than OpenAI.
Head of CBS News (and, soon, CNN) taunting Zohran and cheerleading Israel/Trump’s war on Iran almost nobody wants
In case there’s still any question about the agenda behind these networks now
Sam Altman says OpenAI has signed a deal with the DoD. Worded to sound different but on my first reading, “reflects them in law and policy” isn’t different to saying “any lawful use”
Great news, they’ve figured out how to make AI be even more cartoonishly racist and discriminatory:
"Rejecting or resisting a commercial technology designed to attempt a mass wealth transfer and to erode public institutions is a valid political position." - @bcmerchant.bsky.social
I would like to thank Companion AI for helping promote my latest in @mcsweeneys.net. I had worried my satire was too dark. This Einstein product reassures me that it was optimistic sunshine.
www.mcsweeneys.net/articles/the...
There are so many kinds of AI-washing happening right now:
* Companies claiming they're using AI to try and boost their valuations
* CEOs saying AI is why they're firing people
* Traders who have long known the market is over-valued using AI as an excuse to finally sell
The lesson from the "Tom Cruise/Brad Pitt AI Video" reveal is twofold, old, and evergreen:
1) Slow Down
and 2) always start from the ground position that any claims made by an "AI" corporation— any for-profit corporation, really— are them lying to you to hype their product.
OpenAI is a menace. Two recent stories make that clearer than ever.
One day you have Sam Altman denigrating humanity to defend AI. The next, WSJ reveals OpenAI could have alerted Canadian police to a potential mass shooter, but rebuffed pressure from employees. That person went on to kill 8 people.
Friends, I am hiring two scholar-advisors this year! Come join the best advising team in the country (for real). Please help me spread the news about this great opportunity! This is a permanent, non-time limited instructional staff position, and I will be happy to answer questions!
Maybe we shouldn't put the AI in email
To: Jeffrey Epstein [jeevacation@gmail.com]
From: roger schank
Sent: Mon 1/4/2010 12:15:13 PM
Subject: there is a simpler explanation about women and intelligence
intelligence comes about in part from real focus (goal-directed) (this is why you have the absent minded professor caricature) it is a rare woman who is not first and foremost focussed on what thinking and feeling about her hard to be brilliant if you are worrying if you look fat or why hates you or why you dont own a kelly bag
roger schank
http://www.rogerschank.com/
Relevant to today's conversation about AI's inherent sexism, here's an email from cognitive psychologist and early AI theorist Roger Schank, arguing to Epstein that women can't be truly intelligent, because they care too much about what other people think.