
Megan McIntyre

@rcmeg

Director, Program in Rhet/Comp, U of Arkansas | English prof | Writing about #WPALife and writing pedagogy | Loves dogs | Before: Sonoma State English & Dartmouth Institute for Writing & Rhetoric | (views only ever mine, obv) | she/her

1,817 Followers · 529 Following · 186 Posts · Joined 06.08.2023

Latest posts by Megan McIntyre @rcmeg

Ben Affleck Quietly Founded a Filmmaker-Focused AI Tech Company. Netflix Just Bought It. Netflix is getting into the AI business with Ben Affleck.

05.03.2026 21:00 👍 13 🔁 12 💬 20 📌 50

"Anthropic has much more in common with the Department of War than we have differences."

06.03.2026 04:55 👍 77 🔁 36 💬 3 📌 8

The Massachusetts Department of Correction has been fighting me for months, trying to withhold data about its collaboration with ICE.

Yesterday, I won that fight, and obtained records showing that the state has transferred more than 2,000 people into ICE custody since 2009.

A quick thread:

04.03.2026 16:01 👍 1865 🔁 750 💬 17 📌 39

A folk logic about it is starting to develop. People may not understand the monopolies and geopolitics of renewable energy at a granular level. But they can see data centers, hear data centers, and read their electric bill statements. They are angry.

03.03.2026 22:26 👍 1355 🔁 253 💬 17 📌 22

Caesar's Palace says it will introduce a "trusted contact feature" that will alert your loved ones when you run out of money and need a stake just so you can get back to even and then quit, you promise

03.03.2026 23:16 👍 125 🔁 32 💬 3 📌 1

SCOOP: Anthropic was among the AI companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming, acc to people familiar w/ matter.

And everyone is running around praising them for being "ethical"

www.bloomberg.com/news/article...

03.03.2026 23:28 👍 238 🔁 73 💬 4 📌 8

IMO, the majority of pro-AI arguments made by academics are about the economics of working in a neoliberal university. The constant call is to do more with less--less time, less funding, less peace of mind. AI promises to "solve" these problems, but I'd argue it'll actually entrench them further.

03.03.2026 15:36 👍 84 🔁 28 💬 6 📌 3

anyway, here is one of the founders of Grammarly, telling people that on the one hand, Big Tech cannot be trusted with the safety of users, but that human beings must go all-in on AI nevertheless. I guess it makes sense to him in his head.

www.youtube.com/watch?v=AMea...

03.03.2026 14:39 👍 19 🔁 4 💬 2 📌 1

There's a lot of stuff that's gross about AI tools, but this is near the top of the fuckin list

03.03.2026 13:32 👍 42 🔁 13 💬 3 📌 0

At the end of @theguardian.com articles there's a banner explaining how they don't have an owner so they cannot possibly be bought by oligarchs. They forget to mention that a big 'partnership' with OpenAI motivates advertorial articles like this. They HAVE been bought.

02.03.2026 22:15 👍 117 🔁 72 💬 1 📌 0

It's not even just a single article. It's a whole newsletter series.

02.03.2026 21:37 👍 96 🔁 22 💬 7 📌 0

Crucially, this lovers' spat between Anthropic and the Pentagon must not trick you into labelling Dario Amodei a righteous purveyor of resistance tech. Do not draw your boundary of good and evil between Anthropic and OpenAI. Get real!

03.03.2026 00:46 👍 48 🔁 7 💬 0 📌 0
Text that reads, "what if, rather than talking about AI broadly, which includes a wide range of technologies that have existed long before ChatGPT’s 2022 launch and that have a variety of functionalities, purposes, and implications, we use more specific terms like “(text/image/code) generative AI,” “LLMs,” or “chatbots”? What if rather than talking about “AI” writing, we identified LLM outputs as “synthetic text,” “synthetic media,” or simply “output”? What if we stopped saying that LLMs can “read” or “think”—which they can’t—and instead described what is occurring in these moments as “processing”? What if, rather than “hallucination,” we used “inaccuracy,” “error,” “misinformation,” or even “disinformation”? How might we, as rhetoricians and as computers and writing scholars, use our expertise to more critically study the discourses and rhetorics that are used to discuss these products, in ways that go beyond isolated experiences and single use cases, to analyze the broader social, political, and global contexts in which generative AI is embedded, including how it might function to “reinforce dominant ideologies and power structures”? And how might we then build systems and infrastructures that meaningfully take up what we find from such analyses?"

In this talk, I interrogated the use of the word "critical" in conversations about generative AI in education and argued for care and precision in how we talk about these products. wacclearinghouse.org/docs/proceed...

Check out the full Proceedings here: wacclearinghouse.org/.../cw2025/p...

01.03.2026 14:49 👍 23 🔁 11 💬 0 📌 0

Boosters love to say that people need to "understand" LLMs. I don't think people should waste their time learning about specific architectures, or "testing" these garbage products. I *do* think we should understand why LLMs don't "reason", and how we trick ourselves into thinking that they do.

01.03.2026 19:26 👍 30 🔁 8 💬 3 📌 1

Too many “middle ground” AI arguments—“I have concerns, too, but we have to adapt”—proceed from what is to me a peculiar embrace of “inevitability” which seems to be magical thinking, a way of depoliticizing the political, of self-soothing in the face of an overwhelming challenge.

01.03.2026 16:41 👍 416 🔁 102 💬 16 📌 15

We are really going to regret the technology we have built.

01.03.2026 14:26 👍 1499 🔁 204 💬 32 📌 22
Cover of the 2025 Proceedings of the Annual Computers and Writing Conference

What's "Critical" about "Critical AI"? A Recommitment to Humanistic Inquiry in the Ostensible March to Hyper-Automation

What's "Critical" about "Critical AI"? A Recommitment to Humanistic Inquiry in the Ostensible March to Hyper-Automation

The #cwcon25 Proceedings are here, and I'm grateful for the inclusion of my keynote, "What's 'Critical' about 'Critical AI'? A Recommitment to Humanistic Inquiry in the Ostensible March to Hyper-Automation."

Thank you to the editors and peer reviewers for making this Proceedings possible!

01.03.2026 14:49 👍 16 🔁 7 💬 1 📌 0

is probably the most evil tech company in the world, which, among other things, powers the ICE abductions of members of our communities. I can go on and on. I have zero good things to say about them. If I hate OpenAI more than Google, I hate Anthropic more than OpenAI.

28.02.2026 02:59 👍 200 🔁 32 💬 4 📌 3

I have nothing good to say about Anthropic just like I have nothing good to say about Muskrat during his spat with the orange man, or any of the orange man's associates when he turns on them. Anthropic knowingly partnered with the Pentagon. Their other partner, Palantir,...

28.02.2026 02:59 👍 393 🔁 95 💬 6 📌 6

Head of CBS News (and, soon, CNN) taunting Zohran and cheerleading Israel/Trump’s war on Iran almost nobody wants

In case there’s still any question about the agenda behind these networks now

28.02.2026 20:55 👍 1489 🔁 491 💬 118 📌 27

Sam Altman says OpenAI has signed a deal with the DoD. Worded to sound different but on my first reading, “reflects them in law and policy” isn’t different to saying “any lawful use”

28.02.2026 03:08 👍 193 🔁 34 💬 31 📌 40

Great news, they’ve figured out how to make AI be even more cartoonishly racist and discriminatory:

27.02.2026 19:50 👍 53 🔁 18 💬 2 📌 0
Actually, the left is winning the AI debate. But it does need to get organized.

"Rejecting or resisting a commercial technology designed to attempt a mass wealth transfer and to erode public institutions is a valid political position." - @bcmerchant.bsky.social

26.02.2026 01:44 👍 194 🔁 69 💬 0 📌 1

I would like to thank Companion AI for helping promote my latest in @mcsweeneys.net. I had worried my satire was too dark. This Einstein product reassures me that it was optimistic sunshine.

www.mcsweeneys.net/articles/the...

23.02.2026 23:52 👍 55 🔁 22 💬 1 📌 1

There are so many kinds of AI-washing happening right now:

* Companies claiming they're using AI to try and boost their valuations
* CEOs saying AI is why they're firing people
* Traders who have long known the market is over-valued using AI as an excuse to finally sell

23.02.2026 20:57 👍 53 🔁 20 💬 2 📌 0
Great job, Internet!: Hollywood uncooked as viral Cruise vs. Pitt video looks like another AI con-job

The lesson from the "Tom Cruise/Brad Pitt AI Video" reveal is twofold, old, and evergreen:

1) Slow Down
and 2) always start from the ground position that any claim made by an "AI" corporation (any for-profit corporation, really) is them lying to you to hype their product.

20.02.2026 13:59 👍 535 🔁 158 💬 4 📌 8
Sam Altman’s anti-human worldview: OpenAI CEO downgrades humanity in pursuit of goal to merge with computers

OpenAI is a menace. Two recent stories make that clearer than ever.

One day you have Sam Altman denigrating humanity to defend AI. The next, the WSJ reveals OpenAI could have alerted Canadian police to a potential mass shooter but refused, despite pressure from employees. That person went on to kill 8 people.

23.02.2026 21:44 👍 1656 🔁 622 💬 18 📌 74
Advising Fellow, College of Arts & Sciences in Charlottesville, Virginia, United States of America | Student Services, Health, & Wellness at University of Virginia

Friends, I am hiring two scholar-advisors this year! Come join the best advising team in the country (for real). Please help me spread the news about this great opportunity! This is a permanent, non-time limited instructional staff position, and I will be happy to answer questions!

23.02.2026 21:59 👍 38 🔁 39 💬 1 📌 8

Maybe we shouldn't put the AI in email

23.02.2026 20:45 👍 7 🔁 3 💬 0 📌 0
To: Jeffrey epstein [jeevacation@gmail.com]
From: roger schank
Sent: Mon 1/4/2010 12:15:13 PM
Subject: there is a simpler explanation about women and intelligence
intelligence comes about in part from real focus (goal-directed
(this is why you have the absent minded professor caricature)
it is a rare woman who is not first and foremost focussed on what
thinking and feeling about her
hard to be brilliant if you are worrying if you look fat or why
hates you or why you dont own a kelly bag
roger schank
http://www.rogerschank.com/


Relevant to today's conversation about AI's inherent sexism, here's an email from cognitive psychologist and early AI theorist Roger Schank, arguing to Epstein that women can't be truly intelligent, because they care too much about what other people think.

23.02.2026 16:34 👍 3369 🔁 1204 💬 184 📌 304