11am/8pm (Boston time) tomorrow!
Poster for "More Cultural Politics of the Computational Image," Friday Jan 23, 2026.

SESSION 1. 11:00 AM – 12:30 PM EST
Amitabh Vikram (Shri Mata Vaishno Devi University), “Holy Pixels: Temple CCTV, Darshan Livestreams, and Crowd Vision”
Ann Wang (University of Pennsylvania), “Amphibious Tropics: Automation, Extraction, and the Imaging (Making) of Malaysia’s Geospatial Landscape”
Hansun Hsiung (Durham University), “NHK Engineers Meet Media Theory: The Televisual Image in Japan’s ‘Informationalizing Society’”

SESSION 2. 8:00 – 9:30 PM EST
Ding-Liang Chen (National Tsing Hua University), “Prospecting Resource: Early Satellite Vision and War Ecologies in Southeast Asia”
Joia Duskic (UCLA), “The Procession Does Not Proceed, It Gets Prompted from the Sidelines”
Yedong Sh-Chen (Harvard University), “Under the Chinese Skin: A Digital Dermatology of Video Games”

Register at https://globalmediations.mit.edu/cpci-workshop/
Join us for a follow-up conference to Cultural Politics of the Computational Image, this time online. Friday, January 23, day/night. Register for the zooms at: globalmediations.mit.edu/cpci-workshop/
New article in AI & Society with @richardjeanso.bsky.social and @hoytlong.bsky.social 🎉
We wanted to know how AI might affect cultural fields like literary publishing. But cultural production is complex! So we piloted a new method we call “social simulation.”
rdcu.be/eTkMy
maybe blogs or listservs that I don’t know about are doing this work, but that whole ecosystem seems very diffuse
I do think there may be a need for some kind of preprint server for humanities-inflected position papers—for instance, the DeepSeek-OCR release last month could have produced a lot of interesting 5-6 page reflections on word and image that wouldn’t really fit the journal format
there’s an indexical aspect to stock photos like these that’s really lacking in AI images
did the great tragedians ever consider simply instructing their readers to feel sad?
something has been wrong with Chronicling’s (or LOC’s?) rate limiter for a while, because I had similar issues earlier this year
2 weeks left on this (no idea why I put the deadline on a weekend)
that is, the cultural and economic forces that have shaped the development of AI are in many ways the same forces that have shaped the development of our notion of “the human” as such—making it a poor locus of resistance
I think in general it’s somewhat irresponsible to stake your critique of AI on some stable, essential idea of what it means to “be human” or “act human”—not only are those categories extremely flexible, they inherit heavily from the same tradition of rationalism that got us here in the first place!
(this objection is directed at Chollet, of course, not you)
What good could ARC-AGI possibly be if systems can do well on it without exhibiting generalizable intelligence??
If your response to someone achieving good performance on your benchmark is to complain that they didn’t do it the way you wanted them to, you have a bad benchmark
nobody wants to drown hungry!
If I’m reading this right, Sam is basically saying outright that the GPT-5 personality changes are in response to the “ChatGPT psychosis” panic??
If so, I’m surprised that OpenAI is so rattled.
Of course LW’s conception of “forms of life” is anthropomorphic, and it’s hardly fair to expect him to have foreseen the present state of NLP, but the fact that LLMs exhibit linguistic competence *would* seem to challenge the idea that shared word-conventions must reflect agreement in forms of life.
It’s just not obvious to me that the operations of an LLM in silico do not constitute a form of life, if a very alien one. I may be misreading PI but it seems to me that any experience which imparts linguistic competence qualifies as a form of life, and that LLM training is such an experience.
PI also warns against “the temptation to invent a myth of meaning”—which so many (shallow) critiques of LLMs do by equating meaning to the favored signifiers of humanism (art, beauty, the soul, etc.)
Yes, absolutely. The work then is to specify what those ways of meaning-making are and why they are important.
in other words, I guess, is touching silicon so different from touching grass?
Wittgenstein says that “we are talking about the spatial and temporal phenomenon of language, not some non-spatial, non-temporal phantasm”—but then again, Matt’s work so effectively reminds us that computation is not as non-spatial and non-temporal as it is sometimes made to seem.
But I do think it’s worth asking where the invocation of the symbol grounding problem leads rhetorically wrt general-purpose value judgments about LLMs, and I think it’s often to the same kind of shallow, knee-jerk humanism that’s been a feature of AI discourse since time immemorial. end/
To be clear, I don’t think that you’re doing this here and I do think that the symbol grounding problem is a much more salient and well-founded characterization of LLMs than glorified autocomplete, just linear algebra, stochastic parrots, and so on. 3/
And I think the answer often involves an implicit or explicit appeal to some value-laden humanistic category (art, love, beauty, empathy, etc.) that is assumed to be accessible only to beings that use language in an embodied, grounded way. 2/
I think the issue I have with the symbol grounding problem is not its basic contention, but its framing as a “problem” rather than a descriptive account of how computers use language. If we accept that “LLMs can’t touch grass,” as I’m inclined to, the next question is so what? 1/
it would be like saying that the tiger is made out of protons
also just not true at the most basic technical level, nonlinear activations between matrix multiplications are super important for networks to learn anything useful
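A minimal numpy sketch of the point above (toy weights, not from any real model): without a nonlinearity, stacked matrix multiplications collapse into a single matrix multiply, so the network can only ever represent a linear function; a ReLU between the layers breaks that collapse.

```python
import numpy as np

# Hypothetical toy weights for a two-"layer" network
W1 = np.array([[1., -1.],
               [0.,  1.]])
W2 = np.array([[1., 1.]])
x = np.array([1., 2.])

# Two layers with no activation are equivalent to one precomputed matrix:
two_linear = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
assert np.allclose(two_linear, collapsed)  # same result: "just" one matrix

# A ReLU between the multiplications breaks the equivalence,
# which is what lets deep networks learn nonlinear functions.
relu = lambda v: np.maximum(v, 0.)
with_relu = W2 @ relu(W1 @ x)
assert not np.allclose(with_relu, collapsed)  # no single matrix does this
```

Here W1 @ x is [-1, 2]; the ReLU zeroes the negative entry, so the output changes from [1.] to [2.], something no single matrix applied to x could reproduce for all inputs.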
“it’s just matrix multiplication” is probably the worst one, and always delivered so smugly
to be fair, it’s hard to make a tasteful advertisement for brain pills