Iris van Rooij πŸ’­'s Avatar

Iris van Rooij πŸ’­

@irisvanrooij

Professor of Computational Cognitive Science | @AI_Radboud | @Iris@scholar.social on 🦣 | http://cognitionandintractability.com | she/they πŸ³οΈβ€πŸŒˆ

17,600
Followers
1,253
Following
2,586
Posts
29.05.2023
Joined

Latest posts by Iris van Rooij πŸ’­ @irisvanrooij

Box 1 — Implications of intractability

I had a realisation

Context: In our Reclaiming AI paper we argued that AI systems cannot scale up to human-level cognition without consuming astronomical amounts of resources

My realisation: The AI industry is determined to burn through the earth’s resources just to prove us right *empirically*

25.02.2026 21:48 πŸ‘ 106 πŸ” 33 πŸ’¬ 6 πŸ“Œ 2
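The resource claim above rests on a formal intractability argument. As a toy illustration only (not the paper's actual proof), the core intuition is exponential blow-up: the number of candidate hypotheses over even modestly sized binary feature spaces outgrows any physical resource budget, such as the commonly cited rough estimate of 10^80 atoms in the observable universe.

```python
# Toy illustration of exponential blow-up (not the paper's actual proof):
# the space of distinct binary hypotheses over n features is 2**n, which
# outgrows any physical resource budget long before n gets large.
ATOMS_IN_UNIVERSE = 10 ** 80  # common rough estimate

for n in (10, 100, 300):
    candidates = 2 ** n  # distinct binary labelings of n features
    print(n, candidates > ATOMS_IN_UNIVERSE)
```

At n = 300 the hypothesis count already exceeds the atom estimate; no amount of scaling compute closes an exponential gap.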

I have ADHD and I'm pancreatically impaired and I do not want people trying to forward those technofash slop generators on my behalf

04.03.2026 18:29 πŸ‘ 25 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1
Ad for a session. There is a black and teal gradient in the back. It reads
AWP 2026 Conference
How to Resist AI in Writing & Teaching

Then three images:
A femme with dark hair wearing a black blazer and a dark blouse, looking towards the camera, with an arm up on a table. Below it reads, Carmen Maria Machado.

A black and white picture of a man with dark skin, slightly long black hair, and dark stubble. Below it reads, Umair Kazi.

A brown trans woman with slightly longer black curly hair, wearing a black sweater, with her arms crossed. She is standing in front of a brick wall. Below it reads, Dr. Alex Hanna.

A fourth picture is on the right: a woman with brown skin, shoulder-length black hair, is smiling and looking at the camera. She is wearing a chunky necklace and a black t-shirt. Below it reads, Moderated by Vauhini Vara.

Below the images, it reads:
Thursday, March 5, 12:10 PM. Room 310. Sponsored by The Author's Guild. The Author's Guild is represented by its logo.


Thursday @ AWP 2026! Join Carmen Maria Machado, Umair Kazi, @vauhinivara.bsky.social, and myself as we discuss how to Resist AI in Writing and Teaching. Sponsored by @authorsguild.org.

12:10 PM in Room 310. See you in Baltimore!

authorsguild.org/event/awp-20...

04.03.2026 23:04 πŸ‘ 96 πŸ” 31 πŸ’¬ 4 πŸ“Œ 0
A weathered leather bound book with a design devised in gold reading “Are we a stupid people?” With a large gold question mark and a lion holding a flag and a heraldic shield underneath. The bottom reads “By One of Them”

Today’s research deep dive brought me this leather bound book cover

05.03.2026 07:26 πŸ‘ 36 πŸ” 6 πŸ’¬ 1 πŸ“Œ 2

And doctors used to prescribe cigarettes or whatever? Who cares? The tide goes in and out, evil genies get stuffed back into the bottle, and mathematical and ethical truth bends my way FYI 😌

olivia.science/before

05.03.2026 06:22 πŸ‘ 37 πŸ” 9 πŸ’¬ 1 πŸ“Œ 2

Also because guard rails are a scam. Sadly.

05.03.2026 06:02 πŸ‘ 60 πŸ” 9 πŸ’¬ 2 πŸ“Œ 0
Reclaiming AI as a Theoretical Tool for Cognitive Science - Computational Brain & Behavior The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive scien...

The answer is probably either in this paper:

doi.org/10.1007/s421...

Or this:

philsci-archive.pitt.edu/25289

Or both! If you search the doi link on here you'll find threads by me and @irisvanrooij.bsky.social on these two, but I suspect the papers offer the detail you want on what we think!

05.03.2026 04:48 πŸ‘ 7 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

πŸ‘€

🫩

Just normal stuff

bsky.app/profile/geom...

4/n

05.03.2026 06:14 πŸ‘ 24 πŸ” 8 πŸ’¬ 1 πŸ“Œ 0

Also the playbook is basically shared between tobacco and AI, as well as petroleum...

bsky.app/profile/oliv...

olivia.science/before

3/n

05.03.2026 06:12 πŸ‘ 24 πŸ” 8 πŸ’¬ 2 πŸ“Œ 0

Long story short on the relevant parts: the tobacco industry jumped on "stress" to divert from the fact that cigs cause cancer, much like AI companies will inevitably do the same for psychosis or wtv to divert from the fact that their bots cause harm. No user is causing this.

& importantly: bsky.app/profile/oliv...

2/

05.03.2026 06:09 πŸ‘ 41 πŸ” 8 πŸ’¬ 1 πŸ“Œ 1

Inevitably they will blame psychosis. And we've seen this before with companies and academics claiming lung cancer is caused by stress not smoking!

Remember Hans Eysenck? www.theguardian.com/science/2019...

> This research programme has led to one of the worst scientific scandals of all time

1/n

05.03.2026 06:09 πŸ‘ 67 πŸ” 16 πŸ’¬ 3 πŸ“Œ 1
We've been here before! Parallels between AI and tobacco, and other warnings.

Bingo!

olivia.science/before

04.03.2026 23:28 πŸ‘ 20 πŸ” 2 πŸ’¬ 3 πŸ“Œ 1
Grammarly Offering Manuscript Reviews by AI Versions of Recently Deceased Professors The Grammarly "Expert Review" feature uses AI to provide feedback on papers using the name and work of real professors, dead or alive.

Daily reminder that calling ai dead labor and stolen labor is literal.

04.03.2026 22:26 πŸ‘ 327 πŸ” 128 πŸ’¬ 8 πŸ“Œ 24

Also see bsky.app/profile/oliv...

04.03.2026 06:06 πŸ‘ 4 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Might have to do a thread based on my paper with @andreaeyleen.bsky.social doi.org/10.1037/rev0... because it's just not a good argument & mathematically false. At the moment it's all explained in the paper, if anybody is interested. But the misunderstanding of proofs by professionals is sad. Sorry.

04.03.2026 05:17 πŸ‘ 12 πŸ” 2 πŸ’¬ 2 πŸ“Œ 0

These people just want to destroy academic work from research to education while pretending they understood what they want to bulldoze

bsky.app/profile/oliv...

04.03.2026 05:48 πŸ‘ 79 πŸ” 7 πŸ’¬ 1 πŸ“Œ 1

Search engines already exist and we use them.

The bot can't read the papers for you.

What exactly is the value proposition here.

03.03.2026 19:40 πŸ‘ 190 πŸ” 9 πŸ’¬ 2 πŸ“Œ 1
email to me with a title: 2027 MSc in Artificial Intelligence Application – Research Interest in Trustworthy Generative AI & Multi-Agent Safety

email body: I have been deeply inspired by your pioneering work on AI accountability, algorithmic harm governance, and ethical alignment of generative multi-modal systems. As Geoffrey Hinton has repeatedly warned the global community about the existential and structural risks of unregulated AI systems, I have long been searching for actionable, ethical frameworks to translate these high-level warnings into practical, safe AI design β€” and your research has been the definitive guide for me. In particular, your 2023 paper in Nature Machine Intelligence on the structural risks of large-scale generative models, as well as your AI Accountability Framework developed at the Mozilla Foundation, have fundamentally shaped my core belief: capable AI systems must be built on the premise of safety, transparency, and consistent alignment with human values, rather than pursuing functionality alone.


never published in Nature Machine Intelligence, nor do i have work on an "AI Accountability Framework"

i know this is now normal but i want you all to stop & reflect on how much the future is fucked & the only way to mitigate this disaster is to ban/limit this damned technology

04.03.2026 12:11 πŸ‘ 130 πŸ” 39 πŸ’¬ 8 πŸ“Œ 3

As I say at the top, the most useful message is that AI products cannot promise that guardrails work because, by definition, unless the internals of the system stop being the type of LLMs used, you need a human between toy and child/user. Defeating the point 100%, of course!

6/n

bsky.app/profile/mari...

17.11.2025 06:11 πŸ‘ 125 πŸ” 24 πŸ’¬ 3 πŸ“Œ 1
04.03.2026 19:29 πŸ‘ 27 πŸ” 8 πŸ’¬ 0 πŸ“Œ 0

It is kind of suspicious that the only people I see actively defending LLMs as morally neutral seem to have very specific career incentives to do so. Especially in the academy!

04.03.2026 16:34 πŸ‘ 150 πŸ” 24 πŸ’¬ 5 πŸ“Œ 3
We've been here before! Parallels between AI and tobacco, and other warnings.

Yes, and if useful here's extra ammo

olivia.science/before

04.03.2026 18:33 πŸ‘ 7 πŸ” 5 πŸ’¬ 0 πŸ“Œ 0

This relates to something I've been trying to say: the idea that "AI is fine to use for research, you just need to check the output" is ridiculous when the AI can generate as much text as you have money to shove into the machine, and there are only 16 billion eyes to read it.

04.03.2026 17:25 πŸ‘ 84 πŸ” 14 πŸ’¬ 3 πŸ“Œ 1
Against the Uncritical Adoption of 'AI' Technologies in Academia Under the banner of progress, products have been uncritically adopted or even imposed on users β€” in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...

problematic aspects of LLMs? Not for me.

Two links that might enlighten Donald:

- Against the Uncritical Adoption of 'AI' Technologies in Academia: zenodo.org/records/1706...
- The AI Con: thecon.ai

04.03.2026 12:22 πŸ‘ 12 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0
Against the Uncritical Adoption of 'AI' Technologies in Academia Under the banner of progress, products have been uncritically adopted or even imposed on users β€” in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...

I commented here (will Donald Knuth read it?):

news.ycombinator.com/item?id=4723...

I think it's wrong to criticise LLMs with β€˜it can't do that’ (from what I understood from the first paragraph, this was Donald's criticism).
If it can, does it make a difference in relation to all the other +++πŸ‘‡

04.03.2026 12:22 πŸ‘ 15 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1

We are smarter, in that respect, than Donald Knuth bsky.app/profile/adol...

04.03.2026 17:33 πŸ‘ 10 πŸ” 4 πŸ’¬ 0 πŸ“Œ 1

i think enthusiastic LLM use is mostly a stack of cognitive biases, unacknowledged plagiarism, and unmet needs in a trenchcoat

but also my main objections aren't about them being bad at tasks so i don't care if you think they've gotten better at it

04.03.2026 16:53 πŸ‘ 454 πŸ” 134 πŸ’¬ 8 πŸ“Œ 3
Getting Past Past-Tense

[ANNs] are not perfect: they are not really explainable, they are not pliable, i.e., they cannot be easily modified to correct any errors observed, and they are not efficient due to the overhead of decoding. In contrast, rule-based methods are more transparent to subject matter experts; they are amenable to having a human in the loop through intervention, manipulation and incorporation of domain knowledge; and further the resulting systems tend to be lightweight and fast. (Chiticariu et al. 2023, p. iii)

In what is known in the literature as the past-tense debate (e.g., Elman et al., 1996; Pinker & Ullman, 2002), cognition and its underpinning substrates were discussed in terms of whether hard-wired capacities, such as grammatical rules for English past-tense formation, are encoded in the genes or otherwise without learning. Furthermore, claims were made about connectionist systems, such as, ANN “models cannot deal with languages such as Hebrew, where regular and irregular nouns are intermingled in the same phonological neighborhoods” (Pinker & Ullman, 2002, p. 459). While it may have been true for models at the time that certain data sets were unlearnable, or specific nondeep ANNs had limited learning abilities due to their architecture or training set or regimen, this both does not hold in the present day for certain data sets (discussed below) and continues to hold in the sense that there are data sets that are inaccessible to modeling endeavors using ANNs (see proof in van Rooij et al., 2024). Work such as Zhang et al. (2016, 2017) can serve to neutralize the claim that ANNs might struggle with certain unstructured data sets, for example, “where regular and irregular nouns are intermingled” (Pinker & Ullman, 2002, p. 459), by demonstrating that ANNs can learn utterly random mappings between inputs and outputs. Of course, such a finding about ANNs is also problematic to C-connectionists, who propose that in many cases similar input–output…


The relevant section, "Getting Past Past-Tense", is on page 10. See the pdf here; it's not that long, but longer than this extract: olivia.science/doc/GuestMar...

Guest, O. & Martin, A. E. (2025). A Metatheory of Classical and Modern Connectionism. Psychological Review. doi.org/10.1037/rev0...

04.03.2026 06:05 πŸ‘ 7 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1
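The Zhang et al. (2016, 2017) random-label result the excerpt cites can be reproduced in miniature: an over-parameterized network will fit arbitrary random input-output mappings, i.e., data with no structure at all. A minimal sketch (a hand-rolled one-hidden-layer net in NumPy; the dataset sizes, architecture, and hyperparameters here are illustrative choices, not the papers' actual setups):

```python
import numpy as np

rng = np.random.default_rng(0)

# 16 random inputs paired with *random* binary labels: by construction
# there is no regularity to generalize from, only memorization.
N, D, H = 16, 5, 256  # samples, input dim, hidden units (H >> N)
X = rng.normal(size=(N, D))
y = rng.integers(0, 2, size=N).astype(float)

# One hidden layer (tanh), sigmoid output, full-batch gradient descent
# on binary cross-entropy.
W1 = rng.normal(scale=0.5, size=(D, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H)
b2 = 0.0
lr = 0.02

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

for step in range(8000):
    h, p = forward(X)
    g_out = (p - y) / N          # dBCE/dlogit for sigmoid output
    gW2 = h.T @ g_out
    gb2 = g_out.sum()
    g_h = np.outer(g_out, W2) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
train_acc = float(((p > 0.5) == (y == 1)).mean())
print(train_acc)  # training accuracy on the random labels
```

With this much capacity relative to the data, the net typically drives training accuracy to 1.0 on labels that are pure noise, which is the excerpt's point: fitting a data set demonstrates nothing about having learned structure.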

cool pincer movement if you truly grasp:

AI & any concept relating to it, like so-called guardrails, are a scam in the deepest sense, like a perpetual motion machine or an Ouija board — and not only a scam like a pyramid scheme, which is a possible way to make money if you are first in, first out

🧡

1/n

17.11.2025 05:51 πŸ‘ 376 πŸ” 134 πŸ’¬ 11 πŸ“Œ 32
Critical AI Literacies for Resisting and Reclaiming | Radboud University This course is designed to foster critical AI literacies in participants to empower them to develop ways of resisting or reclaiming AI in their own practices and social context.

πŸ“š β˜€οΈ

Deadline for early bird fee is March 31. You can apply here: www.ru.nl/en/education... We have limited space, and will select based on motivation. We especially encourage women and/or minoritized people to apply.

4/🧡

19.02.2026 21:27 πŸ‘ 9 πŸ” 4 πŸ’¬ 1 πŸ“Œ 2