I'll be there - all going to plan!
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users, in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
AI presents a fundamental threat to our ability to use polls to assess public opinion. Bad actors who are able to infiltrate panels can flip close election polls for less than the cost of a Starbucks coffee. Models will also infer and confirm hypotheses in experiments. Current quality checks fail.
Hard recommend for the Mystery AI Hype Theater 3000 podcast - although you will end up sitting with your head in your hands, maybe crying a bit…
Imagine believing that using text-generating machines to perform clinical assessments & replace expert advice from real, qualified humans could improve health care.
Such machines will deskill experts & surely kill.
And you can bet the machines & their makers wonโt be held accountable.
Bureaucratic benchmarks are soul-crushing because they leave a big gap between what we care about and what can be measured.
When we forget about the things we actually care about (like making interesting discoveries) and we write worse papers to get more publications, the metric eats the value.
If your study is framed as asking whether "AI" does X as well as humans do, it's fundamentally misguided and I'd argue not scientifically sound.
A short 🧵 >>
'to treat peer review as a throughput problem is to misunderstand what is at stake. Review is not simply a production stage in the research pipeline; it is one of the few remaining spaces where the scientific community talks to itself.' 1/3
I hate this. I hate that scholars and teachers are supposed to be digital fraud experts. I hate that this part of their job description is becoming larger and larger. I hate the widening distrust. I hate a culture that aggressively devalues the curiosity and humility required for ongoing learning.
Title page with abstract: The current AI hype cycle combined with Psychology's various crises make for a perfect storm. Psychology, on the one hand, has a history of weak theoretical foundations, a neglect of computational and formal skills, and a hyperempiricist privileging of experimental tasks and testing for effects. Artificial Intelligence, on the other hand, has a history of conflating artifacts for theories of cognition, or even minds themselves, and its engineering offspring likes to move fast and break things. Many of our contemporaries now want to combine the worst of these two worlds. What could possibly go wrong? Quite a lot. Does this mean that Psychology and Artificial Intelligence can best part ways? Not at all. There are very fruitful ways in which the two disciplines can interact and theoretically contribute to Cognitive Science, for instance, by studying the scope and limits of computational models of human cognition. But to reap the fruits one needs to understand how to steer clear of potential traps.
Will do a brief thread with highlights:
"Many of our contemporaries now want to combine the worst of these two worlds [i.e., Psychology and Artificial Intelligence].
What could possibly go wrong?
Quite a lot."
2/🧵
It is EXHAUSTING not only being made responsible for coming up with new kinds of assignments for our students; it's also tedious reading op-eds that suggest the core problem is a crisis in teaching. But, as Chris and I lay out here, this isn't a crisis in teaching; it's an attack on learning.
The latest QRP (although it goes well beyond "questionable" and straight into the realm of junk data fraud IMHO): LLM-hacking
Good luck drawing reliable conclusions from the answers that Qualtrics' AI model provides to your survey questions... bsky.app/profile/joac...
title: Cheap science, real harm: the cost of replacing human participation with synthetic data author: Abeba Birhane abstract: Driven by the goals of augmenting diversity, increasing speed, and reducing cost, the use of synthetic data as a replacement for human participants is gaining traction in AI research and product development. This talk critically examines the claim that synthetic data can "augment diversity," arguing that this notion is empirically unsubstantiated, conceptually flawed, and epistemically harmful. While speed and cost-efficiency may be achievable, they often come at the expense of rigour, insight, and robust science. Drawing on research from dataset audits, model evaluations, Black feminist scholarship, and complexity science, I argue that replacing human participants with synthetic data risks producing both real-world and epistemic harms at worst and superficial knowledge and cheap science at best.
I wrote this brief talk on why "augmenting diversity" with LLMs is empirically unsubstantiable, conceptually flawed, and epistemically harmful, and a nice surprise to see the organisers have made it public
synthetic-data-workshop.github.io/papers/13.pdf
Delighted that my grant proposal with Anita Eerland, Verbs and Eyewitness Testimony: A Multilab Registered Replication Report, has been funded by @NWO (Dutch Research Council) through OpenScience.nl. Excited to get started on the project I describe here.
rolfzwaan.substack.com/p/memory-mis...
Very interesting - and look forward to reading. Do you have any thoughts about the extent to which there might be a developmental angle to trait-like over-confidence?
🚨 Now out in Psych Science 🚨
We report an adversarial collaboration (with @donandrewmoore.bsky.social) testing whether overconfidence is genuinely a trait
The paper was led by Jabin Binnendyk & Sophia Li (who is fantastic and on the job market!) Free copy here: journals.sagepub.com/eprint/7JIYS...
"an ever-widening gap between those who do the work and those who administer it. And an even larger gap exists between those tasked with most of the teaching and those who do most of the budgeting."
www.aaup.org/underclass-s...
#Highered #PhDchat #research #teaching #academicsky
A broken record (vinyl music album)
We know the drivers of research waste in academia are
⚠️ Pressure to maximize papers and PhD students
⚠️ Endless demands on time due to poor management
⚠️ Stakeholders don't insist on robust quality systems to underpin mission critical work
Solutions that don't address these are pointless.
"Berg's point is that AI doesn't merely automate tasks – it automates the very processes through which people develop their skills."
The most precious commodity you have is your attention. You don't have to waste it on poor-faith debates or arguments with strangers if you don't think they'll be productive. You can prioritize the things that matter to you and make your life richer.
I feel you, ancient Mongolian ceramic hedgehog. I feel you.
It's widely known (and, I think, pretty uncontroversial) that learning requires effort. Specifically, if you don't have to work at getting the knowledge, it won't stick.
Even if an LLM could be trusted to give you correct information 100% of the time, it would be an inferior method of learning it.
Absolutely this… there are still many predators evading their comeuppance, including in my own field.
new paper by Sean Westwood:
With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online survey research.
This seems bad on like 15 different fronts
New paper by @emilyspearing.bsky.social et al. out now in the Journal of Environmental Psychology
Black Summer Arson: Examining the Impact of Climate Misinformation and Corrections on Reasoning
doi.org/10.1016/j.je...
Tweet from Jack: What trillion dollar problem is AI trying to solve? Wages. They are trying to use it to solve having to pay wages.
Evergreen.