
Lorraine Hope

@lorrainehope

Professor of Applied Cognitive Psychology at University of Portsmouth, UK. Special interest in memory performance and memory elicitation techniques. Views own.

828
Followers
551
Following
36
Posts
03.10.2023
Joined

Latest posts by Lorraine Hope @lorrainehope

I'll be there - all going to plan!

05.03.2026 23:12 👍 2 🔁 0 💬 0 📌 0
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 👍 3787 🔁 1897 💬 110 📌 390

AI presents a fundamental threat to our ability to use polls to assess public opinion. Bad actors who are able to infiltrate panels can flip close election polls for less than the cost of a Starbucks coffee. Models will also infer and confirm hypotheses in experiments. Current quality checks fail.

18.11.2025 21:23 👍 211 🔁 97 💬 4 📌 26

Hard recommend for the Mystery AI Hype Theatre 3000 Podcast - although you will end up sitting with your head in your hands, maybe crying a bit…

13.02.2026 16:39 👍 7 🔁 3 💬 0 📌 0

Imagine believing that using text-generating machines to perform clinical assessments & replace expert advice from real, qualified humans could improve health care.

Such machines will deskill experts & surely kill.

And you can bet the machines & their makers won't be held accountable.

13.02.2026 15:50 👍 15 🔁 8 💬 0 📌 0

Bureaucratic benchmarks are soul-crushing because they leave a big gap between what we care about and what can be measured.

When we forget about the things we actually care about (like making interesting discoveries) and we write worse papers to get more publications, the metric eats the value.

12.02.2026 16:05 👍 11 🔁 3 💬 0 📌 0

If your study is framed as asking whether "AI" does X as well as humans do, it's fundamentally misguided and I'd argue not scientifically sound.

A short 🧵>>

09.02.2026 14:15 👍 200 🔁 54 💬 5 📌 5
AI is not a peer, so it can't do peer review. If we still believe that science is a vocation grounded in argument, curiosity and care, we can't delegate judgement to machines, says Akhil Bhardwaj.

'to treat peer review as a throughput problem is to misunderstand what is at stake. Review is not simply a production stage in the research pipeline; it is one of the few remaining spaces where the scientific community talks to itself.' 1/3

03.02.2026 08:17 👍 367 🔁 156 💬 6 📌 20

I hate this. I hate that scholars and teachers are supposed to be digital fraud experts. I hate that this part of their job description is becoming larger and larger. I hate the widening distrust. I hate a culture that aggressively devalues the curiosity and humility required for ongoing learning.

30.01.2026 19:35 👍 183 🔁 74 💬 5 📌 0
Title page with abstract:

The current AI hype cycle combined with Psychology's various crises make for a perfect storm. Psychology, on the one hand, has a history of weak theoretical foundations, a neglect for computational and formal skills, and a hyperempiricist privileging of experimental tasks and testing for effects. Artificial Intelligence, on the other hand, has a history of conflating artifacts for theories of cognition, or even minds themselves, and its engineering offspring likes to move fast and break things. Many of our contemporaries now want to combine the worst of these two worlds. What could possibly go wrong? Quite a lot. Does this mean that Psychology and Artificial Intelligence can best part ways? Not at all. There are very fruitful ways in which the two disciplines can interact and theoretically contribute to Cognitive Science, for instance, by studying the scope and limits of computational models of human cognition. But to reap the fruits one needs to understand how to steer clear of potential traps.

Will do a brief thread with highlights:

"Many of our contemporaries now want to combine the worst of these two worlds [i.e., Psychology and Artificial Intelligence].

What could possibly go wrong?

Quite a lot."

2/🧵

06.01.2026 17:49 👍 67 🔁 17 💬 2 📌 0

It is EXHAUSTING not only being made responsible for coming up with new kinds of assignments for our students; it's also tedious reading op-eds that suggest the core problem is a crisis in teaching. But, as Chris and I lay out here, this isn't a crisis in teaching; it's an attack on learning.

24.12.2025 19:39 👍 1677 🔁 508 💬 13 📌 23
18.12.2025 09:19 👍 66 🔁 21 💬 1 📌 0

The latest QRP (although it goes well beyond 'questionable' and straight into the realm of junk data fraud IMHO): LLM-hacking

18.12.2025 09:16 👍 0 🔁 0 💬 0 📌 0

Good luck drawing reliable conclusions from the answers that Qualtrics' AI model provides to your survey questions... bsky.app/profile/joac...

16.12.2025 18:54 👍 27 🔁 7 💬 0 📌 1
title: Cheap science, real harm: the cost of replacing human participation with synthetic data

author: Abeba Birhane

abstract: Driven by the goals of augmenting diversity, increasing speed, and reducing cost, the use of synthetic data as a replacement for human participants is gaining traction in AI research and product development. This talk critically examines the claim that synthetic data can "augment diversity," arguing that this notion is empirically unsubstantiated, conceptually flawed, and epistemically harmful. While speed and cost-efficiency may be achievable, they often come at the expense of rigour, insight, and robust science. Drawing on research from dataset audits, model evaluations, Black feminist scholarship, and complexity science, I argue that replacing human participants with synthetic data risks producing both real-world and epistemic harms at worst, and superficial knowledge and cheap science at best.

I wrote this brief talk on why "augmenting diversity" with LLMs is empirically unsubstantiable, conceptually flawed, and epistemically harmful, and a nice surprise to see the organisers have made it public

synthetic-data-workshop.github.io/papers/13.pdf

16.12.2025 10:57 👍 827 🔁 260 💬 20 📌 10
Memory, Misinformation, and the Need to Replicate. Suppose you and a friend witness a car crash.

Delighted that my grant proposal with Anita Eerland, Verbs and Eyewitness Testimony: A Multilab Registered Replication Report, has been funded by @NWO (Dutch Research Council) through OpenScience.nl. Excited to get started on the project I describe here.

rolfzwaan.substack.com/p/memory-mis...

18.12.2025 08:03 👍 13 🔁 5 💬 0 📌 0

Very interesting - and look forward to reading. Do you have any thoughts about the extent to which there might be a developmental angle to trait-like over-confidence?

18.12.2025 09:11 👍 1 🔁 0 💬 1 📌 0

🚨 Now out in Psych Science 🚨

We report an adversarial collaboration (with @donandrewmoore.bsky.social) testing whether overconfidence is genuinely a trait

The paper was led by Jabin Binnendyk & Sophia Li (who is fantastic and on the job market!) Free copy here: journals.sagepub.com/eprint/7JIYS...

17.12.2025 17:17 👍 127 🔁 41 💬 8 📌 6
UK to re-join Erasmus+ – here are six benefits of the European exchange scheme. Erasmus+ is an accessible and well-supported programme.

18.12.2025 08:46 👍 17 🔁 4 💬 0 📌 3
The Underclass Is in Session. What do we see when we view the structure of academic labor as it is, not as we wish it to be?

"an ever-widening gap between those who do the work and those who administer it. And an even larger gap exists between those tasked with most of the teaching and those who do most of the budgeting."
www.aaup.org/underclass-s...
#Highered #PhDchat #research #teaching #academicsky

13.12.2025 08:29 👍 19 🔁 5 💬 0 📌 0
A broken record (vinyl music album)


We know the drivers of research waste in academia are

⚠️ Pressure to maximize papers and PhD students
⚠️ Endless demands on time due to poor management
⚠️ Stakeholders don't insist on robust quality systems to underpin mission-critical work

Solutions that don't address these are pointless.

30.09.2024 08:45 👍 58 🔁 24 💬 2 📌 0

"Berg's point is that AI doesn't merely automate tasks — it automates the very processes through which people develop their skills."

30.11.2025 20:07 👍 86 🔁 28 💬 1 📌 2

The most precious commodity you have is your attention. You don't have to waste it on poor-faith debates or arguments with strangers if you don't think they'll be productive. You can prioritize the things that matter to you and make your life richer.

30.11.2025 20:00 👍 11825 🔁 3005 💬 131 📌 196

I feel you, ancient Mongolian ceramic hedgehog. I feel you.

26.11.2025 10:17 👍 2316 🔁 938 💬 19 📌 9

It's widely known (and, I think, pretty uncontroversial) that learning requires effort — specifically, if you don't have to work at getting the knowledge, it won't stick.

Even if an LLM could be trusted to give you correct information 100% of the time, it would be an inferior method of learning it.

21.11.2025 12:49 👍 5624 🔁 1587 💬 88 📌 46

Absolutely this… there are still many predators evading their comeuppance, including in my own field.

20.11.2025 13:13 👍 2 🔁 0 💬 0 📌 0

new paper by Sean Westwood:

With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online-based survey research.

18.11.2025 19:15 👍 777 🔁 390 💬 41 📌 126

This seems bad on like 15 different fronts

18.11.2025 21:59 👍 8 🔁 2 💬 1 📌 1

New paper by @emilyspearing.bsky.social et al. out now in the Journal of Environmental Psychology

Black Summer Arson: Examining the Impact of Climate Misinformation and Corrections on Reasoning

doi.org/10.1016/j.je...

17.11.2025 08:43 👍 2 🔁 2 💬 1 📌 0
Tweet from Jack:

What trillion dollar problem is AI trying to solve?

Wages. They are trying to use it to solve having to pay wages.


Evergreen.

08.08.2025 19:12 👍 501 🔁 158 💬 3 📌 12