
Olivia Guest · Ολίβια Γκεστ

@olivia.science

https://olivia.science assistant professor of computational cognitive science · she/they · cypriot/kıbrıslı/κυπραία · σὺν Ἀθηνᾷ καὶ χεῖρα κίνει (alongside Athena, move your own hand too)

17,446 Followers · 2,192 Following · 7,442 Posts · Joined 02.05.2023

Latest posts by Olivia Guest · Ολίβια Γκεστ @olivia.science

Grammarly is using our identities without permission
Grammarly’s AI stole my boss’s identity.

Lawsuits when.

06.03.2026 21:26 👍 151 🔁 52 💬 4 📌 9
Haley Hubbard, a lawyer representing Heindel, said in a written statement that the open records law “allows citizens the right to know how much of a limited natural resource the county has committed to supply through an agreement that could last up to 70 years with one of the world’s largest companies.”
“It is disappointing that Dorchester County yielded to Google’s demands to keep the public in the dark about a long-term commitment of a precious public resource, especially since Google has conceded in similar litigation in Oregon that its trade secret argument has little merit,” he said.
The Oregon case involved similar trade secret assertions that The Dalles — a small city east of Portland — made when a newspaper asked how much water Google was consuming at a data center in the area.
In a settlement reached in December 2022, The Dalles agreed to provide 10 years of water-use figures and to continue to disclose them going forward.


Yeah this has been going on for years.

www.postandcourier.com/business/dor...

06.03.2026 19:09 👍 3 🔁 1 💬 0 📌 0

I have only just recently started digging into data centres and water, and the way companies are hiding vital information is beyond belief. It is so, so dodgy.

The entire project to widely popularise the idea that these concerns are "fake" makes so much sense in this context

06.03.2026 12:22 👍 286 🔁 124 💬 3 📌 12

A big thank you to the @sustainableai.bsky.social lab for having me on to talk about my great big new report!!

Full video here: www.youtube.com/watch?v=PGIY...

And a clip below, about the under-discussed tension within climate / energy spaces on being pro or anti AI -->>>>

06.03.2026 14:26 👍 16 🔁 4 💬 1 📌 0

Also the playbook between tobacco and AI as well as petroleum is basically shared...

bsky.app/profile/oliv...

olivia.science/before

3/n

05.03.2026 06:12 👍 24 🔁 8 💬 2 📌 0

Long story short on relevant parts: tobacco industry jumped on "stress" to divert from cigs cause cancer, much like AI companies will inevitably do the same for psychosis or wtv to divert from the fact that their bots cause harm. No user is causing this.

& importantly: bsky.app/profile/oliv...

2/

05.03.2026 06:09 👍 41 🔁 8 💬 1 📌 1

Inevitably they will blame psychosis. And we've seen this before with companies and academics claiming lung cancer is caused by stress not smoking!

Remember Hans Eysenck? www.theguardian.com/science/2019...

> This research programme has led to one of the worst scientific scandals of all time

1/n

05.03.2026 06:09 👍 67 🔁 16 💬 3 📌 1

Old talk I thought I'd make public also with @samhforbes.bsky.social iirc

06.03.2026 19:47 👍 1 🔁 0 💬 0 📌 0

We have shirts!

store.dair-institure.org

05.03.2026 20:06 👍 87 🔁 11 💬 4 📌 2
The DAIR Store
The Distributed AI Research Institute Store

store.dair-institute.org

I swear I can type

05.03.2026 20:27 👍 12 🔁 1 💬 0 📌 0

You all are glowing 🌞🪻🌈

06.03.2026 19:33 👍 3 🔁 0 💬 1 📌 0

Prob but my brain is mush with migraine now haha so to even start these thoughts will kill me off 🫠

06.03.2026 17:11 👍 0 🔁 0 💬 0 📌 0
Google is powering a new US military AI platform
“The future of American warfare is here, and it’s spelled A-I.”

While other AI companies are still struggling to monetize #genAI, Google has found an obvious solution: Sell it to the military. Now, U.S. soldiers can unleash the power of AI for tasks such as «summarizing policy handbooks,» a task that will certainly prove vital on the battlefields of the future.

10.12.2025 08:17 👍 106 🔁 34 💬 17 📌 9
Olivia Guest
Hi! I am an Assistant Professor of Computational Cognitive Science. I work in the Department of Cognitive Science and Artificial Intelligence in the Donders Centre for Cognition and the School of Arti...

PS:

1) For all pdfs see the little icon next to each paper here: olivia.science#publications

2) Some have video recordings (see relevant icon) too!

And 3) if you need anything specific to AI see: olivia.science/ai

27.01.2026 06:25 👍 7 🔁 1 💬 0 📌 0

OK! I collected much of what I @spookyachu.bsky.social @andreaeyleen.bsky.social (and other collaborators not on here) have said on the Turing test (from critical, gendered, etc. angles) as it keeps being relevant: olivia.science/turing — hope it's useful for others too. Happy Sunday! 🤖💭

15.02.2026 13:19 👍 77 🔁 33 💬 8 📌 7

I compiled a sort of follow up here olivia.science/before/ on the situation with ppl thinking we're in some special moment:

"We have certainly been here before. Many many times in the past, companies — just like artificial intelligence (AI) companies now — have lied to us to sell us products."

1/

18.02.2026 14:44 👍 42 🔁 17 💬 3 📌 1
	Pygmalion Lens	
1)	Feminised form: Is the AI, by its (default or exclusive) external characteristics, portraying a hegemonically feminine character?	Yes/No
2)	Whitened form: Is the AI, by its (default or exclusive) external characteristics, portraying a character that is inherently white (supremacist), Western, Eurocentric, etc.?	Yes/No
3)	Dislocation from work: Does the AI displace women from a role or occupation, or people in general from a role or occupation that tends to be (coded as) women's work?	Yes/No
4)	Humanisation via feminisation: Are the AI's claims to intelligence, human-likeness or personhood contingent on stereotypical feminine traits or behaviours?	Yes/No
5)	Competition with women: Is the AI pitted (rhetorically or otherwise) against women in ways that favour it, and which are harmful to women?	Yes/No
6)	Diminishment via false equivalence: Does the AI facilitate a rhetoric that deems women as not having full intellectual abilities, or as otherwise less deserving of personhood?	Yes/No
7)	Obfuscation of diversity: Does the AI, through displacement of specific groups of people, “neutralise” (i.e., whiten, masculinise) a role, vocation, or skill?	Yes/No
8)	Robot rights: Do the users and/or creators of the AI grant it (aspects of) legal personhood or human(-like) rights?	Yes/No
9)	Social bonding: Do the users and/or creators of the AI develop interpersonal-like relationships with it?	Yes/No
10)	Psychological service: Does the AI function to subserve and enhance the egos of its creators and/or users?	Yes/No
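The ten-question table above is effectively a checklist, so it can be sketched as a small data structure. This is an illustrative sketch only, not anything from the published lens: the short keys and the `apply_lens` helper are my own shorthand, and the question texts are abbreviated from the table.

```python
# Illustrative sketch only: the keys and apply_lens() are my own
# shorthand, not part of the published Pygmalion Lens; question texts
# are abbreviated from the table above.

PYGMALION_LENS = {
    "feminised_form": "Does the AI portray a hegemonically feminine character?",
    "whitened_form": "Does the AI portray an inherently white, Western, Eurocentric character?",
    "dislocation_from_work": "Does the AI displace women, or people from roles coded as women's work?",
    "humanisation_via_feminisation": "Are the AI's claims to personhood contingent on feminine stereotypes?",
    "competition_with_women": "Is the AI pitted against women in ways that favour it and harm them?",
    "diminishment_via_false_equivalence": "Does the AI facilitate rhetoric deeming women less deserving of personhood?",
    "obfuscation_of_diversity": "Does the AI 'neutralise' (whiten, masculinise) a role, vocation, or skill?",
    "robot_rights": "Is the AI granted (aspects of) legal personhood or human-like rights?",
    "social_bonding": "Do users/creators develop interpersonal-like relationships with the AI?",
    "psychological_service": "Does the AI subserve and enhance the egos of its creators and/or users?",
}

def apply_lens(answers):
    """Tally yes answers and report any lens questions left unanswered.

    `answers` maps question keys to booleans (True = yes).
    """
    unanswered = [q for q in PYGMALION_LENS if q not in answers]
    yes_count = sum(bool(answers.get(q)) for q in PYGMALION_LENS)
    return yes_count, unanswered
```

For instance, a system answered yes on `feminised_form` and `social_bonding` alone would tally 2 of 10, with no questions left unanswered.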


overview if useful! olivia.science/ai/#pygmalion

24.02.2026 12:01 👍 8 🔁 5 💬 0 📌 1
We've been here before! Parallels between AI and tobacco, and other warnings.

Yes, Sarah!

"We have certainly been here before. Many many times in the past, companies — just like artificial intelligence (AI) companies now — have lied to us to sell us products."
olivia.science/before/

25.02.2026 06:07 👍 14 🔁 5 💬 1 📌 0
Critical AI
On this page are some resources for Critical AI Literacy (CAIL) from my perspective.

if useful olivia.science/ai/ — thanks for supporting us!

25.02.2026 15:19 👍 4 🔁 1 💬 0 📌 1

perplexity doing this also manages to erase human women computers yet again LMAO what fucking clowns

if you need to see what and why, see: olivia.science/ai/#pygmalion; full pdf here: doi.org/10.31235/osf...

26.02.2026 16:47 👍 24 🔁 9 💬 0 📌 0
Getting Past Past-Tense

> [ANNs] are not perfect: they are not really explainable, they are not pliable, i.e., they cannot be easily modified to correct any errors observed, and they are not efficient due to the overhead of decoding. In contrast, rule-based methods are more transparent to subject matter experts; they are amenable to having a human in the loop through intervention, manipulation and incorporation of domain knowledge; and further the resulting systems tend to be lightweight and fast. (Chiticariu et al. 2023, p. iii)

In what is known in the literature as the past-tense debate (e.g., Elman et al., 1996; Pinker & Ullman, 2002), cognition and its underpinning substrates were discussed in terms of whether hard-wired capacities, such as grammatical rules for English past-tense formation, are encoded in the genes or otherwise without learning. Furthermore, claims were made about connectionist systems, such as, ANN “models cannot deal with languages such as Hebrew, where regular and irregular nouns are intermingled in the same phonological neighborhoods” (Pinker & Ullman, 2002, p. 459). While it may have been true for models at the time that certain data sets were unlearnable, or specific nondeep ANNs had limited learning abilities due to their architecture or training set or regimen, this both does not hold in the present day for certain data sets (discussed below) and continues to hold in the sense that there are data sets that are inaccessible to modeling endeavors using ANNs (see proof in van Rooij et al., 2024). Work such as Zhang et al. (2016, 2017) can serve to neutralize the claim that ANNs might struggle with certain unstructured data sets, for example, “where regular and irregular nouns are intermingled” (Pinker & Ullman, 2002, p. 459), by demonstrating that ANNs can learn utterly random mappings between inputs and outputs. Of course, such a finding about ANNs is also problematic to C-connectionists, who propose that in many cases similar input–output…
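The Zhang et al. (2016, 2017) point invoked in the excerpt, that ANNs can learn utterly random mappings between inputs and outputs, can be illustrated with a toy network. A minimal sketch, assuming nothing from the original papers: the architecture, data size, and hyperparameters below are my own choices, and the network is a tiny two-layer model that memorises labels assigned uniformly at random to eight inputs, a mapping with no structure to exploit.

```python
import math
import random

# Toy illustration only (not Zhang et al.'s actual setup): a small
# two-layer network fits labels drawn completely at random, i.e. it
# learns an arbitrary input-output mapping by pure memorisation.
random.seed(0)

# Eight distinct 3-bit inputs with uniformly random labels: no rule
# to discover, only memorisation.
X = [[(i >> b) & 1 for b in range(3)] for i in range(8)]
y = [random.random() < 0.5 for _ in range(8)]

H = 16  # hidden units: far more capacity than eight points require
W1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(W1, b1)]
    z = sum(w * h for w, h in zip(W2, hidden)) + b2
    return hidden, 1.0 / (1.0 + math.exp(-z))  # sigmoid output

def mean_loss():
    return sum((forward(x)[1] - float(t)) ** 2 for x, t in zip(X, y)) / len(X)

loss_before = mean_loss()

lr = 0.5
for _ in range(2000):  # plain per-sample gradient descent on squared error
    for x, target in zip(X, y):
        hidden, out = forward(x)
        d_out = (out - float(target)) * out * (1.0 - out)
        for j in range(H):
            d_h = d_out * W2[j] * (1.0 - hidden[j] ** 2)  # backprop through tanh
            W2[j] -= lr * d_out * hidden[j]
            for i in range(3):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

loss_after = mean_loss()
accuracy = sum((forward(x)[1] > 0.5) == t for x, t in zip(X, y)) / len(X)
```

The same network would of course also fit labels generated by a rule; the point is only that the randomness of the mapping is no obstacle given enough capacity and training.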


The relevant section is here on page 10 "getting past past-tense" see pdf here and it's not that long, but longer than extract below: olivia.science/doc/GuestMar...

Guest, O. & Martin, A. E. (2025). A Metatheory of Classical and Modern Connectionism. Psychological Review. doi.org/10.1037/rev0...

04.03.2026 06:05 👍 7 🔁 2 💬 1 📌 1
We've been here before! Parallels between AI and tobacco, and other warnings.

Yes! Source is also why tobacco companies hide their continued (so-called) scientific contributions to cancer research and source is why their research is banned in journals

"The AI and generally the technology industry is no different. Why should they be?"

olivia.science/before/

26.02.2026 18:07 👍 21 🔁 7 💬 0 📌 0
licensing

Yup!

> I forbid use of my words, code, or images for “training” their artificial neural network models. Such actions are in violation of the licenses outlined above as they require correct attribution.

olivia.science/license/

04.03.2026 23:09 👍 10 🔁 1 💬 0 📌 0

The saddest part is, we never properly dealt with tobacco and petroleum which is why we cannot with AI either...

olivia.science/before

bsky.app/profile/oliv...

02.03.2026 04:46 👍 45 🔁 10 💬 1 📌 0
Theory
What is a cognitive scientific theory?

some resources for cognitive scientists — especially for junior scholars who ask me wonderful questions and want to learn more — on theorising and metatheorising olivia.science/theory/ (not 100% finished & more to come, but all my work is freely available here: olivia.science#publications as usual)

01.03.2026 15:13 👍 74 🔁 32 💬 1 📌 3

Nice timing for @mjcrockett.bsky.social and my article on AI Surrogates and Illusions of Generalizability to be officially published. www.cell.com/trends/cogni...

06.03.2026 14:22 👍 15 🔁 8 💬 0 📌 0
America’s First War in Age of LLMs Exposes Myth of AI Alignment
The military is turning to tools that relieve the burden of conscience and function like a moral sedative, writes Eryk Salvaggio.

The Trump administration’s escalating campaign in Iran marks the beginning of America’s first war in the age of large language models. These events make clear that those who work on AI safety must confront the limits of so-called “alignment to human values,” writes Eryk Salvaggio.

06.03.2026 14:31 👍 12 🔁 2 💬 0 📌 1
California colleges spend millions on faulty AI chatbots
Community colleges are spending millions on AI chatbots that students say often give inaccurate answers. Many might see upgrades soon.

“In testing by CalMatters, they [the chatbots] often answered general questions correctly but struggled with more specific ones. East Los Angeles College’s bot couldn’t even correctly name its own president.”

06.03.2026 14:56 👍 45 🔁 11 💬 4 📌 5

I’m especially annoyed at @luxottica.bsky.social who made the RayBan frames for Meta glasses impossible to distinguish from regular glasses. Glad they’re being sued for this.

06.03.2026 15:22 👍 116 🔁 42 💬 5 📌 2

the harder the industry invests in pushing narratives of AI as (only) positive, inevitable, and inherently good for society/business, the more any criticism of this narrative becomes “too radical”, “unworkable” and “unrealistic”

06.03.2026 15:40 👍 160 🔁 34 💬 0 📌 6