
Giosuè Baggio

@giosuebaggio

Cognitive scientist at NTNU · www.ntnu.edu/employees/giosue.baggio · Author of ‘Meaning in the Brain’ and ‘Neurolinguistics’ @mitpress.bsky.social 🗣️🧠🤖

716 Followers · 713 Following · 91 Posts · Joined 22.09.2023

Latest posts by Giosuè Baggio @giosuebaggio

To build an electronic computer, engineers construct stable states out of dynamical systems (flip-flop circuits), such that the dynamical nature of the underlying hardware can be entirely ignored: computation occurs as transitions between computational states, entirely shielded from the dynamics of electrons. No such shielding exists in brains. Thus, biological cognition cannot be reduced to elementary computations supposedly implemented by neurons. Rather, computation is an elaborate form of cognition.

Computation is a particular kind of cognitive activity. It does not follow that cognition is entirely made of tiny computations (as cognitivism would have us believe).

press.princeton.edu/books/paperb...

24.02.2026 08:12 👍 10 🔁 3 💬 0 📌 0
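The 'shielding' idea in the post above can be illustrated with a minimal sketch (a caricature, not a circuit-accurate model): two mutually inhibiting units whose continuous dynamics settle into one of two stable fixed points, which we then read off as a discrete bit while discarding the trajectory entirely. The `settle` function and its parameters are illustrative choices, not from the post.

```python
import math

def sigmoid(u, gain=8.0):
    """Steep activation function; high gain makes the two attractors well separated."""
    return 1.0 / (1.0 + math.exp(-gain * u))

def settle(x, y, steps=2000, dt=0.01, bias=0.5):
    # Mutual inhibition: each unit suppresses the other, so the continuous
    # dynamics have two stable fixed points (x high / y low, and vice versa).
    for _ in range(steps):
        dx = -x + sigmoid(bias - y)
        dy = -y + sigmoid(bias - x)
        x += dt * dx
        y += dt * dy
    # Read out a discrete state, ignoring the underlying trajectory entirely.
    return 1 if x > y else 0

print(settle(0.6, 0.4))  # -> 1 (basin of the first attractor)
print(settle(0.4, 0.6))  # -> 0 (basin of the second attractor)
```

Whatever the electron-level (here: real-valued) details of the trajectory, only the final basin matters to the computational description — which is precisely the shielding the post says brains lack.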

Bots have made their way into Prolific experiments. Our lab has now stopped online testing of adults entirely for this reason: we want to know whether what we study is real. Data collected 2-3 years ago are probably fine, but moving forward we just can't know. www.pnas.org/doi/10.1073/...

19.02.2026 15:14 👍 170 🔁 98 💬 6 📌 11

Neurolinguistics in Sweden (NLS) 2026 — abstracts due 20 February! 📝 Conference at Stockholm University, June 11-12 @stockholm-uni.bsky.social Keynotes: Esti Blanco-Elorrieta, Leonardo Fernandino & me — Get your submission in! 🇸🇪🧠🗣️

www.su.se/english/divi...

07.02.2026 09:59 👍 3 🔁 1 💬 0 📌 0
DIPSCO - Department of Psychology and Cognitive Science - Call for applications for 1 post-doc position under Art. 22 bis L. 240/2010 (Decree 29/2026) | Work with us

The Marica De Vincenzi Foundation is inviting applications for its post-doctoral fellowship! The fellowship offers up to 2 years of post-doctoral support abroad for Italian psycholinguists. Spread the word! Deadline is 3/10 - see info here: nam10.safelinks.protection.outlook.com?url=https%3A...

29.01.2026 15:02 👍 3 🔁 4 💬 1 📌 1

«Nobody (...) has claimed that DeepMind’s AlphaFold is conscious, even though, under the hood, it is rather similar to an LLM. (...) AlphaFold, which predicts protein structure rather than words, just doesn’t pull our psychological strings in the same way.» @anilseth.bsky.social @noemamag.com

18.01.2026 18:13 👍 6 🔁 0 💬 0 📌 1

A combination of semantic internalism and role-play fictionalism is a promising framework. That machines are capable of meaning is a necessary fiction, sustained by human semantic cognition—from the generation of training data to the interpretation of machine outputs. 20/20

16.01.2026 14:05 👍 2 🔁 0 💬 0 📌 0

The extent of ‘role play’ is significant: we pretend simulacra are temporary, atypical members of linguistic communities to evaluate their claims for truth and other norms, and we fill in cognitively for them. 19/20

16.01.2026 14:05 👍 2 🔁 0 💬 1 📌 0

Referential attributions to LMs depend entirely on human willingness to sustain the fiction of community membership and the continuous cognitive supplementation it requires from human interpreters. 18/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

We propose that simulacra function as atypical members of linguistic communities. Atypical because they rely on humans to do the ‘cognitive work’ behind reference and because of how they are limited by their own bounds of sense and reference. 17/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

LMs lack the mental structures that, in humans, explain referential capacities. But human cognition suffices to explain how LMs’ words are routinely interpreted as having meaning and reference. 16/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

A ‘role play’ perspective cannot endow machine outputs with referential properties: neither the LM nor the simulacra it supports occupy the sorts of deictic spaces that would make ‘us’, ‘here’, etc. pick out referents. 15/20

www.nature.com/articles/s41...

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Causal histories cannot ground the reference relation and cannot explain the referential limitations of LMs. Those limitations may only be explained internalistically, by appealing to constraints on LMs’ architectures. 14/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Key claim: *The bounds of sense and reference are not the same for humans and for (different kinds of) machines* 13/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

First-person uses of these expressions would be meaningless when generated by an LM—which raises the question whether the same expressions in other grammatical persons could then have sense and reference. 12/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Languages also include non-deictic expressions whose meaning requires a situated speaker with specific bodily and mental characteristics: ‘remember’, ‘walk’, ‘seem’, ‘heavy’, ‘distant’… 11/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

LMs cannot occupy deictic spaces or establish the spatial, temporal, and social coordinates necessary for indexical reference. 10/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Obstacle 2 🚧 Embodiment. Though human speakers may occasionally fail to establish reference through indexicals (‘here’, ‘now’, ‘us’ etc.), such failures in LMs are architectural and systematic. 9/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Problem 2: An account in which learners must deploy internal resources to reconstruct meaningful linguistic units from arbitrary parts, and reconnect to usage chains only in virtue of such reconstruction, is no longer an externalist story. 8/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Problem 1: If causal-historical chains are doing the explanatory work, then disruptions in those chains would undermine the explanation. 7/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Subword tokenization creates arbitrary subword strings driven by statistical frequency, rather than lexical structure and meaning. This creates two problems for externalist accounts of machine reference. 6/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0
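The mismatch between tokens and meaningful units described in the posts above shows up even in a toy byte-pair-encoding sketch (an illustrative reimplementation, not any particular LM's tokenizer; the corpus and function names are invented for the example): the most frequent character pair in this tiny corpus is 'gg', so the first learned token straddles the morpheme boundary in 'hug + ing' instead of tracking it.

```python
from collections import Counter

def bpe_merges(corpus, num_merges):
    """Toy BPE: start from characters, repeatedly merge the most frequent adjacent pair."""
    words = [list(w) for w in corpus]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        new_words = []
        for w in words:
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and w[i] == a and w[i + 1] == b:
                    out.append(a + b)  # apply the learned merge
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            new_words.append(out)
        words = new_words
    return merges, words

merges, segmented = bpe_merges(["hugging", "hugged", "jogging", "jogged"], 1)
print(merges)        # -> ['gg']
print(segmented[0])  # -> ['h', 'u', 'gg', 'i', 'n', 'g']
```

The learned token 'gg' is driven purely by frequency: it is neither a morpheme nor a word, so it is not the kind of expression that could carry a causal history of use.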

Obstacle 1 🚧 Words vs tokens. LMs’ tokens might not correspond to the (parts of) expressions that have causal histories, such as characters, morphemes, or words. 5/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

But the causal-historical apparatus, developed for proper names and natural kind terms, encounters two main obstacles in applications to machines. 4/20

16.01.2026 14:05 👍 0 🔁 0 💬 1 📌 0

Denying that machine speech and text may be taken to refer, or be about individuals or states of affairs, risks undermining the public enterprise of evaluating LMs’ outputs for pertinence, truth, and other norms. 3/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0

Why does reference matter? Because we ought to be able to judge whether LMs’ outputs are relevant to a given topic, plausible, true, etc., and we should extend similar judgments to entailed, presupposed, and implicated meanings. 2/20

16.01.2026 14:05 👍 1 🔁 0 💬 1 📌 0
On the referential capacity of language models: An internalist rejoinder to Mandelkern & Linzen Abstract. Mandelkern and Linzen (2024) argue that words generated by language models (LMs) are linked to causal histories of use within human linguistic communities, and ultimately to their referents....

New paper out in @complingjournal.bsky.social 📄 With @elliot-murphy.bsky.social we respond to Mandelkern & Linzen on whether LMs’ words refer. Their paper prompted us to develop our own positive (internalist) account of machine reference. 🧵 1/20

doi.org/10.1162/COLI...

16.01.2026 14:05 👍 9 🔁 1 💬 1 📌 0

«Meaning is encoded in the brain in multiple formats: as stable, long-term representations of word meanings and as flexible, short-term structures that connect those meanings to sentence roles and to what is being referred to in the current context.»

26.12.2025 15:01 👍 10 🔁 1 💬 0 📌 0

«Into this context—a scientific system already optimized for measurable output, already decades into goal displacement, already reshaping research priorities around metrics rather than problems—arrive large language models.

They did not arrive as disruptors. They arrived as intensifiers.»

16.12.2025 09:32 👍 3 🔁 1 💬 0 📌 0

Happy to reach new readers with this Spanish translation of ‘Neurolinguistics’! Thanks to the translators, illustrators, and editors at Ediciones UC for this excellent work 🧠🗣️📘🇨🇱 @edicionesuc #neuroscience #brain #language
lea.uc.cl/neurolinguis...

13.12.2025 10:09 👍 3 🔁 1 💬 0 📌 0

A congenial essay by Michael Clune, modulo its assumptions about AI’s persistence:

“The skills that future graduates will most need in the AI era (…) are precisely those that are likely to be eroded by inserting AI into the educational process.” @theatlantic.com

www.theatlantic.com/ideas/2025/1...

30.11.2025 11:30 👍 6 🔁 0 💬 0 📌 0

“Current LMs have limited linguistic common sense (…) the capacity to retrieve and exploit the kind of linguistic and world knowledge that would allow them to reliably make sense of complex, underspecified inputs.”

Link to the published paper: aclanthology.org/2025.iwcs-ma...

22.11.2025 13:34 👍 6 🔁 0 💬 0 📌 0