
Matan Mazor

@matanmazor

Interested in complex systems and in simple systems who believe they are complex systems. Leader of the Oxford Self-Modelling Group (Dept. of Experimental Psychology, University of Oxford).

3,556 Followers
254 Following
573 Posts
Joined 16.09.2023

Latest posts by Matan Mazor @matanmazor

Link seems blocked (occluded!) for non-UPENN people

11.03.2026 17:37 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Academics Need to Wake Up on AI
Ten theses for folks who haven't noticed the ground shifting under their feet

Sorry, Bluesky, but I have to say it: AI can already do social science research better than most professors with PhDs. And, for the first time in my life, I really have no idea what happens in five years.

Things are changing already, we just need to wake up.

03.03.2026 00:08 πŸ‘ 187 πŸ” 34 πŸ’¬ 312 πŸ“Œ 279
Keep calm and be transparent: advice from scientists who retracted their papers
Retractions correct the scientific record, but they have stigma attached to them. Some in the research community want that to change.

We're thrilled to announce the Ctrl-Z Award, a US$2,500 prize for researchers β€œwho discover substantial errors in their published work and take meaningful steps to correct the scientific record."
Covered by @nature.com today; read more here: centerforscientificintegrity.org/2026/03/10/a...

10.03.2026 15:37 πŸ‘ 458 πŸ” 192 πŸ’¬ 6 πŸ“Œ 22

Unfortunately for me too, it would have been great to see you there. This talk will not be recorded, but I hope we'll have something written soon that I can share.

10.03.2026 18:42 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Public communication alters private confidence
Andreassen et al. demonstrate that confidence exhibited in public affects our private assessment of confidence.

How does uncertainty transmit from one head to another? Our new paper out today in @currentbiology.bsky.social reveals how public communication alters private confidence.

w/ Einar Andreassen & @cdfrith.bsky.social

@birkbeckpsychology.bsky.social
@leverhulme.ac.uk

πŸ§ πŸ“ˆ

09.03.2026 16:20 πŸ‘ 58 πŸ” 23 πŸ’¬ 0 πŸ“Œ 0

Thanks Steve! Looking forward to this.

09.03.2026 19:30 πŸ‘ 9 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Lab AI Policy | Todd Gureckis
Clear expectations for how every member of our lab should use generative AI tools responsibly, transparently, and in a way that upholds rigorous, reproducible, open science.

draft lab ai policy, feel free to use, modify, or discuss! todd.gureckislab.org/2026/03/06/g...

07.03.2026 02:18 πŸ‘ 50 πŸ” 12 πŸ’¬ 1 πŸ“Œ 1

This seems right to me.

04.03.2026 13:42 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Online Studies
Psychological Science requires that authors who use samples from online data collection include a statement in the Method section explicitly addressing their approach to preventing and detecting automated or AI-generated responses.

Rationale

As large language models and other generative AI tools become more accessible, the risk of data contamination by non-human respondents has increased dramatically in research. Psychological science (and the social sciences generally) is particularly susceptible to this issue given its growing reliance on online data collection. Preventing automated responses during data collection and detecting them afterward often involve methodological trade-offs. For instance, technical barriers that aim to prevent LLM use (e.g., blocking copy-pasting functionalities) may eliminate behavioral indicators needed for detection (e.g., pasting rather than typing). This policy aims to enhance transparency and reproducibility of reported results by requiring authors to articulate their approach across both prevention and detection dimensions, enabling readers and reviewers to assess the likelihood of reported data being influenced by automated responses.

Scope

This policy applies to any submission with at least one study that includes data collected online without direct human supervision (e.g., via crowdsourcing platforms, student participants who complete the study online, online recruitment ads, or remote survey distribution tools).

Required Reporting

Authors must include in the Methods section either:

A statement confirming that procedures were in place to prevent and/or detect and exclude automated or AI-generated responses, including a description of those procedures (e.g., explicit participant instructions against LLM use, disabled copy–paste functionality, CAPTCHA use, IP filtering, consistency checks, attention checks, adversarial prompting) as well as the types of automated responses that these procedures are suitable …
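The detection procedures the policy names (consistency checks, attention checks, behavioral indicators such as pasting rather than typing) can be combined into a simple post-hoc screen. A minimal sketch of that idea follows; the field names and thresholds are illustrative assumptions, not part of the policy:

```python
# Hypothetical post-hoc screen of the kind the policy asks authors to
# describe: flag online responses whose metadata is consistent with
# automated or LLM-assisted completion. Field names and thresholds are
# illustrative, not prescribed by the policy.

responses = [
    {"id": "p01", "rt_seconds": 48.0, "pasted": False, "attention_ok": True},
    {"id": "p02", "rt_seconds": 3.5,  "pasted": True,  "attention_ok": True},
    {"id": "p03", "rt_seconds": 60.2, "pasted": False, "attention_ok": False},
]

MIN_RT = 5.0  # free-text answers faster than this are implausible for humans

def flag(resp):
    """Return the list of reasons a response should be reviewed or excluded."""
    reasons = []
    if resp["pasted"]:
        reasons.append("paste event")        # text was pasted, not typed
    if resp["rt_seconds"] < MIN_RT:
        reasons.append("implausibly fast")   # below human reading/typing time
    if not resp["attention_ok"]:
        reasons.append("failed attention check")
    return reasons

excluded = {r["id"]: flag(r) for r in responses if flag(r)}
print(excluded)
```

Note the trade-off the policy highlights: disabling pasting as a prevention measure would remove the `pasted` signal this detection step relies on, so a submission should state which side of that trade-off it chose.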


Maybe of interest: The submission guidelines of Psychological Science now demand an explicit statement on measures taken to reduce the risk of AI-generated responses for all online studies!

www.psychologicalscience.org/publications...

25.02.2026 12:08 πŸ‘ 124 πŸ” 53 πŸ’¬ 1 πŸ“Œ 0
Postdoctoral researcher on single neuron recordings for consciousness
Contract period: 18 months. Expected date of employment: June 2026. Proportion of work: Full time. Salary: according to INSERM scales (~2900 – 3300 € gross / month based on experience). Desired level of educa...

#hiring
Come work with us to better understand the neuronal mechanism underlying perceptual consciousness!

18-month postdoctoral position at INSERM in Grenoble, France.

βŒ›Application deadline 10 March 2026.

euraxess.ec.europa.eu/jobs/408445

24.02.2026 20:50 πŸ‘ 29 πŸ” 24 πŸ’¬ 1 πŸ“Œ 2
The age of animal experiments is waning. Where will science go next?
Advances in organ and computer models are raising the prospect that some animal experiments could be eliminated. But there are still huge hurdles to overcome.

Ethical and animal-welfare concerns have long fuelled efforts to curb animal use in research β€” and now rapid advances in alternative scientific methods are accelerating the shift

go.nature.com/3P0OtCB

25.02.2026 12:04 πŸ‘ 44 πŸ” 17 πŸ’¬ 1 πŸ“Œ 5
Book cover. A silhouette of a person's head filled with colorful geometric shapesβ€”perhaps symbolizing cognitive resources or deployment thereof. The style is attractive and modern, if generic.

text: 
The Rational Use of Cognitive Resources
Falk Lieder, Frederick Callaway, Thomas L. Griffiths


I'm excited to announce that I had my first (co-authored) book published today! "The Rational Use of Cognitive Resources" with Falk Lieder and Tom Griffiths (@cocoscilab.bsky.social ). You can read it for free! (see thread)

18.02.2026 01:05 πŸ‘ 142 πŸ” 45 πŸ’¬ 2 πŸ“Œ 0

@summerfieldlab.bsky.social and I are very happy to share this paper! Building on work by @scychan.bsky.social, we show that how people learn depends on the distribution of examples they see, and changes in a way that’s very similar to transformer models.

06.01.2026 11:16 πŸ‘ 20 πŸ” 8 πŸ’¬ 1 πŸ“Œ 0

πŸ‘€

14.02.2026 17:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Contents of visual predictions oscillate at alpha frequencies
Predictions of future events have a major impact on how we process sensory signals. However, it remains unclear how the brain keeps predictions online in anticipation of future inputs. Here, we combin...

@dotproduct.bsky.social's first first author paper is finally out in @sfnjournals.bsky.social! Her findings show that content-specific predictions fluctuate with alpha frequencies, suggesting a more specific role for alpha oscillations than we may have thought. With @jhaarsma.bsky.social. 🧠🟦 πŸ§ πŸ€–

21.10.2025 11:05 πŸ‘ 113 πŸ” 44 πŸ’¬ 7 πŸ“Œ 3
Video thumbnail

Very happy to see "Pretending not to know reveals a capacity for model-based self-simulation", a collaboration with @chazfirestone.bsky.social and @ianbphillips.bsky.social, out in Psych. Science!

journals.sagepub.com/doi/10.1177...

🧡

10.02.2026 17:25 πŸ‘ 67 πŸ” 30 πŸ’¬ 1 πŸ“Œ 3
Post image

New Preprint with @matanmazor.bsky.social: Overcoming both bias and sycophancy requires LLMs to imagine not knowing something they know. Like humans, they struggle with this. But unlike humans, LLMs can do something remarkable: they can, quite simply, *ask their counterfactual selves*.

10.02.2026 17:24 πŸ‘ 10 πŸ” 4 πŸ’¬ 1 πŸ“Œ 2
APA PsycNet

Perhaps relevant?

psycnet.apa.org/buy/2019-730...

12.02.2026 21:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Happy birthday to one of my favourite haters, Charles Darwin

12.02.2026 16:31 πŸ‘ 10351 πŸ” 3081 πŸ’¬ 162 πŸ“Œ 419
Video thumbnail

Today, UCL turns 200. πŸŽ‰ For two centuries, our community has opened doors, challenged convention and pushed the boundaries of knowledge across every discipline. Thank you to everyone who’s been part of the UCL story, here’s to the next century. ✨

#UCL200 #LoveUCL

11.02.2026 08:04 πŸ‘ 139 πŸ” 80 πŸ’¬ 3 πŸ“Œ 26
Post image

checking the status of my manuscript

11.02.2026 14:03 πŸ‘ 130 πŸ” 15 πŸ’¬ 2 πŸ“Œ 0

Introducing β€œPretend Battleship”: you’re told where all the ships are but then have to play like you never got that information. Could you do it? And what would your performance reveal about your understanding of your own mind? A joy to be part of this creative project led by @matanmazor.bsky.social

10.02.2026 20:40 πŸ‘ 30 πŸ” 8 πŸ’¬ 0 πŸ“Œ 1

We don't have this functionality yet, but it's something I'm working on! Did you do pretend-Hangman? Do you remember the word you had in the half-game?

11.02.2026 12:51 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

But in both bias and sycophancy scenarios, LLMs can do something amazing that people cannot. They can use their own API to simply *ask themselves* the question with the relevant information redacted. We show this literal self-simulation vastly outperforms prompting.
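The self-simulation move described in the post reduces to a simple pattern: redact the privileged information from the prompt and issue a fresh, independent call. A toy sketch, in which `ask_model` is a stub standing in for a real LLM API call and the "bias" is hard-coded for illustration:

```python
# Toy sketch of the "ask your counterfactual self" idea: rather than asking
# the model to pretend it never saw some information, remove that
# information from the prompt and make a second, independent call.
# `ask_model` is a stub; a real implementation would call an LLM API here.

def redact(prompt, secret, mask="[REDACTED]"):
    """Strip the privileged information before the counterfactual call."""
    return prompt.replace(secret, mask)

def ask_model(prompt):
    # Stub "model" that is biased by construction: once the hint is in
    # context, it leaks into the answer even when told to ignore it.
    return "B" if "the answer is B" in prompt else "no idea"

hint = "the answer is B"
prompt = f"(Hint: {hint}.) Ignoring the hint, what is the answer?"

pretend = ask_model(prompt)                       # sees the hint; can't unsee it
counterfactual = ask_model(redact(prompt, hint))  # genuinely hint-free call
print(pretend, counterfactual)
```

The contrast between the two calls is the point: prompting the same context to "ignore" the hint fails, while the redacted second call never had the hint to begin with.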

10.02.2026 17:24 πŸ‘ 2 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

Give it a try here!
self-model.github.io/pretendingNo...

10.02.2026 17:54 πŸ‘ 9 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0

e.g. see @brianchristian.bsky.social's hot-off-the-press thread about our latest preprint, showing that, unlike humans, large language models can literally step behind the veil of ignorance using self-calling:
bsky.app/profile/bria...

10.02.2026 17:33 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Oxford Self-Modelling Group
Dr Matan Mazor

We currently pursue some of these directions, + additional ones, at the Oxford Self-Modelling Group:

www.psy.ox.ac.uk/research/oxf...

10.02.2026 17:31 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

legal settings ("please ignore this last witness's testimony"), and more. For this reason, understanding what people can and cannot accurately simulate has important societal consequences.

10.02.2026 17:25 πŸ‘ 7 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

But also, simulating a state of ignorance is central to effective teaching and communication ("would I have understood my own instructions?"), fairness ("would I have thought this was a good policy if I hadn't known it benefited me?"),

10.02.2026 17:25 πŸ‘ 8 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I think this work is important for two reasons. First, pretending not to know is a promising way to study the scope and limitations of people's models of their own minds: their self-models.

10.02.2026 17:25 πŸ‘ 6 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0