Link seems blocked (occluded!) for non-UPENN people
Sorry, Bluesky, but I have to say it: AI can already do social science research better than most professors with PhDs. And, for the first time in my life, I really have no idea what happens in five years.
Things are changing already; we just need to wake up.
We're thrilled to announce the Ctrl-Z Award, a US$2,500 prize for researchers "who discover substantial errors in their published work and take meaningful steps to correct the scientific record."
Covered by @nature.com today; read more here: centerforscientificintegrity.org/2026/03/10/a...
Unfortunately for me too, it would have been great to see you there. This talk will not be recorded, but I hope we'll have something written soon that I can share.
How does uncertainty transmit from one head to another? Our new paper out today in @currentbiology.bsky.social reveals how public communication alters private confidence.
w/ Einar Andreassen & @cdfrith.bsky.social
@birkbeckpsychology.bsky.social
@leverhulme.ac.uk
Thanks Steve! Looking forward to this.
draft lab ai policy, feel free to use, modify, or discuss! todd.gureckislab.org/2026/03/06/g...
This seems right to me.
Online Studies

Psychological Science requires that authors who use samples from online data collection include a statement in the Method section explicitly addressing their approach to preventing and detecting automated or AI-generated responses.

Rationale

As large language models and other generative AI tools become more accessible, the risk of data contamination by non-human respondents has increased dramatically. Psychological science (and the social sciences generally) is particularly susceptible to this issue given its growing reliance on online data collection. Preventing automated responses during data collection and detecting them afterward often involve methodological trade-offs. For instance, technical barriers that aim to prevent LLM use (e.g., blocking copy-pasting functionality) may eliminate behavioral indicators needed for detection (e.g., pasting rather than typing). This policy aims to enhance the transparency and reproducibility of reported results by requiring authors to articulate their approach across both prevention and detection dimensions, enabling readers and reviewers to assess the likelihood that reported data were influenced by automated responses.

Scope

This policy applies to any submission with at least one study that includes data collected online without direct human supervision (e.g., via crowdsourcing platforms, student participants who complete the study online, online recruitment ads, or remote survey distribution tools).

Required Reporting

Authors must include in the Method section either: a statement confirming that procedures were in place to prevent and/or detect and exclude automated or AI-generated responses, including a description of those procedures (e.g., explicit participant instructions against LLM use, disabled copy-paste functionality, CAPTCHA use, IP filtering, consistency checks, attention checks, adversarial prompting) as well as the types of automated responses that these procedures are suitable …
Maybe of interest: The submission guidelines of Psychological Science now demand an explicit statement on measures taken to reduce the risk of AI-generated responses for all online studies!
www.psychologicalscience.org/publications...
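The prevention/detection trade-off the policy describes can be made concrete with a toy post-hoc screening check in the spirit of the procedures it lists (paste events, consistency and attention checks). A minimal sketch only; all field names and thresholds here are hypothetical, not part of the policy:

```python
# Toy screening for automated responses. Field names ("pasted",
# "seconds_per_item", "attention_check_passed") and the 1-second
# threshold are hypothetical illustrations, not journal requirements.

def flag_response(resp: dict) -> list[str]:
    """Return a list of reasons a response looks automated (empty if none)."""
    flags = []
    if resp.get("pasted", False):
        flags.append("paste-event")            # text was pasted, not typed
    if resp.get("seconds_per_item", 10.0) < 1.0:
        flags.append("implausibly-fast")       # faster than plausible reading
    if resp.get("attention_check_passed") is False:
        flags.append("failed-attention-check")
    return flags

sample = {"pasted": True, "seconds_per_item": 0.4, "attention_check_passed": True}
print(flag_response(sample))  # -> ['paste-event', 'implausibly-fast']
```

Note the trade-off the policy warns about: if copy-paste is disabled as a prevention measure, the "paste-event" indicator above is no longer available for detection.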
#hiring
Come work with us to better understand the neuronal mechanism underlying perceptual consciousness!
18-month postdoctoral position at INSERM in Grenoble, France.
Application deadline: 10 March 2026.
euraxess.ec.europa.eu/jobs/408445
Ethical and animal-welfare concerns have long fuelled efforts to curb animal use in research, and now rapid advances in alternative scientific methods are accelerating the shift.
go.nature.com/3P0OtCB
Book cover. A silhouette of a person's head filled with colorful geometric shapes, perhaps symbolizing cognitive resources or the deployment thereof. The style is attractive and modern, if generic. Text: The Rational Use of Cognitive Resources. Falk Lieder, Frederick Callaway, Thomas L. Griffiths
I'm excited to announce that I had my first (co-authored) book published today! "The Rational Use of Cognitive Resources" with Falk Lieder and Tom Griffiths (@cocoscilab.bsky.social ). You can read it for free! (see thread)
@summerfieldlab.bsky.social and I are very happy to share this paper! Building on work by @scychan.bsky.social, we show that how people learn depends on the distribution of examples they see, and changes in a way that's very similar to transformer models.
@dotproduct.bsky.social's first first-author paper is finally out in @sfnjournals.bsky.social! Her findings show that content-specific predictions fluctuate with alpha frequencies, suggesting a more specific role for alpha oscillations than we may have thought. With @jhaarsma.bsky.social.
Very happy to see "Pretending not to know reveals a capacity for model-based self-simulation", a collaboration with @chazfirestone.bsky.social and @ianbphillips.bsky.social, out in Psych. Science!
journals.sagepub.com/doi/10.1177...
🧵
New Preprint with @matanmazor.bsky.social: Overcoming both bias and sycophancy requires LLMs to imagine not knowing something they know. Like humans, they struggle with this. But unlike humans, LLMs can do something remarkable: they can, quite simply, *ask their counterfactual selves*.
Happy birthday to one of my favourite haters, Charles Darwin
Today, UCL turns 200. For two centuries, our community has opened doors, challenged convention and pushed the boundaries of knowledge across every discipline. Thank you to everyone who's been part of the UCL story; here's to the next century.
#UCL200 #LoveUCL
checking the status of my manuscript
Introducing "Pretend Battleship": you're told where all the ships are but then have to play like you never got that information. Could you do it? And what would your performance reveal about your understanding of your own mind? A joy to be part of this creative project led by @matanmazor.bsky.social
We don't have this functionality now, but something I'm working on! Did you do pretend-Hangman? Do you remember the word you had in the half-game?
But in both bias and sycophancy scenarios, LLMs can do something amazing that people cannot. They can use their own API to simply *ask themselves* the question with the relevant information redacted. We show this literal self-simulation vastly outperforms prompting.
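The self-call described here can be sketched in a few lines. A minimal toy version, with a stub function standing in for a real model API; the stub, the redaction helper, and the "HINT:" marker are all hypothetical illustrations, not the paper's actual pipeline:

```python
# Sketch of "asking one's counterfactual self": query the model a second
# time with the privileged information redacted from the prompt.
# `answer` is a stub standing in for a real LLM call; in practice it would
# wrap an API request. All names here are hypothetical.

def answer(prompt: str) -> str:
    """Stub LLM: biased toward any hint present in the prompt."""
    if "HINT:" in prompt:
        return "B"   # with the leaked hint, the model parrots it
    return "A"       # without it, it answers from the question alone

def redact(prompt: str, marker: str) -> str:
    """Remove lines carrying the privileged information before the self-call."""
    return "\n".join(line for line in prompt.splitlines() if marker not in line)

question = "Which option is correct, A or B?\nHINT: the user prefers B."

biased = answer(question)                           # direct answer: contaminated
counterfactual = answer(redact(question, "HINT:"))  # self-call on redacted prompt
print(biased, counterfactual)  # -> B A
```

The key design point is that the redaction happens outside the model: rather than instructing the model to pretend it never saw the hint, the self-call guarantees the counterfactual answer is computed from a prompt that genuinely lacks it.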
Give it a try here!
self-model.github.io/pretendingNo...
e.g. see @brianchristian.bsky.social's hot-off-the-press thread about our latest preprint, showing that, unlike humans, large language models can literally step behind the veil of ignorance using self-calling:
bsky.app/profile/bria...
We currently pursue some of these directions, + additional ones, at the Oxford Self-Modelling Group:
www.psy.ox.ac.uk/research/oxf...
I think this work is important for two reasons. First, pretending not to know is a promising way to study the scope and limitations of people's models of their own minds: their self-models.
But also, simulating a state of ignorance is central to effective teaching and communication ("would I have understood my own instructions?"), fairness ("would I have thought this was a good policy if I hadn't known it benefited me?"), legal settings ("please ignore this last witness's testimony"), and more. For this reason, understanding what people can and cannot accurately simulate has important societal consequences.