@marekmcgann
Cognitive scientist. Teacher. Nerd. Cognitive science of the enactive, ecological, and (redundantly) embodied sort. Also, some stuff on scientific practice in psychology. I co-convene these: https://www.ensoseminars.com (he/him)
Hot tip: if you're tired or distracted and having trouble following a paper you're reading, this trick can also help a great deal with that.
(Not always a good idea if you're sitting in a library of course...)
I started copying @conjh.bsky.social's version of this trick for writing science papers: sit down with the other lead authors and one person reads the manuscript draft out line by line, paragraph by paragraph, then perform live surgery together on shared doc. Incredibly good way to improve the text
If you want to take your mind off awful politics and look at awful science stuff instead, this is a good read: www.sciencedetective.org/scientific-d...
Our investigation actually shows how *difficult* it is to make paid edits on Wikipedia, even when PR firms try. As we wrote in the piece: "To put it simply: it is hard to publish misinformation on Wikipedia."
The truth matters. (2/2)
www.thebureauinvestigates.com/stories/2026...
Even if they did so in compliance with all laws and the Terms of Service, this will bite Proton hard. Their sales pitch is that you can trust them and that accounts are private.
On a practical level, however, it's a reminder that for email as a service, that can't be true.
*Interesting aesthetic assessment there -- when contemporary AI slop becomes nostalgic dead-media, as of course it will
medium.com/ai-in-plain-...
This is a good analogy. Very useful in carefully controlled settings for specific usages, but unfortunately it's being widely inserted into everything, used without training or PPE, and poisoning a lot of things and people, and we're going to pay a tonne of money to remove it safely in the future.
Great post on the emergence of clinical trials!
This section makes me wonder about "professionalization" in science in general. I do feel like many parts of the research process in my field are... surprisingly dilettantish.
Over the past decade, the robustness of the ego-depletion effect has been widely questioned. Possible reasons for variations in the ego-depletion effect may be participant expectations of the demand of the upcoming task and experimenter expectations or demand bias. In three experiments we tested the hypothesis that the ego-depletion effect is partly or exclusively attributable to (i) participants' expectations of the task (Studies 1a and b) and (ii) experimental demand bias (Study 2). In all studies we did not observe a robust ego-depletion effect, and only participants informed that the task was tiring exhibited the effect. Taken together, our findings suggest that participant and experimenter expectations can influence performance in ego-depletion paradigms. However, more research is necessary to determine the extent to which these expectations, rather than other social-motivational factors, drive the effect.
New study finds demand and experimenter bias partly explain the ego-depletion effect
Journal:
doi.org/10.1007/s121...
Open access: papers.ssrn.com/sol3/papers....
By @oulmann.bsky.social, @martinhagger.bsky.social, et al.
#WavyWednesday
"Two figures showing a pencil drawn map with points A and B and a mountain range. On the left, the mountain range is in between points A and B. On the right, the part containing the mountain range has been cut out, showing the author's desk through the hole cut in the map." Figure 12 in: van de Braak, L., van Rooij, I., Dingemanse, M., Toni, I., & Blokpoel, M. (2025). Understanding misunderstanding: How quick-fix solutions undermine explanation. Zenodo. https://doi.org/10.5281/zenodo.17152893
New preprint from the lab!
van de Braak, van Rooij, Dingemanse, Toni & Blokpoel (2025). Understanding misunderstanding: How quick-fix solutions undermine explanation. doi.org/10.5281/zeno...
cc @irisvanrooij.bsky.social @dingemansemark.bsky.social @blokpoel.bsky.social
*You may notice that, even though I read a whole lotta David Gerard, I don't presume to *be* David Gerard. #MargaretCavendishSyndrome #dontdothat #really #scifiwriters
@braininspired.bsky.social talks to @dewitmm.bsky.social, Luis Favela and @diovicen.bsky.social about the trend of neuroscientists importing concepts from ecological psychology, and how an organism's interactions with its environment explain perception and action.
#neuroskyence
bit.ly/3OyNEkp
This is exactly right. The Onion quietly left Twitter a month ago and... our weekly subscribers went up. It's because we're doing well here, on Instagram and on YouTube.
As a business, being on Twitter is somewhere between useless and detrimental, unless you're selling boner pills.
quick shout out to @plos.org
They got rid of their X button: "Share" now lists: facebook, reddit, linkedin and bluesky
Look how easy it is! Now get the hell off X. Delete the icon. Our scientific community has already mostly abandoned the "Nazi bar"
plos.org/our-journals/
*There's something to this, but the "us" is problematic because events have taken a genuine "posthuman turn"
*The "intentional thoughtfulness" is more LLM than human; they hallucinate a lot, but they're all about analyzing, compressing and mimicking that form of behavior
Summer School: Critical AI Literacies for Resisting and Reclaiming irisvanrooijcogsci.com/2026/02/18/s... cc @olivia.science @marentierra.bsky.social
I just did the dumbest thing of my entire career to prove a much more serious point.
I tricked ChatGPT and Google, and made them tell other users I'm a competitive hot-dog-eating world champion
People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it.
I love this statement by @catebridget.bsky.social re AI
Ludger van Dijk and Julian Kiverstein have an interesting paper that's relevant here - doi.org/10.1007/s112... (and I'm working on a somewhat related piece, and have touched on it before here: www.frontiersin.org/articles/10....). The concept of a medium in psychology is underappreciated.
Reads: Most importantly, there is no AI without massive financial and ideological backing. It is therefore pointless to discuss its techniques or capabilities without asking who controls it, who benefits from it, who builds and deploys it, and what it is doing in the world. As Stafford Beer (2002) argued, the purpose of a system is what it does.
Reads: Though less explicit than Thiel's call to replace politics with technology, major tech firms have effectively privatised core digital public goods. Platforms like Facebook, Google Search, and OpenAI's ChatGPT operate at infrastructural scale in Ireland, shaping information, communication, and access to knowledge. Yet their algorithms remain opaque, their governance remains private, with minimal democratic accountability to the public who depend on them; effectively ceding aspects of democratic process to commercial interests. The monopolization of digital spaces has turned democracy into something the highest bidder can buy and is degrading the digital public goods themselves. As the AI industry, social media and search platforms grow more extractive and less trustworthy, they erode the foundations of democratic life: trust, dialogue, and accountability, blurring the line between truth and falsehood. An example is the deepfake video falsely showing President Catherine Connolly withdrawing from the presidential race last October, which amassed over 160,000 Facebook views before being removed. GenAI's non-deterministic, stochastic architecture produces plausible output without regard for accuracy or truth. This makes generative AI a societal disaster and a major threat to truth, democratic processes, information ecosystems, knowledge production, and the social fabric.
Reads: For truth, democracy, and the rule of law to endure in the AI era, we need to cultivate an ecosystem of transparency and accountability. Yet governance by algorithms inherently places our digital public squares and democratic processes in the hands of those building these systems in line with their political and profit-seeking agendas. Without real mechanisms in place, talk of transparency and accountability is an empty gesture. An internal Meta memo outlining plans to launch facial recognition in smart glasses "during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns" illustrates how those advocating for accountability are under-resourced, retaliated against, and targeted. Large tech and AI companies, despite selling promises of innovation and societal benefit, monetize and undermine the very society they claim to serve. What is needed is not just regulation, but active enforcement. Given the track record of tech giants, stricter regulation and enforcement is not "anti-freedom of speech" or anti-competitiveness. It is one of the clearest ways governments can show they serve the public interest. After all, innovation that disregards truth and democratic processes risks undermining democracy itself.
I appeared as an expert witness before the Joint Committee on AI at the Houses of Oireachtas (parliament of Ireland) to discuss "AI: truth and democracy" this morning. You can read my opening statement here: www.oireachtas.ie/en/publicati...
Cúchullain didn't die defending Ireland from Margaret Thatcher at the Battle Of Clontarf for you to put something other than plain, Catholic sugar on your pancakes
"Stochastic Parrot" is such a good explainer for how LLMs work that Sam Altman's only rebuttal was claiming that humans are as well (which is patently false).
An informative metaphor is not "derogatory."
Pleased to share my new (and first sole-authored) article on how people in Northern Ireland experience identity in fluid, complex ways. Interviews in Derry-Londonderry show why "two-community" narratives fall short.
Kudos summary: link.growkudos.com/1e8u77s464g
DOI: psycnet.apa.org/doi/10.1037/...
We absolutely need more researchers throwing themselves around in mosh pits for science.
(I do genuinely think that some level of field research, even just at the early stages, ought to be a prerequisite for a social sciences PhD. Let's touch grass and put the open-air back in open science.)
LLMs' good performance on medical exams does not translate to accurate performance in real-world settings (preregistered study, n ≈ 1,300). The gap can't be explained by current standard benchmarks for medical knowledge & simulated patient interactions.
www.nature.com/articles/s41...
We are demanding that EU co-legislators reject attempts in the AI Omnibus to remove a key transparency safeguard from the AI Act.
We cannot open a loophole that would let providers exempt themselves from the AI Act's high-risk requirements with no transparency. www.accessnow.org/press-releas...
Summer School
"Critical AI Literacies for Resisting and Reclaiming"
Organisers and teachers:
- @marentierra.bsky.social
- @olivia.science
- myself
Deadline for application:
31 March 2026 (early bird fee)
1/🧵
www.ru.nl/en/education...
Step 1) Hype your "AI" as an all-in-one question answering truth machine
Step 2) Any time things go wrong from people using your "AI" as an all-in-one question answering truth machine, tell them they shouldn't have done that; didn't they read the fine print???
Step 3) Truly Disgustingly Egregious Profit
weekly reminder; happy Monday
Have you considered
NOT using
AI?