
Max Puelma Touzel

@mptouzel

Staff Research Scientist @ Mila / complexdatalab.com. Getting at the psycho-social in our digital spaces with models and data, with the aim to make better ones. mptouzel.github.io. Correlated diffusion over AI/ML/(MA)RL/psych/soc/media/pol/econ/energy.

309
Followers
849
Following
221
Posts
22.09.2023
Joined

Latest posts by Max Puelma Touzel @mptouzel

Cool IB application in memory consolidation.

12.03.2026 01:17 👍 0 🔁 0 💬 0 📌 0

Very cool! At once principled/"obv. in hindsight" and creative in framing.

12.03.2026 01:16 👍 0 🔁 0 💬 0 📌 0

Will there be recordings available?

11.03.2026 23:32 👍 0 🔁 0 💬 1 📌 0

Spicy!

11.03.2026 23:31 👍 0 🔁 0 💬 0 📌 0

There's an easy/lazy criticism of the polycrisis (& many whole-system perspectives) as totalizing and galaxy-brained. This misses the point: the value of system perspectives is to surface synergistic effects and subtleties beyond simple cause-effect between individual components. Wallerstein got it right.

09.03.2026 17:30 👍 1 🔁 0 💬 0 📌 0

Hot week for Social/Cultural AI:
- cultural AI workshop in NYC as.nyu.edu/research-cen...
- Social Reasoning and the Ecology of Thought workshop here in Montreal ivado.ca/en/events/so... (attending!)
Cool to see topics germane to our sims/agents/safety WG (Tues 11am):
www.complexdatalab.com/stamina/

09.03.2026 15:29 👍 1 🔁 0 💬 0 📌 0

Leadership

07.03.2026 04:40 👍 1 🔁 0 💬 0 📌 0

Yeah, flows on 2D surfaces are actually more constrained. See the hairy ball theorem, among others.

04.03.2026 20:11 👍 1 🔁 0 💬 1 📌 0
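
For reference, the constraint being invoked (my gloss, assuming the post means the Poincaré–Hopf / hairy ball results):

```latex
% Poincar\'e--Hopf: for a smooth vector field $v$ with isolated zeros on a
% compact surface $S$, the zero indices must sum to the Euler characteristic
% of the surface. For the sphere, $\chi(S^2) = 2 \neq 0$, so the field must
% vanish somewhere (the hairy ball theorem); only $\chi = 0$ surfaces such as
% the torus admit nowhere-zero flows.
\sum_{p \,:\, v(p)=0} \operatorname{ind}_p(v) \;=\; \chi(S), \qquad \chi(S^2) = 2 \neq 0 .
```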

Yikes

03.03.2026 23:48 👍 0 🔁 0 💬 0 📌 0

There will be no apology

03.03.2026 23:42 👍 0 🔁 0 💬 0 📌 0

Where does this go? Any case I can think of diminishes the stature of the US Navy. If they can't do it, are there mercenary orgs with this kind of clout to fill in? I hope not. I also hope this doesn't send a market signal that the US would pay for one.

03.03.2026 20:59 👍 0 🔁 0 💬 0 📌 0

Like you can't exclude the possibility that the interviewee is live prompting your questions and reading off the answers? Really?

03.03.2026 20:44 👍 4 🔁 0 💬 1 📌 0

Words carry little weight w this administration. Amodei felt the DoD will push those red lines no matter what they say, while Altman accepts their "commitment to safety" at face value bc it's convenient for OpenAI to move in. If, down the line, they cross those red lines, Altman is at the center.

28.02.2026 03:48 👍 2 🔁 0 💬 1 📌 0

It's phrased like he has kept some condition and is asking the DoD to accept it, but it is not clear to me what that condition is. That they don't get to host the models ("on cloud networks only")?

28.02.2026 03:28 👍 0 🔁 0 💬 1 📌 0

Kinda bummed that the Trump admin gets to come up with "defective altruism". To whoever it applies: that's a lost opportunity.

28.02.2026 02:02 👍 0 🔁 0 💬 0 📌 0

New paper on a long-shot I've been obsessed with for a year:

How much are AI reasoning gains confounded by expanding the training corpus 10000x? How much LLM performance is down to "shallow" generalisation (approximate pattern-matching to highly-related training data)?

t.co/CH2vP0Y7OF

27.02.2026 17:25 👍 63 🔁 16 💬 1 📌 2

Forced Optimism will continue until morale improves.

26.02.2026 04:31 👍 0 🔁 0 💬 1 📌 0
On the steerability of large language models toward data-driven personas Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented. Here, we present a novel approach to achieve controllable...

I recall this paper starting from exactly this point as a reason to do more than just prompt with demographics. They present a soft-prompting approach based directly on the correlation structure of opinions. arxiv.org/abs/2311.04978

26.02.2026 04:11 👍 1 🔁 0 💬 0 📌 0
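
For readers unfamiliar with soft prompting, here is a minimal sketch of the mechanics: a toy of my own with hypothetical opinion targets, not the method of arxiv.org/abs/2311.04978 (which builds the prompt from the correlation structure of survey opinions). Only a small block of "virtual token" embeddings is trained; the LM stays frozen.

```python
# Minimal soft-prompting sketch (illustrative only; hypothetical data/targets).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # freeze the LM; only the soft prompt is trained

n_virtual = 8  # number of learned "virtual tokens" prepended to every input
emb = model.get_input_embeddings()
soft_prompt = torch.nn.Parameter(emb.weight[:n_virtual].clone())  # init from real tokens
opt = torch.optim.Adam([soft_prompt], lr=1e-3)

# Hypothetical persona-consistent completions we want the steered LM to prefer.
pairs = [("Opinion on policy X:", " strongly agree"),
         ("Opinion on policy Y:", " somewhat disagree")]

for _ in range(50):
    loss = 0.0
    for prompt, target in pairs:
        ids = tok(prompt + target, return_tensors="pt").input_ids
        # Prepend the learned embeddings to the real token embeddings.
        inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), emb(ids)], dim=1)
        # Score only the real tokens; mask the virtual-token positions with -100.
        labels = torch.cat([torch.full((1, n_virtual), -100), ids], dim=1)
        loss = loss + model(inputs_embeds=inputs_embeds, labels=labels).loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```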
STAMINA Working Group - Social Tech And ModellIng for kNowledge & Action

STAMINA Working Group Talk Series
on Agents, Sims, Social Tech, and Safety

📅 Next Tuesday, March 3rd @ 11am ET
🔬 "AI and the Future of Science"
👀 by Martin Weiss, Tiptree Systems

Check out the website for talk abstract and join us by signing up for updates: www.complexdatalab.com/stamina

25.02.2026 17:34 👍 0 🔁 0 💬 0 📌 0
Farming skit featuring David Mitchell describing it as an easy money-making venture. #farming (YouTube video by Agribusiness Insider)

I feel like free-riding, when the norm is not to, is learned from imitation / a lack of overt sanctioning at the individual level, not derived in a planning mode, and so the behaviour somehow escapes rational scrutiny. Lots of behaviour looks silly through a greedy lens, e.g. www.youtube.com/shorts/WuBDq...

25.02.2026 15:18 👍 1 🔁 0 💬 0 📌 0
The Geometry of Prompting: Unveiling Distinct Mechanisms of Task Adaptation in Language Models Decoder-only language models have the ability to dynamically switch between various computational tasks based on input prompts. Despite many successful applications of prompting, there is very limited...

This has a very cool result on in-context learned classification tasks, where they disentangle representational quality (how well-separated the concept labels are) and readout alignment (how well the model reads out its own internal labels). Adding demo examples helps through readout, not representations!

23.02.2026 20:01 👍 36 🔁 5 💬 1 📌 0
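
To make the representation-vs-readout distinction concrete, here is a toy probe comparison of my own (not the paper's setup): hidden states can be perfectly linearly separable while a fixed, misaligned readout direction performs at chance.

```python
# Toy illustration: good representations, bad readout alignment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 32, 500

# Two well-separated "hidden state" clusters (good representational quality).
mu = rng.normal(size=d)
mu /= np.linalg.norm(mu)
X0 = rng.normal(size=(n, d)) + 3 * mu
X1 = rng.normal(size=(n, d)) - 3 * mu
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Representational quality: accuracy of a freshly fitted linear probe.
probe_acc = LogisticRegression(max_iter=1000).fit(X, y).score(X, y)

# Readout alignment: accuracy of a FIXED readout direction the "model" already
# has, here deliberately orthogonalized against the informative direction.
w_fixed = rng.normal(size=d)
w_fixed -= (w_fixed @ mu) * mu
readout_acc = np.mean((X @ w_fixed > 0) == y)

print(f"probe (representation) acc: {probe_acc:.2f}")   # ~1.00
print(f"fixed readout acc:          {readout_acc:.2f}")  # ~0.50
```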
Multi-agent cooperation through in-context co-player inference Achieving cooperation among self-interested agents remains a fundamental challenge in multi-agent reinforcement learning. Recent work showed that mutual cooperation can be induced between "learning-aw...

Really awesome stuff, and in-context best response: arxiv.org/abs/2602.16301

19.02.2026 19:44 👍 7 🔁 2 💬 1 📌 0

It's interesting because it's a strong test of how much persona we can get out of these things: imbue the most "structured expectation" into the context and see what happens.

19.02.2026 17:26 👍 2 🔁 0 💬 0 📌 0

Agreed! Was the OP focussed on isolated fine-tuning? I think in-context learning is sufficient here.

19.02.2026 14:11 👍 0 🔁 0 💬 0 📌 0

Don't you think the feedback mechanism of "AI does bad thing. AI reads people talking about AI doing bad things. AI further caricatures itself and does even more bad things." is at least plausible, perhaps even probable?

19.02.2026 02:13 👍 1 🔁 0 💬 1 📌 0

There are nuances here (follow the thread), but it's an important consideration. John Hopfield just won a Nobel, but like Einstein he could have won it for another thing, in this case "kinetic proofreading", which is how genetics handles the same problem of reliable output from noisy machinery.

16.02.2026 06:30 👍 1 🔁 0 💬 0 📌 0
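
A back-of-envelope version of the mechanism, as I understand Hopfield's 1974 kinetic proofreading argument (my summary, not from the thread):

```latex
% With a free-energy gap $\Delta G$ between right and wrong substrates, a
% single equilibrium discrimination step gives an error fraction
% $f \approx e^{-\Delta G / k_B T}$. Kinetic proofreading inserts an
% irreversible, energy-consuming intermediate so the discrimination is
% applied (nearly) twice, squaring the error rate at the cost of extra
% NTP hydrolysis:
f \approx e^{-\Delta G / k_B T}, \qquad f_{\text{proof}} \approx f^{2} = e^{-2\Delta G / k_B T} .
```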

The bicycle is an underrated invention

Bikes allow us terrestrial folk to be more like fish in terms of efficiency when travelling, and beat pretty much every organism, as well as all powered vehicles.

buff.ly/vcH5TLu

12.02.2026 17:35 👍 63 🔁 20 💬 1 📌 7

We need to raise the bar on research code right now.

1) documentation and tests are dead simple now (see the sketch below)
2) create benchmarks integrating across multiple implementations
3) have agents double-check your work / fix broken tests
4) fix outstanding bugs in major scientific packages

14.02.2026 15:58 👍 57 🔁 14 💬 3 📌 0
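
On point 1, a minimal sketch of the kind of cheap docstring-plus-test pairing meant there (hypothetical function and test file; pytest assumed as the runner):

```python
# test_diffusion.py -- hypothetical example of cheap documentation + tests
# for research code; run with `pytest`.
import numpy as np


def simulate_random_walk(n_steps: int, sigma: float, seed: int = 0) -> np.ndarray:
    """Return a 1D Gaussian random walk of length n_steps with step std sigma."""
    rng = np.random.default_rng(seed)
    return np.cumsum(rng.normal(0.0, sigma, size=n_steps))


def test_shape_and_scaling():
    walk = simulate_random_walk(n_steps=10_000, sigma=2.0)
    assert walk.shape == (10_000,)
    # Endpoint std of a diffusion grows ~ sigma * sqrt(t); check within 15%.
    expected = 2.0 * np.sqrt(1_000)
    empirical = np.std(
        [simulate_random_walk(1_000, 2.0, seed=s)[-1] for s in range(200)]
    )
    assert abs(empirical - expected) / expected < 0.15


def test_reproducibility():
    # Same seed must give identical results.
    assert np.allclose(simulate_random_walk(100, 1.0, seed=3),
                       simulate_random_walk(100, 1.0, seed=3))
```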

Yeah that was weird. Some were particularly close.

13.02.2026 02:21 👍 1 🔁 0 💬 0 📌 0
Post image
13.02.2026 01:54 👍 0 🔁 0 💬 0 📌 0