Cool IB application in memory consolidation.
@mptouzel
Staff Research Scientist @ Mila / complexdatalab.com. Getting at the psycho-social in our digital spaces with models and data, with the aim to make better ones. mptouzel.github.io. Correlated diffusion over AI/ML/(MA)RL/psych/soc/media/pol/econ/energy.
Very cool! At once principled/"obv. in hindsight" and creative in framing.
Will there be recordings available?
Spicy!
There's an easy/lazy criticism of the polycrisis (& many whole-system perspectives) as totalizing and galaxy-brained. This misses the point: the value of system perspectives is to surface synergistic effects and subtleties beyond simple cause-effect between individual components. Wallerstein got it right.
Hot week for Social/Cultural AI:
- cultural AI workshop in NYC as.nyu.edu/research-cen...
- Social Reasoning and the Ecology of Thought workshop here in Montreal ivado.ca/en/events/so... (attending!)
Cool to see topics germane to our sims/agents/safety WG, Tues 11am
www.complexdatalab.com/stamina/
Leadership
yeah, flows on 2D surfaces are actually more highly constrained. See the hairy ball theorem, among others.
Yikes
There will be no apology
Where does this go? Any case I can think of diminishes the stature of the US Navy. If they can't do it, are there mercenary orgs with this kind of clout to fill in? I hope not. I also hope this doesn't send a market signal that the US would pay for one.
Like you can't exclude the possibility that the interviewee is live prompting your questions and reading off the answers? Really?
Words carry little weight with this administration. Amodei felt the DoD will push those red lines no matter what they say, while Altman accepts their "commitment to safety" at face value because it's convenient for OpenAI to move in. If, down the line, they cross those red lines, Altman is at the center.
It's phrased like he has kept some condition and is asking for the DoD to accept, but it is not clear to me what that condition is. That they don't get to host the models ("on cloud networks only")?
Kinda bummed that the Trump admin gets to come up with "defective altruism". To whoever it applies: that's a lost opportunity.
New paper on a long-shot I've been obsessed with for a year:
How much are AI reasoning gains confounded by expanding the training corpus 10000x? How much LLM performance is down to "shallow" generalisation (approximate pattern-matching to highly-related training data)?
t.co/CH2vP0Y7OF
Forced Optimism will continue until morale improves.
I recall this paper using this as motivation for needing to do more than just prompt with demographics. They present a soft-prompting approach based directly on the correlation structure of opinions. arxiv.org/abs/2311.04978
STAMINA-Working Group Talk Series
on Agents, Sims, Social Tech, and Safety
Next Tuesday, March 3rd @ 11am ET
"AI and the Future of Science"
by Martin Weiss, Tiptree Systems
Check out the website for talk abstract and join us by signing up for updates: www.complexdatalab.com/stamina
I feel like free-riding when the norm is not to is learned from imitation/lack of overt sanctioning at the individual level, not derived in a planning mode, and so the behaviour somehow escapes rational scrutiny. Lots of behaviour looks silly through a greedy lens, e.g. www.youtube.com/shorts/WuBDq...
This has a very cool result on in-context learned classification tasks, where they disentangle representational quality (how well-separated concept labels are) and readout alignment (how good it is at reading out its own inner labels). Adding demo examples helps through readout, not representations!
Really awesome stuff, and in-context best response: arxiv.org/abs/2602.16301
it's interesting because it's a strong test of how much persona we can get out of these things. imbue the most "structured expectation" into the context and see what happens.
Agreed! Was the OP focussed on isolated fine-tuning? I think in-context learning is sufficient here.
Don't you think the feedback mechanism of "AI does bad thing. AI reads people talking about AI doing bad things. AI further caricatures itself and does even more bad things." is at least plausible, perhaps even probable?
There are nuances here (follow the thread), but it's an important consideration. John Hopfield just won a Nobel, but like Einstein he could have won it for another thing, in this case "kinetic proofreading", which is how genetics handles the same problem of reliable output from noisy machinery.
The bicycle is an underrated invention
Bikes allow us terrestrial folk to be more like fish in terms of efficiency when travelling, and beat pretty much every organism, as well as all powered vehicles.
buff.ly/vcH5TLu
We need to raise the bar on research code right now.
1) Documentation and tests are dead simple now.
2) Create benchmarks integrating across multiple implementations.
3) Have agents double-check your work / fix broken tests.
4) Fix outstanding bugs in major scientific packages.
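On point 1, a minimal sketch of what "dead simple" docs and tests can look like in Python (the function and values here are hypothetical, just to illustrate the pattern of a docstring example plus a plain assert-based test):

```python
def moving_average(xs, window):
    """Return the simple moving average of xs with the given window size.

    Example:
        >>> moving_average([1.0, 2.0, 3.0, 4.0], window=2)
        [1.5, 2.5, 3.5]
    """
    if window < 1 or window > len(xs):
        raise ValueError("window must be between 1 and len(xs)")
    # Average each contiguous slice of length `window`
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]


def test_moving_average():
    # Runs under pytest, or via doctest on the docstring above
    assert moving_average([1.0, 2.0, 3.0, 4.0], window=2) == [1.5, 2.5, 3.5]
    assert moving_average([2.0, 2.0, 2.0], window=3) == [2.0]
```

This much is a few minutes of work to write (or have an agent draft), which is the bar the list above argues research code should clear.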
Yeah that was weird. Some particularly close.