Check out the Call for Workshops at RLC this year. There is still nearly a month before the deadline, we are looking forward to your proposals!
What is the relationship between memorization and generalization in AI? Is there a fundamental tradeoff? In infinitefaculty.substack.com/p/memorizati... I've reviewed some of the evolving perspectives on memorization & generalization in machine learning, from classic accounts through LLMs.
Excited to share that our work on Visual Symbolic Mechanisms has been accepted to ICLR!
Huge congratulations! Already looking forward to the breakthroughs you'll lead!
Great deep dive into hierarchical reinforcement learning, essential reading for anyone exploring scalable, structured agents. Shoutout to the authors!
Many LMs default to disjunctive inferences, even when given conjunctive evidence. Unlike children, LMs' exploration is shaped by the underlying causal rule.
Could promoting child-like curiosity help LMs reason more effectively about causality?
More details: arxiv.org/abs/2505.09614
A great collab with former labmates @agx-chen.bsky.social & @dongyanl1n.bsky.social.
Interesting limitation in LMs: a strong disjunctive bias leads to poor performance on conjunctive causal inference tasks. This mirrors adult human biases, possibly a byproduct of a prior inherited from the training data.
Just over a week since I defended my 🤖+🧠 PhD thesis, and the feeling is just sinking in. Extremely grateful to
@tyrellturing.bsky.social for supporting me through this amazing journey!
Big thanks to all members of the LiNC lab, and colleagues at McGill University and @mila-quebec.bsky.social. ❤️
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here:
drive.google.com/file/d/1eLa3...
I gave a talk on Compositional World Models at NeurIPS last week!
The recording is now online: neurips.cc/virtual/2024... (for registered attendees; starts at 6:06:00)
Workshop: compositional-learning.github.io
Super excited to announce the first-ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!
Website: sites.google.com/view/fpiwork...
Call for papers: sites.google.com/view/fpiwork...
More details in the thread below 🧵
If you review for a #ML conference like @iclr-conf.bsky.social or @neuripsconf.bsky.social, YOU HAVE A RESPONSIBILITY TO REPLY TO THE AUTHORS.
If the rebuttal doesn't address your concerns, explain why. But giving a score of 2-3 and then ghosting the authors is super rude.
I say this as an AC.
#MLSky
Now that @jeffclune.bsky.social and @joelbot3000.bsky.social are here, time for an Open-Endedness starter pack.
go.bsky.app/MdVxrtD
4/ I think the proper definition is that #NeuroAI is the realization of the original promise of cybernetics and connectionism!
en.wikipedia.org/wiki/Cyberne...
en.wikipedia.org/wiki/Connect...
It is a general science of intelligence focused on parallel distributed systems, control, and learning.
1/ I work in #NeuroAI, a growing field of research, which many people have only the haziest conception of...
By way of introduction to this research approach, I'll provide here a very short thread outlining the definition of the field I gave recently at our BRAIN NeuroAI workshop at the NIH.
Can you please add me as well?
I wish I were able to convince the students in my class that the ultimate goal of science isn't the accumulation of facts but the compression of facts into theories. Mel summarizes it well here:
www.nature.com/articles/nn1...
Unfortunately, I think this is a minority viewpoint within neuroscience.