@dataonbrainmind.bsky.social starting now in Room 10 with opening remarks from @crji.bsky.social and the first invited talk from @dyamins.bsky.social!
🚨 Deadline Extended 🚨
The submission deadline for the Data on the Brain & Mind Workshop (NeurIPS 2025) has been extended to Sep 8 (AoE)! 🧠✨
We invite you to submit your findings or tutorials via the OpenReview portal:
openreview.net/group?id=Neu...
📢 10 days left to submit to the Data on the Brain & Mind Workshop at #NeurIPS2025!
📝 Call for:
• Findings (4 or 8 pages)
• Tutorials
If you're submitting to ICLR or NeurIPS, consider submitting here too, and highlight how to use a cog neuro dataset in our tutorial track!
🔗 data-brain-mind.github.io
🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind
📣 Call for: Findings (4- or 8-page) + Tutorials tracks
🎙️ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social
🔗 Learn more: data-brain-mind.github.io
This is an excellent and very clear piece from Sergey Levine about the strengths and limitations of large language models.
sergeylevine.substack.com/p/language-m...
Normalizing Flows (NFs) check all the boxes for RL: exact likelihoods (imitation learning), efficient sampling (real-time control), and variational inference (Q-learning)! Yet they are overlooked in favor of more expensive and less flexible contemporaries like diffusion models.
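The "exact likelihoods + efficient sampling" claim follows from the change-of-variables formula. A minimal numpy sketch with a toy one-dimensional affine flow (the map x = a·z + b and the function names are illustrative, not from any real NF library):

```python
import numpy as np

def affine_flow_logprob(x, a, b):
    """Exact log-density of x under x = a * z + b, z ~ N(0, 1).
    Change of variables: log p(x) = log N(f^{-1}(x); 0, 1) + log |df^{-1}/dx|."""
    z = (x - b) / a                                  # inverse transform
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))   # log N(z; 0, 1)
    log_det = -np.log(np.abs(a))                     # log-det Jacobian of inverse
    return log_base + log_det

def affine_flow_sample(n, a, b, rng):
    """Efficient sampling: a single forward pass through the flow."""
    return a * rng.standard_normal(n) + b
```

A deeper flow composes several such invertible layers; the log-det terms simply add, so likelihoods stay exact at every depth.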
Are NFs fundamentally limited?
How can agents trained to reach (temporally) nearby goals generalize to attain distant goals?
Come to our #ICLR2025 poster now to discuss *horizon generalization*!
w/ @crji.bsky.social and @ben-eysenbach.bsky.social
📍 Hall 3 + Hall 2B #637
🚨 Our new #ICLR2025 paper presents a unified framework for intrinsic motivation and reward shaping: they signal the value of the RL agent's state 🤖 = external state 🌍 + past experience 🧠. Rewards based on potentials over the learning agent's state provably avoid reward hacking! 🧵
Thanks to incredible collaborators Bill Zheng, Anca Dragan, Kuan Fang, and Sergey Levine!
Website: tra-paper.github.io
Paper: arxiv.org/pdf/2502.05454
...but to create truly autonomous self-improving agents, we must not only imitate, but also *improve* upon the capabilities seen in training. Our findings suggest that this improvement might emerge from better task representations, rather than more complex learning algorithms. 7/
*Why does this matter?* Recent breakthroughs in both end-to-end robot learning and language modeling have been enabled not through complex TD-based reinforcement learning objectives, but rather through scaling imitation with large architectures and datasets... 6/
We validated this in simulation. Across offline RL benchmarks, imitation using our TRA task representations outperformed standard behavioral cloning, especially for stitching tasks. In many cases, TRA beat "true" value-based offline RL, using only an imitation loss. 5/
Successor features have long been known to boost RL generalization (Dayan, 1993). Our findings suggest something stronger: successor task representations produce emergent capabilities beyond training even without RL or explicit subtask decomposition. 4/
This trick encourages a form of time invariance during learning: both nearby and distant goals are represented similarly. By additionally aligning language instructions ψ(ℓ) to the goal representations ψ(g), the policy can also perform new compound language tasks. 3/
What does temporal alignment mean? When training, our policy imitates the human actions that lead to the end goal g of a trajectory. Rather than training on the raw goals, we use a representation ψ(g) that aligns with the successor features φ(s) of the preceding states. 2/
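The alignment step above can be sketched as a contrastive (InfoNCE-style) objective that pulls each state's successor features toward the representation of its trajectory's goal. This is a hedged illustration, not the paper's exact loss; the function name is hypothetical:

```python
import numpy as np

def alignment_loss(phi_s, psi_g):
    """InfoNCE-style alignment of state features phi(s) with goal
    representations psi(g). phi_s, psi_g: (batch, dim) arrays; row i of
    phi_s and row i of psi_g come from the same trajectory (positives)."""
    logits = phi_s @ psi_g.T                          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # classify i -> goal i
</n```

Minimizing this makes φ(s) predictive of which goal the trajectory ends at, which is what lets nearby and distant goals share a representation.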
Current robot learning methods are good at imitating tasks seen during training, but struggle to compose behaviors in new ways. When training imitation policies, we found something surprising: using temporally-aligned task representations enabled compositional generalization. 1/
Excited to share new work led by @vivekmyers.bsky.social and @crji.bsky.social that proves you can learn to reach distant goals by solely training on nearby goals. The key idea is a new form of invariance. This invariance implies generalization w.r.t. the horizon.
Want to see an agent carry out long-horizon tasks when trained only on short-horizon trajectories?
We formalize and demonstrate this notion of *horizon generalization* in RL.
Check out our website! horizon-generalization.github.io
With wonderful collaborators @crji.bsky.social, @ben-eysenbach.bsky.social !
Paper: arxiv.org/abs/2501.02709
Website: horizon-generalization.github.io
Code: github.com/vivekmyers/h...
What does this mean in practice? To generalize to long-horizon goal-reaching behavior, we should consider how our GCRL algorithms and architectures enable invariance to planning. When possible, prefer architectures like quasimetric networks (MRN, IQE) that enforce this invariance. 6/
Empirical results support this theory. The degree of planning invariance and horizon generalization is correlated across environments and GCRL methods. Critics parameterized as a quasimetric distance indeed tend to generalize the most across horizons. 5/
Similar to how CNN architectures exploit the inductive bias of translation-invariance for image classification, RL policies can enforce planning invariance by using a *quasimetric* critic parameterization that is guaranteed to obey the triangle inequality. 4/
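One concrete way a parameterization can guarantee the triangle inequality, as a toy stand-in for MRN/IQE rather than their actual architectures: the componentwise asymmetric distance d(x, y) = Σᵢ max(0, yᵢ − xᵢ). It is asymmetric (a quasimetric, not a metric), yet provably obeys the triangle inequality because relu(a + b) ≤ relu(a) + relu(b):

```python
import numpy as np

def quasimetric(x, y):
    """Toy quasimetric d(x, y) = sum_i max(0, y_i - x_i).
    Asymmetric by construction, but satisfies the triangle inequality
    for every triple of points, regardless of learned parameters."""
    return np.maximum(y - x, 0.0).sum(axis=-1)
```

A learned critic built from such components inherits the inequality by architecture, so d(s, g) can never exceed d(s, w) + d(w, g) through any waypoint w, which is exactly the planning-invariance structure.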
The key to achieving horizon generalization is *planning invariance*. A policy is planning invariant if decomposing tasks into simpler subtasks doesn't improve performance. We prove planning invariance can enable horizon generalization. 3/
Certain RL algorithms are more conducive to horizon generalization than others. Goal-conditioned (GCRL) methods with a bilinear critic φ(s)ᵀψ(g) as well as quasimetric methods better enable horizon generalization. 2/
Reinforcement learning agents should be able to improve upon behaviors seen during training.
In practice, RL agents often struggle to generalize to new long-horizon behaviors.
Our new paper studies *horizon generalization*, the degree to which RL algorithms generalize to reaching distant goals. 1/
Website: empowering-humans.github.io
Paper: arxiv.org/abs/2411.02623
Many thanks to wonderful collaborators Evan Ellis, Sergey Levine, Benjamin Eysenbach, and Anca Dragan!
Effective empowerment could also be combined with other objectives (e.g., RLHF), to improve assistance and promote safety (prevent human disempowerment). 6/
In principle, this approach provides a general way to align RL agents from human interactions without needing human feedback or other rewards. 5/
We show that optimizing this human effective empowerment helps in assistive settings. Theoretically, we show that maximizing effective empowerment optimizes an (average-case) lower bound on the human's utility/reward/objective under an uninformative prior. 4/
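The lower-bound flavor of empowerment objectives can be illustrated in a tabular toy setting: mutual information I(A; S') under any fixed action prior lower-bounds empowerment, which maximizes over priors. A numpy sketch under these assumptions (a plug-in estimate, not the paper's contrastive successor-feature estimator):

```python
import numpy as np

def empowerment_lower_bound(transition):
    """I(A; S') under a uniform action prior, a lower bound on
    empowerment max_{p(a)} I(A; S').
    transition: (n_actions, n_states) row-stochastic matrix p(s' | a)."""
    n_actions = transition.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)        # uniform action prior
    p_s = p_a @ transition                           # marginal p(s')
    with np.errstate(divide="ignore", invalid="ignore"):
        # Terms with p(s'|a) = 0 contribute 0 to the sum.
        ratio = np.where(transition > 0, transition / p_s, 1.0)
        return np.sum(p_a[:, None] * transition * np.log(ratio))
```

When every action leads to a distinct state, the human's actions fully control the outcome and the bound is maximal (log of the number of actions); when actions are indistinguishable, it collapses to zero.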
Our recent paper, "Learning to Assist Humans Without Inferring Rewards," proposes a scalable contrastive estimator for human empowerment. The estimator learns successor features to model the effects of a human's actions on the environment, approximating the *effective empowerment*. 3/