Savvy task adaptation via reuse of generative model components through Task Amortized VAE 2/2
Fun paper at @iclr-conf.bsky.social on delicate V1 biases through contextual priors. Inspiring collab w @polacklab.bsky.social, spearheaded by Keith Murray & Balazs Meszena, @juliencorbo.bsky.social. Kudos to conf ACs&PCs for pulling off 19k reviews amid the OpenReview turmoil arxiv.org/abs/2602.11956 1/2
Happy to share this: deep generative model insight into the organization of top-down connections. [Paper](www.nature.com/articles/s41...) in @natcomms.nature.com shows that native top-down connections in hierarchical deep generative models (as opposed to dominantly feedforward deep discriminative models) match V2→V1 influences.
Fresh out of the oven. The 3-armed bandit task: a quintessential RL paradigm. But is it truly an RL task for monkeys? We tracked the arithmetic of population activity in MCC+vlPFC+dlPFC and identified foraging-like computations instead of Q-learning. A fun collaboration with the Procyk Lab (INSERM).
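To make the contrast the post draws concrete, here is a minimal sketch of the two update rules being compared in a bandit setting. This is an illustrative toy, not the paper's model; the function names and the specific foraging rule (a single running estimate of overall reward rate, used for stay/switch decisions) are my assumptions.

```python
import math

def q_learning_step(q, arm, reward, alpha=0.1):
    """Q-learning: update only the chosen arm's value toward the reward."""
    q = list(q)
    q[arm] += alpha * (reward - q[arm])
    return q

def foraging_step(avg_rate, reward, alpha=0.1):
    """Foraging-style update: a single running estimate of the environment's
    reward rate, regardless of which arm was sampled. A stay/switch decision
    then compares recent reward against this average."""
    return avg_rate + alpha * (reward - avg_rate)

# Q-learning maintains per-arm values; foraging maintains one scalar.
q = q_learning_step([0.0, 0.0, 0.0], arm=1, reward=1.0)
rate = foraging_step(0.0, reward=1.0)
```

The key signature difference: Q-learning requires arm-specific value bookkeeping, while the foraging account only needs an aggregate reward-rate trace, which is one way the two hypotheses can dissociate in population activity.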
'Learning to remember; remembering to learn' is a catchy summary of how semantic and episodic memories interact and yield an adaptive learning system:
Web: go.nature.com/3ZkmRLb
PDF: rdcu.be/epAQ0
I am happy to share the story that just came out at @natrevpsychol.nature.com on a normative treatment of memory dynamics
Fun collaboration with @thecharleywu.bsky.social and @davidnagy.bsky.social
Good morning, this is my dawn at Bluesky. Let's assume I am right on time for the party.