I have added a new tutorial on discrete diffusion models:
github.com/gpeyre/ot4ml
I had a draft about how wild the pace of new generative models was… written two months ago. It's already outdated. Somehow, things are moving even faster now... (and yes, I'm back to posting about generative models)
Too many REPA / RAE / representation alignment papers lately?
I was lost too, so I wrote a blog post that organizes the space into phases and zooms in on what actually matters for general/molecular ML.
Curious what folks think - link below!
Blog: kdidi.netlify.app/blog/ml/2025...
My first impression is that it will look like GANs for inverse problems but maybe there is something to do with the training drift term
imo the original paper (arxiv.org/abs/2602.04770) is well written, but there are already some implementations/blog posts about it (github.com/Algomancer/M...)
🌳 Discrete drifting models
🌳 Riemannian drifting models
🌳 Optimal Transport drifting models
🌳 Image2image drifting models
🌳 Time-dependent drifting models (tricky one)
🌳 Adversarial drifting models
🌳 Wasserstein drifting models
🌳 Variational drifting models
🌳 Functional drifting models
Very cool PhD project on generative models for dense detection of rare events in Earth Observation 🌍🌱
Nicolas has been my supervisor for the last 3 years, highly recommend doing a PhD with him!
📢 Fully funded PhD - 🌍 Dense Detection of Rare Events in Remote Sensing using Generative Models
Leverage generative models, unsupervised segmentation, and explainability techniques to map disasters.
w/ @javi-castillo.bsky.social and Flora Weissgerber
Apply ⬇️
recrutement.cnes.fr/fr/annonce/4...
Is it a vscode plugin?
Meta Flow Maps enable scalable reward alignment, Peter Potaptchik et al. (arxiv.org/abs/2601.14430)
This article introduces Meta Flow Maps: a stochastic generalization of consistency models (one-step generation) that allows efficient reward steering at inference time or during fine-tuning.
I'm excited to open the new year by sharing a new perspective paper.
I give an informal outline of MD (molecular dynamics) and how it can interact with Generative AI. Then, I discuss how far the field has come since the seminal contributions, such as Boltzmann Generators, and what is still missing.
Should we ban Brian Eno from bandcamp?
New blog post: I reflect on why I worked on what I worked on...
I think a PhD is a very special time. You get to challenge yourself, push your boundaries, and grow. My thoughts go against the current AI/academia narrative online, so I hope you find it interesting.
chaitjo.substack.com/p/phd-thesis...
Yes, estimating the distance between distributions from a single sample sounds ill-posed. I wonder if flow-based artefacts are sufficiently similar across models with the same FID, allowing us to learn the score-predictive model. I may try later!
Agreed! I wonder if some generation artefacts are signatures that allow predicting the FID score (assuming they are present in almost all images generated by a given model)
You may add the real test (or training) dataset if you are into leaderboard chasing
1. Select many diffusion/flow-matching models
2. Generate 50k images per model
3. Use FID of each set as a label
4. Train a model to predict FID from a single image
What's the probability this actually works, gives a cheap proxy for FID, and enables fast generative-model prototyping?
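Step 3 of that recipe uses FID as the training label. As a reminder of what that number actually is, here is a minimal numpy sketch (my own, not from the thread) of the Fréchet distance between two Gaussians fitted to feature statistics, which is what FID reports once Inception features are summarized by a mean and covariance per set:

```python
import numpy as np

def sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid_gaussian(mu1, sigma1, mu2, sigma2):
    """Frechet distance ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}),
    computed in a numerically safe way for symmetric PSD covariances."""
    s2h = sqrtm_psd(sigma2)
    cross = np.trace(sqrtm_psd(s2h @ sigma1 @ s2h))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * cross)

# Two unit-covariance Gaussians whose means differ by 1 along one axis:
d = 2
print(fid_gaussian(np.zeros(d), np.eye(d), np.array([1.0, 0.0]), np.eye(d)))  # → 1.0
```

In the real metric, `mu` and `sigma` come from ~50k Inception feature vectors per set; the single-image predictor in step 4 would have to recover this set-level statistic from one sample.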
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7
Together with Mike Davies, we put together a review of self-supervised learning for inverse problems, covering the main approaches in the literature with a unified notation and analysis.
arxiv.org/abs/2601.03244
Can we train neural networks just with permutations of their initial weights? And then, what's the best initialisation distribution?
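For intuition, here is a toy sketch of that question (entirely my own construction, not from the post): freeze a random weight vector and let "training" be an exhaustive search over its permutations on a tiny noiseless regression task.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression task: y = X @ w_true, no noise.
d = 5
X = rng.normal(size=(200, d))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
y = X @ w_true

# Frozen initialisation: "training" may only reorder these d values.
w_init = rng.normal(size=d)

def mse(w):
    r = X @ np.asarray(w) - y
    return float(r @ r) / len(y)

# Exhaustive search over all d! = 120 orderings of the initial weights.
best_w = min(itertools.permutations(w_init), key=mse)
init_loss, best_loss = mse(w_init), mse(best_w)
print(init_loss, best_loss)  # identity ordering is a candidate, so best_loss <= init_loss
```

Real networks are far from this regime, but the sketch shows why the second question matters: the initialisation distribution fixes the multiset of values the search is allowed to arrange.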
PS: We also recently released a unified codebase for discrete diffusion, check it out!
Thread: x.com/nkalyanv99/...
GitHub: github.com/nkalyanv99/...
Docs: nkalyanv99.github.io/UNI-D2/
"Foundations of Diffusion Models in General State Spaces: A Self-Contained Introduction"
Huge thanks to Tobias Hoppe, @k-neklyudov.bsky.social,
@alextong.bsky.social, Stefan Bauer and @andreadittadi.bsky.social for their supervision! 🙏
arXiv: arxiv.org/abs/2512.05092 🧵👇
I use drum mic kits for punchier presentations
Then I'm counting on the sound engineer to engage
"Improved Mean Flows: On the Challenges of Fastforward Generative Models" arxiv.org/abs/2512.02012 questions this approximation and proposes a new training process for mean flows.
Hi @google, can you provide 100k TPU hours to explore the design space of diffusion bridges for image-to-image translation? x1 vs. drift prediction, architectures and # params, # datasets, scaling couplings and batch sizes (for minibatch-based couplings). I can run everything in JAX in return...
Yesterday, @nicolasdufour.bsky.social defended his PhD. I really enjoyed the years of collaboration w/ @vickykalogeiton.bsky.social (& @loicland.bsky.social)
Video: youtube.com/live/DXQ7FZA...
Big thanks to the jury @dlarlus.bsky.social @ptrkprz.bsky.social @gtolias.bsky.social A. Efros & T. Karras
@climateainordics.com is now on YouTube! Check out some amazing talks on how to help fight climate change using AI!
youtube.com/@climateaino...
@neuripsconf.bsky.social is two weeks away!
📢 Stop missing great workshop speakers just because the workshop wasn't on your radar. Browse them all in one place:
robinhesse.github.io/workshop_spe...
(also available for @euripsconf.bsky.social)
#NeurIPS #EurIPS
Calling it for today... I tried using the Gemini 3 Pro preview to build some JS animations, and it went well