I met two French guys this morning at a café in Tokyo, and after spontaneously striking up a conversation with them, we decided to have dinner next week in Kyoto! What luck 😂
Fantastic post by Colin Raffel, "We Are Over-Indexing on Paper Acceptance," drafted in May 2021 (!) but only posted now. The more things change...
Last sentence: "If you want to judge a researcher’s quality, the only meaningful way is to read their papers and judge for yourself."
What's the context of this guy??
To all convex analysis freaks: here's a new perspective on flow matching🔭
The denoising operator mapping corrupted data to the target data is in fact a proximal operator of the Brenier potential!
This viewpoint leads to a Lyapunov analysis: FM identifies the target support.
arxiv.org/abs/2602.12683
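To make the "denoising = prox" statement concrete, here is a minimal numerical sketch of what a proximal operator computes, with a toy quadratic potential standing in for the Brenier potential (my own illustration under that assumption, not the paper's construction):

```python
import numpy as np

# Proximal operator prox_phi(y) = argmin_x phi(x) + 0.5 * ||x - y||^2,
# computed by plain gradient descent, assuming phi is convex and smooth.
def prox(phi_grad, y, step=0.1, n_iters=500):
    x = y.copy()
    for _ in range(n_iters):
        x = x - step * (phi_grad(x) + (x - y))  # gradient of the prox objective
    return x

# Sanity check with a toy quadratic potential phi(x) = 0.5 * x^T A x,
# for which prox_phi(y) = (I + A)^{-1} y in closed form.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
y = np.array([1.0, -1.0])
print(prox(lambda x: A @ x, y))           # iterative solution
print(np.linalg.solve(np.eye(2) + A, y))  # closed form, should match
```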
Ah I noticed that in this community there are two ways to use swap regret🤯
Luo et al. (2025) (and the literature related to omnipredictors, I guess) use swap regret to "swap membership" for multicalibration, which is different from how standard swap regret operates.
arxiv.org/abs/2505.20885
While I haven't verified it carefully, it is surprising that swap regret and calibration error are equivalent, which can be shown using only basic properties of the Bregman divergence.
arxiv.org/abs/2505.21460
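For the squared loss (the Bregman divergence generated by φ(t) = t², presumably one special case of what the paper covers), the equivalence is easy to check numerically. A toy sketch of my own, not the paper's setup:

```python
import numpy as np

# Squared-loss special case: swap regret of a forecaster equals its
# (unnormalized) l2 calibration error, by the bias-variance decomposition.
rng = np.random.default_rng(0)
preds = rng.choice([0.2, 0.5, 0.8], size=1000)              # predictions on a grid
outcomes = (rng.random(1000) < 0.9 * preds).astype(float)   # miscalibrated on purpose

swap_regret = 0.0
cal_error = 0.0
for p in np.unique(preds):
    ys = outcomes[preds == p]
    best = ys.mean()  # squared loss is minimized by the conditional mean
    swap_regret += ((p - ys) ** 2).sum() - ((best - ys) ** 2).sum()
    cal_error += len(ys) * (p - best) ** 2

print(swap_regret, cal_error)  # agree up to floating point
```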
I don't really understand it either, but maybe it's because she is Japan's first female prime minister, so it's easier for her to gain popularity. Besides, it's true that there is no good alternative, unfortunately.
Honestly, I was in despair over the results of the legislative elections in Japan this time, but there's nothing I can do about it because I don't have the right to vote...
Out of curiosity, what kind of stopgrad do you encounter in your context?
(Personally, I find stopgrad interesting because it means some learning phenomena cannot necessarily be represented by gradient flows.)
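A toy sanity check of that remark, with my own hypothetical two-parameter dynamics: an update field built with stop-gradient whose Jacobian is asymmetric, hence not the gradient of any scalar loss (a true gradient's Jacobian is a Hessian and must be symmetric):

```python
import numpy as np

# Hypothetical dynamics where each parameter chases a stop-gradient'ed copy
# of the other (sg(.) treated as a constant under differentiation):
#   F1(w) = d/dw1 (w1 - sg(w2))^2       = 2 * (w1 - w2)
#   F2(w) = d/dw2 (w2 - sg(0.5 * w1))^2 = 2 * (w2 - 0.5 * w1)
def F(w):
    w1, w2 = w
    return np.array([2 * (w1 - w2), 2 * (w2 - 0.5 * w1)])

# Numerical Jacobian of the update field at an arbitrary point.
eps, w0 = 1e-6, np.array([0.3, -0.7])
J = np.column_stack([(F(w0 + eps * e) - F(w0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(J)  # [[ 2, -2], [-1, 2]]: asymmetric, so F is not the gradient
          # of any scalar loss, i.e. not representable as a gradient flow.
```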
That's true. So I understand why you're happy that your student's paper was accepted at AISTATS (congratulations again!)
Maybe I just need more time to experiment and find a way to support students…
I don't know how to guide students so that their research succeeds… It's fine when I'm only doing my own research, but it's so hard to lead students to success on their first project.
3rd "Mathematics of Data" Summer School is being held in Singapore in June. Applications for attendance (with accommodation for most & no registration fee for all) are open throughout February and possibly longer: ims.nus.edu.sg/events/ma_da...
🚨Muon can smash the anisotropy of inputs!
Our new work investigates the learning dynamics of phase retrieval (f(x)=xᵀMx) on spiked-covariance inputs (I+λvvᵀ), for which spectral GD (≈ Muon) is less affected by the spike direction v than standard GD.
arxiv.org/pdf/2601.22652
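For intuition, a minimal sketch (my own, not the paper's algorithm) of a spectral step: replace the gradient matrix by its polar factor, setting every singular value to 1, so the spiked direction no longer dominates the update:

```python
import numpy as np

# Spectral step in the Muon spirit: update with the polar factor U V^T of the
# gradient matrix G instead of G itself.
def spectral_step(M, G, lr=0.05):
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return M - lr * (U @ Vt)

# Toy anisotropic gradient with one spiked direction, mimicking the effect of
# spiked-covariance inputs (I + lam * v v^T) on the gradient.
rng = np.random.default_rng(0)
d, lam = 6, 10.0
v = np.eye(d)[0]
G = rng.standard_normal((d, d)) + lam * np.outer(v, v)
U, s, Vt = np.linalg.svd(G, full_matrices=False)
print(s[:3])                                        # spiked spectrum of raw G
print(np.linalg.svd(U @ Vt, compute_uv=False)[:3])  # polar factor: all ones
```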
Yet obviously this doesn't mean we don't need to check what we know/create via LLMs. That goes without saying. It has been the same since the days we relied on Google Scholar, Mathematica, etc. Honestly, I really don't understand why some people don't check carefully. (I know most people are responsible, tho :)
(A bit long; nothing special here)
I think I'm on the relatively optimistic side when it comes to embracing academic writing with LLMs, including surveys and math proofs: they enable me to do new research that I would never be able to do alone. As a researcher, that's so exciting.
I really wish #OpenAI would stop releasing free stochastic parrots in every community they can think of.
They sure look pretty but they are shitting all over the place and you can't have a decent conversation between humans anymore.
Thank you!
This is a lovely method because it literally takes only a few lines of code to make it work (see this!). The code is already public here: github.com/levelfour/Br...
And I've been excited to finally collaborate with my longtime friend Yutong, and his smart student Amir!
Delighted to have our "Brenier isotonic regression" accepted at #AISTATS2026! We extend isotonic regression to the *multiclass* setting via Brenier optimal transport; it can be used for multiclass calibration and improves the calibration map nicely over binning, etc.
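For contrast, the classical binary case that this generalizes: isotonic regression fits a monotone recalibration map from scores to outcome rates. A standard sklearn sketch (my own illustration; the multiclass Brenier version lives in the linked repo):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Binary isotonic calibration: learn a monotone map from raw scores to
# empirical outcome rates.
rng = np.random.default_rng(0)
scores = rng.random(500)                                 # uncalibrated scores
labels = (rng.random(500) < scores ** 2).astype(float)   # true rate is s^2

iso = IsotonicRegression(out_of_bounds="clip")
calibrated = iso.fit_transform(scores, labels)  # monotone map, roughly s -> s^2
```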
Huge congrats! I'm going to travel there too🏝️
Excited to see that a workshop on calibration will be organized at the upcoming AISTATS
calibration-workshop.github.io
Weekend reads:
Rudin et al. (2004) "The Dynamics of AdaBoost: Cyclic Behavior and Convergence of Margins" www.jmlr.org/papers/v5/ru...
Highly illuminating: it demonstrates that AdaBoost can settle into stable limit cycles under some configurations of weak classifiers, and moreover this dates back to 2004!
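A tiny harness (my own, not the paper's exact construction) to watch the weight dynamics of "optimal AdaBoost" on a fixed margin matrix; printing the weight vector each round lets you look for the cyclic behavior the paper analyzes:

```python
import numpy as np

# H[i, j] = y_i * h_j(x_i): margins of 3 weak classifiers on 3 examples,
# each classifier misclassifying exactly one example.
H = np.array([[+1, -1, +1],
              [-1, +1, +1],
              [+1, +1, -1]], dtype=float)
w = np.ones(3) / 3  # example weights

for t in range(12):
    edges = w @ H                          # weighted edge of each weak classifier
    j = int(np.argmax(edges))              # "optimal AdaBoost" picks the largest edge
    r = edges[j]
    alpha = 0.5 * np.log((1 + r) / (1 - r))
    w = w * np.exp(-alpha * H[:, j])       # exponential reweighting
    w /= w.sum()
    print(t, j, np.round(w, 3))            # inspect the trajectory for cycles
```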
Recently I've been getting more and more unsure about how many research projects I'm actively involved in, and it turns out to be 10 in total after writing them down! All of them excite me equally; the only problem is the limited time 🤯
The list of accepted papers at the Algorithmic Learning Theory Conference, a.k.a. #ALT2026, is out! h/t @thejonullman.bsky.social (PC chair).
algorithmiclearningtheory.org/alt2026/acce...
"ALT: topics so hot, it has to be held in Canada in February"
Totally agree, normally it takes a long time 😅
Thank you very much, it's so much fun to learn languages!
I passed A2🥳
OpenReview opened the door to continuous and major revisions that nobody has time to check properly.
I think we should go back to short, one-PDF-page replies to reviews. It would mean quicker decisions, so that we actually have time to work on papers before resubmitting them.
The next #NeurIPS2025 poster this evening is on the interaction between loss functions and gradient descent dynamics: see you soon at Poster No. 3007😎
Huge thanks to those who dropped by the poster! It was an unexpected pleasure to see statisticians and control theory people (who may not focus on loss functions that much, of course) impressed by the beautiful structures of convex analysis residing in our paper