
Han Bao

@han-b

Associate Professor @ The Institute of Statistical Mathematics, working in ML theory. https://hermite.jp/

75
Followers
41
Following
135
Posts
15.12.2024
Joined

Latest posts by Han Bao @han-b

I met two French guys this morning at a café in Tokyo, and after suddenly striking up a conversation with them, we decided to have dinner next week in Kyoto! What luck 😂

04.03.2026 22:52 👍 2 🔁 0 💬 0 📌 0

Fantastic post by Colin Raffel, "We Are Over-Indexing on Paper Acceptance," drafted in May 2021 (!) but only posted now. The more things change…

Last sentence: "If you want to judge a researcher’s quality, the only meaningful way is to read their papers and judge for yourself."

24.02.2026 14:38 👍 34 🔁 8 💬 3 📌 0

What's the context of this guy??

23.02.2026 13:54 👍 1 🔁 0 💬 1 📌 0

To all convex analysis freaks: here's a new perspective on flow matching 🔭

The denoising operator from the corrupted data to the target data is in fact a proximal operator of the Brenier potential!
This viewpoint leads to a Lyapunov analysis: FM identifies the target support.
arxiv.org/abs/2602.12683

17.02.2026 07:32 👍 15 🔁 2 💬 0 📌 0
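A rough sketch of the claimed identity, in my own notation (not taken from the paper): writing the convex Brenier potential as φ, the denoiser is its proximal map.

```latex
% Sketch only; \varphi and D are my notation, not the paper's.
% Proximal operator of a convex potential \varphi:
\operatorname{prox}_{\varphi}(x)
  \;=\; \operatorname*{arg\,min}_{y}\;
  \Big\{ \varphi(y) \;+\; \tfrac{1}{2}\,\lVert y - x\rVert^{2} \Big\}.
% The claim: the denoising operator D, mapping corrupted data back
% toward the target data, satisfies D(x) = \operatorname{prox}_{\varphi}(x)
% for the Brenier potential \varphi of the optimal transport map.
```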
Improved Bounds for Swap Multicalibration and Swap Omniprediction In this paper, we consider the related problems of multicalibration -- a multigroup fairness notion and omniprediction -- a simultaneous loss minimization paradigm, both in the distributional and onli...

Ah, I noticed that in this community there are two ways to use swap regret 🤯

Luo et al. (2025) (and, I guess, the related omnipredictor literature) use swap regret to "swap membership" for multicalibration, which differs from how standard swap regret operates.
arxiv.org/abs/2505.20885

16.02.2026 00:56 👍 1 🔁 0 💬 0 📌 0

While I haven't verified it carefully, it is surprising that swap regret and calibration error are equivalent, which can be shown using only basic properties of the Bregman divergence.

arxiv.org/abs/2505.21460

14.02.2026 00:25 👍 5 🔁 0 💬 1 📌 0
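The kind of Bregman argument I have in mind, in my own notation (not the paper's): for the Bregman score ℓ(q, y) = D_ψ(y, q), the gain from swapping a prediction q for the conditional mean p = 𝔼[y | q] is exactly a Bregman calibration error.

```latex
% Sketch; \psi, p, q are my notation, not the paper's.
% Bregman divergence of a strictly convex \psi:
D_\psi(p, q) = \psi(p) - \psi(q) - \langle \nabla\psi(q),\, p - q \rangle.
% For \ell(q, y) = D_\psi(y, q) and p = \mathbb{E}[y \mid q],
% a direct expansion (the \psi(y) terms cancel) gives
\mathbb{E}\big[\, \ell(q, y) - \ell(p, y) \,\big|\, q \,\big] = D_\psi(p, q),
% so summing over the prediction levels q, the swap regret against the
% deviation q \mapsto \mathbb{E}[y \mid q] equals the expected Bregman
% calibration error \sum_q \Pr(q)\, D_\psi(\mathbb{E}[y \mid q], q).
```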

I don't understand much either, but maybe it's because she's the first female prime minister in Japan, so it's easier to gain popularity. Besides, it's true that there's no good alternative, unfortunately.

09.02.2026 12:26 👍 1 🔁 0 💬 0 📌 0

Honestly, I was in despair over the results of the legislative elections in Japan this time, but there's nothing I can do about it because I don't have the right to vote...

08.02.2026 22:55 👍 1 🔁 1 💬 1 📌 0

Out of curiosity, what kind of stop-gradient do you encounter in your context?
(Personally, I find stop-gradient interesting because it means some learning phenomena cannot necessarily be represented by gradient flows.)

08.02.2026 11:08 👍 1 🔁 0 💬 1 📌 0
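To illustrate the point in parentheses, here is a toy example (my own construction, not from any particular paper): the update field induced by L(a, b) = a · stopgrad(b) has an asymmetric Jacobian, so it cannot be the gradient of any scalar potential.

```python
import numpy as np

# For L(a, b) = a * stopgrad(b), backprop yields dL/da = b (b treated
# as a constant) and dL/db = 0 (the gradient is blocked). The induced
# update vector field on theta = (a, b) is therefore:
def update(theta):
    a, b = theta
    return np.array([b, 0.0])

# A gradient field must have a symmetric Jacobian (equality of mixed
# partials). Here d(update_a)/db = 1 while d(update_b)/da = 0, so no
# scalar potential generates these dynamics.
def jacobian(f, theta, eps=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = eps
        J[:, j] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return J
```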

That's true. So I understand why you're happy now that your student's paper has been accepted at AISTATS (congratulations again!)

Maybe I need more time to experiment and find a way of supporting students…

06.02.2026 13:13 👍 1 🔁 0 💬 0 📌 0

I don't know how to guide students to succeed in their research… It's fine when I'm only doing my own research, but it's so difficult to lead students to success in their first project.

05.02.2026 12:11 👍 2 🔁 0 💬 1 📌 0

3rd "Mathematics of Data" Summer School is being held in Singapore in June. Applications for attendance (with accommodation for most & no registration fee for all) are open throughout February and possibly longer: ims.nus.edu.sg/events/ma_da...

04.02.2026 06:45 👍 6 🔁 6 💬 0 📌 1

🚨 Muon can smash the anisotropy of inputs!

Our new work investigates the learning dynamics of phase retrieval (f(x)=xᵀMx) under a spiked covariance (I+λvvᵀ), for which spectral GD (≈ Muon) is less affected by the spike direction v than standard GD.

arxiv.org/pdf/2601.22652

02.02.2026 06:52 👍 6 🔁 1 💬 0 📌 0
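A toy numerical sketch of the contrast (my own code and notation, not the paper's): on inputs with spiked covariance I + λvvᵀ, a Muon-style spectral step replaces the gradient's singular values by 1, so no single direction dominates the update.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 8, 2000, 5.0

# Spiked input covariance I + lam * v v^T (assumed setup, mirroring the post).
v = np.zeros(d)
v[0] = 1.0
cov = np.eye(d) + lam * np.outer(v, v)
X = rng.standard_normal((n, d)) @ np.linalg.cholesky(cov).T

# Phase-retrieval teacher f(x) = x^T M* x with a symmetric target M*.
M_star = rng.standard_normal((d, d))
M_star = (M_star + M_star.T) / 2
M = np.zeros((d, d))  # student initialization

def grad(M):
    # Gradient of the squared loss over the sample, w.r.t. the matrix M.
    y = np.einsum("ni,ij,nj->n", X, M_star, X)
    r = np.einsum("ni,ij,nj->n", X, M, X) - y
    return np.einsum("n,ni,nj->ij", r, X, X) / n

def msign(G):
    # Muon-style spectral step: keep singular vectors, set singular values to 1.
    U, _, Vt = np.linalg.svd(G)
    return U @ Vt

G = grad(M)
gd_step = -0.1 * G           # standard GD: inflated along the spike direction v
muon_step = -0.1 * msign(G)  # spectral GD: all directions move at equal scale
```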

Yet obviously this doesn't mean we don't need to check what we know or create via LLMs. That hasn't changed since the days we relied on Google Scholar, Mathematica, etc. Honestly, I really don't understand why some people don't check carefully. (I know most people are responsible, though :)

30.01.2026 01:54 👍 0 🔁 0 💬 0 📌 0

(A bit long; nothing special here)

I think I'm on the relatively optimistic side about embracing academic writing with LLMs, including surveys and math proofs. They enable me to do new research that I could never do alone. As a researcher, it's so exciting.

30.01.2026 01:54 👍 1 🔁 0 💬 1 📌 0

I really wish #OpenAI would stop releasing free stochastic parrots in every community they can think of.
They sure look pretty, but they are shitting all over the place, and you can't have a decent conversation between humans anymore.

27.01.2026 23:13 👍 125 🔁 29 💬 2 📌 0

Thank you!

23.01.2026 06:17 👍 1 🔁 0 💬 0 📌 0

This is a lovely method because it literally needs only a few lines of code to work (see this!). The code is already public here: github.com/levelfour/Br...

And I've been excited to finally collaborate with my longtime friend Yutong and his smart student Amir!

22.01.2026 23:11 👍 0 🔁 0 💬 0 📌 0

Delighted to have our "Brenier isotonic regression" accepted at #AISTATS2026! We extend isotonic regression to the *multiclass* setting based on Brenier optimal transport; it can be used for multiclass calibration, improving the calibration map nicely over binning, etc.

22.01.2026 23:11 👍 10 🔁 0 💬 2 📌 0
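For contrast with the binary baseline being extended: classical isotonic regression is just the pool-adjacent-violators (PAV) algorithm, sketched below in a few lines (my own minimal implementation, not our paper's code).

```python
import numpy as np

def pav(y):
    """Pool-adjacent-violators: best non-decreasing (least-squares) fit to y."""
    y = np.asarray(y, dtype=float)
    merged = []  # stack of blocks as [mean, count]
    for value in y:
        merged.append([value, 1])
        # Merge adjacent blocks while they violate monotonicity.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, c2 = merged.pop()
            m1, c1 = merged.pop()
            merged.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    # Expand each block back to its original length.
    return np.concatenate([[m] * int(c) for m, c in merged])
```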

Huge congrats! I'm going to travel there too🏝️

22.01.2026 13:46 👍 1 🔁 0 💬 1 📌 0
Calibration workshop

Excited to see that a workshop on calibration will be organized at the upcoming AISTATS
calibration-workshop.github.io

12.01.2026 23:15 👍 6 🔁 0 💬 0 📌 0

Weekend reads:
Rudin et al. (2004) "The Dynamics of AdaBoost: Cyclic Behavior and Convergence of Margins" www.jmlr.org/papers/v5/ru...

Highly illuminating: it demonstrates that AdaBoost can yield stable limit cycles under some configurations of weak classifiers, and this dates back to 2004!

11.01.2026 00:27 👍 11 🔁 1 💬 0 📌 0
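For reference, the weight dynamics in question come from AdaBoost's reweighting step, sketched minimally below (my own code, not the paper's). One well-known property visible here: after the update, the just-chosen weak learner has weighted error exactly 1/2, which is what lets the dynamics revisit the same learners and cycle.

```python
import numpy as np

def adaboost_round(w, correct):
    """One AdaBoost reweighting step.

    w:       current example weights (non-negative, summing to 1)
    correct: boolean mask of examples the chosen weak learner gets right
    """
    eps = w[~correct].sum()                # weighted error, assumed in (0, 1/2)
    alpha = 0.5 * np.log((1 - eps) / eps)  # the learner's vote weight
    # Downweight correct examples, upweight mistakes, then renormalize.
    w = w * np.exp(np.where(correct, -alpha, alpha))
    return w / w.sum(), alpha
```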

Recently I've been getting more and more unsure about how many research projects I'm actively involved in; after writing them down, it turns out to be 10 in total! All of them excite me equally, but the only catch is the limited time 🤯

22.12.2025 23:39 👍 5 🔁 0 💬 0 📌 0
Accepted Papers | ALT 2026

The list of accepted papers at the Algorithmic Learning Theory Conference, a.k.a. #ALT2026, is out! h/t @thejonullman.bsky.social (PC chair).

algorithmiclearningtheory.org/alt2026/acce...

"ALT: topics so hot, it has to be held in Canada in February"

21.12.2025 22:24 👍 21 🔁 5 💬 0 📌 0

Totally agree, normally it takes a long time 😅

16.12.2025 12:05 👍 1 🔁 0 💬 0 📌 0

Thank you very much, learning languages is a lot of fun!

16.12.2025 08:29 👍 1 🔁 0 💬 1 📌 0

I passed A2 🥳

16.12.2025 07:28 👍 7 🔁 0 💬 1 📌 0

OpenReview opened the door to continuous, major revisions that nobody has time to check properly.
I think we should go back to short, one-page PDF replies to reviews. That would mean quicker decisions, so we'd actually have time to work on papers before resubmitting them.

12.12.2025 06:55 👍 19 🔁 7 💬 1 📌 0

My next #NeurIPS2025 poster is this evening, on the interaction between loss functions and gradient descent dynamics: see you soon at Poster No. 3007 😎

04.12.2025 17:19 👍 2 🔁 1 💬 0 📌 0

Huge thanks to those who dropped by the poster! It unexpectedly pleased me to see statisticians and control-theory people (who may not focus on loss functions that much, of course) impressed by the beautiful structures of convex analysis residing in our paper.

04.12.2025 17:17 👍 1 🔁 0 💬 0 📌 0