Please share: We invite paper submissions to the 29th International Conference on Artificial Intelligence and Statistics (#AISTATS 2026), at the intersection of AI, machine learning, statistics, and related areas. [1/3]
I'm not on TV yet, but I'm on YouTube, talking about research, ML, how I prepare talks, and the difference between Bayesian and frequentist statistics.
Many thanks to Charles Riou, who has already posted many interviews with ML & stats researchers on his YouTube channel "ML New Papers"!
1.5 yrs ago, we set out to answer a seemingly simple question: what are we *actually* getting out of RL in fine-tuning? I'm thrilled to share a pearl we found on the deepest dive of my PhD: the value of RL in RLHF seems to come from *generation-verification gaps*. Get ready to dive in:
Am I the only one who feels this is awful? If someone wants to remain anonymous, people should respect that...
Schrödinger's snack
yes!
Cool work! We recently found that Tsallis q=1.5 (alpha=1.5 in our notation) seems to work really well across several datasets for language modeling: arxiv.org/abs/2501.18537. It would be great to find a theoretical justification for why 1.5 seems to be a sweet spot.
Why does GD converge beyond [step size] < 2/[smoothness]? We investigate loss functions and identify their *separation margin* as a key factor. Surprisingly, the Rényi 2-entropy yields a super-fast rate T = Ω(ε^{-1/3})!
arxiv.org/abs/2502.04889
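A toy illustration of the phenomenon (my own sketch, not the paper's setting): on a single separable logistic example, the loss is 1/4-smooth, so the classical stability threshold is 2/L = 8; yet GD with a step size of 10 still drives the loss to zero.

```python
import math

def loss(w):
    # f(w) = log(1 + exp(-w)): logistic loss on one separable example.
    # f''(w) <= 1/4, so the smoothness constant is L = 1/4 and 2/L = 8.
    return math.log1p(math.exp(-w))

def grad(w):
    # f'(w) = -sigmoid(-w) = -1 / (1 + exp(w))
    return -1.0 / (1.0 + math.exp(w))

w, eta = 0.0, 10.0          # step size 10 exceeds the classical 2/L = 8
for _ in range(1000):
    w -= eta * grad(w)

print(loss(0.0), loss(w))   # the loss still decreases toward 0
```

The margin intuition: as w grows, the gradient shrinks exponentially, so even a "too large" step size never overshoots.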
Modern post-training is essentially distillation then RL. While reward hacking is well-known and feared, could there be such a thing as teacher hacking? Our latest paper confirms it. Fortunately, we also show how to mitigate it! The secret: diversity and onlineness! arxiv.org/abs/2502.02671
The reason is that the usual duality theory still works in the spaces of functions and probability measures, while it breaks down in the space of network parameters. We need to apply duality first and then parameterize, not the other way around!
The EBM paper below parameterizes dual variables as neural nets. This idea (which has been used in other contexts such as OT or GANs) is very powerful and may be *the* way duality can be useful for neural nets (or rather, neural nets can be useful for duality!).
Surprisingly, we found that we still obtain good performance even if we use the classical softargmax at inference time and our losses at train time. This means we can keep the inference code the same and change only the training code, which is useful e.g. for open-weight LMs.
We obtain good performance across several language modeling tasks with the alpha-divergence, for alpha=1.5.
The table below summarizes the link between some entropies and f-divergences.
2) We instantiate Fenchel-Young losses with f-divergence regularization. This generalizes the cross-entropy loss in two directions: i) by replacing the KL with f-divergences and ii) by allowing non-uniform prior class weights. Each loss is associated with an f-softargmax operator.
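As a sanity check (my own sketch, assuming the standard Fenchel-Young construction), the KL / Shannon special case recovers exactly the usual softmax cross-entropy: the loss is logsumexp(θ) − ⟨θ, y⟩, and its gradient is softargmax(θ) − y.

```python
import numpy as np

def fy_loss_shannon(theta, y):
    # Fenchel-Young loss generated by Shannon negentropy (the KL case):
    # L(theta; y) = logsumexp(theta) - <theta, y>, with y a one-hot target.
    m = theta.max()                              # stabilized logsumexp
    return m + np.log(np.sum(np.exp(theta - m))) - theta @ y

theta = np.array([2.0, 0.5, -1.0])               # arbitrary logits
y = np.array([1.0, 0.0, 0.0])                    # one-hot target, class 0

# Agrees with the standard cross-entropy -log softargmax(theta)[0]:
p = np.exp(theta - theta.max())
p /= p.sum()
print(fy_loss_shannon(theta, y), -np.log(p[0]))
```

Swapping the generating entropy (e.g. Tsallis with alpha=1.5) changes both the loss and the induced f-softargmax, while this KL instance stays the familiar baseline.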
Our approach naturally generalizes to Fenchel-Young losses, allowing us to obtain the first tractable approach for optimizing the sparsemax loss in general combinatorial spaces.
We propose a new joint formulation for learning the EBM and the log-partition, and a MCMC-free doubly stochastic optimization scheme with unbiased gradients.
Pushing this idea a little bit further, we can parameterize the log-partition as a separate neural network. This allows us to evaluate the *learned* log-partition on new data points.
By treating the log-partition not as a quantity to compute but as a variable to optimize, we no longer need it to be exact (in machine learning we never look for exact solutions to optimization problems!).
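To make this concrete, here is a toy numerical sketch of the idea (my own illustration, not the paper's algorithm): since log Z = min_c c + Z e^{-c} - 1, we can treat the log-partition as a scalar variable c and run stochastic gradient descent on it, using an unbiased importance-sampling estimate of Z from a uniform proposal. No MCMC needed; the energies are frozen here for illustration, whereas the paper learns them jointly.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.array([0.3, 1.7, 0.9, 2.5])        # fixed toy energies over 4 states
logZ_true = np.log(np.exp(-E).sum())      # exact log-partition

# log Z = min_c  c + Z * exp(-c) - 1, and E_q[exp(-E(x)) / q(x)] = Z
# gives an unbiased estimate of Z under the uniform proposal q(x) = 1/4.
c = 0.0
for _ in range(200):
    x = rng.integers(0, 4, size=1024)             # x ~ q, uniform
    Z_hat = np.mean(np.exp(-E[x]) / 0.25)         # unbiased estimate of Z
    c -= 0.1 * (1.0 - Z_hat * np.exp(-c))         # stochastic gradient in c

print(c, logZ_true)                               # c approaches log Z
```

At the optimum the gradient 1 − Z e^{-c} vanishes exactly at c = log Z, so an approximate solution of the optimization problem is an approximate log-partition, which is all we need.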
1) EBMs are generally challenging to train due to the partition function (normalization constant). At first, learning the partition function seems weird O_o But the log-partition exactly coincides with the Lagrange multiplier (dual variable) associated with equality constraints.
Really proud of these two companion papers by our team at GDM:
1) Joint Learning of Energy-based Models and their Partition Function
arxiv.org/abs/2501.18528
2) Loss Functions and Operators Generated by f-Divergences
arxiv.org/abs/2501.18537
A thread.
Sparser, better, faster, stronger
Former French minister of Education and "philosopher" Luc Ferry, who said a few years ago that maths was useless, has written a book on artificial intelligence.
Huge congrats!
We are organising the First International Conference on Probabilistic Numerics (ProbNum 2025) at EURECOM in southern France in Sep 2025. Topics: AI, ML, Stat, Sim, and Numerics. Reposts very much appreciated!
probnum25.github.io
Slides for a general introduction to the use of Optimal Transport methods in learning, with an emphasis on diffusion models, flow matching, training two-layer neural networks, and deep transformers. speakerdeck.com/gpeyre/optim...
MLSS is coming to Senegal!
AIMS Mbour, Senegal
June 23 - July 4, 2025
An international summer school to explore, collaborate, and deepen your understanding of machine learning in a unique and welcoming environment.
Details: mlss-senegal.github.io
But ensuring that your program supports complex numbers throughout could be a bit tedious.
Thrilled to be co-organizing NeurIPS in Paris at @sorbonne-universite.fr next week!
100 papers from NeurIPS 2024. Nearly twice as many as in 2023!
Over 300 registered participants.
A local and sustainable alternative to flying to Vancouver.
More info: neuripsinparis.github.io/neurips2024p...