Almost 5 years in the making... "Hyperparameter Optimization in Machine Learning" is finally out!
We designed this monograph to be self-contained, covering: Grid, Random & Quasi-random search, Bayesian & Multi-fidelity optimization, Gradient-based methods, Meta-learning.
arxiv.org/abs/2410.22854
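To give a taste of the simplest method covered, here is a minimal random-search sketch in Python; the search space and objective are hypothetical stand-ins, not taken from the monograph:

    import random

    # Hypothetical search space: each entry draws one hyperparameter.
    space = {
        "lr": lambda: 10 ** random.uniform(-5, -1),  # log-uniform learning rate
        "depth": lambda: random.randint(2, 8),
    }

    def objective(cfg):
        # Stand-in for a real validation metric to minimize.
        return (cfg["lr"] - 1e-3) ** 2 + 0.01 * cfg["depth"]

    # Random search: sample 100 configurations, keep the best.
    best = min(
        ({name: draw() for name, draw in space.items()} for _ in range(100)),
        key=objective,
    )
    print(best)

Random search often beats grid search when only a few hyperparameters actually matter, since it does not waste trials on a fixed lattice.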
17.12.2025 09:54 · 13 likes · 8 reposts · 0 replies · 0 quotes
While we wait for the OpenReview drama to settle, here is something that actually solves problems.
A definitive guide to HPO from my lab mates. Don't let your hyperparameters be a mystery (unlike your reviewers).
#MachineLearning #HPO
28.11.2025 17:35 · 2 likes · 0 reposts · 0 replies · 0 quotes
If you're curious about the intersection of statistical learning theory, sampling-based optimization, generalization in deep learning, and PAC-Bayesian analysis, check out our paper. We'd love to hear your thoughts, feedback, or questions. If you spot interesting connections to your work, let's chat!
14.11.2025 14:11 · 5 likes · 0 reposts · 0 replies · 0 quotes
😱 A second, equally striking finding: applying a single scalar calibration factor computed from the data makes the resulting upper bounds not only tighter for true labels but also better aligned with the test-error curve.
14.11.2025 14:11 · 2 likes · 0 reposts · 1 reply · 0 quotes
One surprising insight: generalization in the under-regularized low-temperature regime (β > n) is already signaled by small training errors in the over-regularized high-temperature regime.
14.11.2025 14:11 · 1 like · 0 reposts · 1 reply · 0 quotes
Empirical results on MNIST and CIFAR-10 show:
1) Non-trivial upper bounds on test error for both true and random labels
2) Meaningful distinction between structure-rich and structure-poor datasets
The figures show binary classification with FCNNs trained via SGLD on 8k MNIST images.
14.11.2025 14:11 · 1 like · 0 reposts · 1 reply · 0 quotes
We show that the Gibbs posterior can be effectively approximated via Langevin Monte Carlo (LMC) algorithms, such as Stochastic Gradient Langevin Dynamics (SGLD), and crucially:
Our bounds remain stable under this approximation (in both total variation and W₂ distance).
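A minimal sketch of how SGLD targets a Gibbs-type posterior, assuming a toy scalar risk (the quadratic loss, step size, and β value below are illustrative choices, not the paper's setup):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy Gibbs target: density proportional to exp(-beta * R(theta))
    # with empirical risk R(theta) = 0.5 * (theta - 1)^2 and flat prior.
    beta = 50.0

    def grad_R(theta):
        return theta - 1.0

    theta, eps = 0.0, 1e-3  # initial point and Langevin step size
    samples = []
    for _ in range(20_000):
        # Unadjusted Langevin / SGLD step: gradient descent on beta * R
        # plus Gaussian noise calibrated to sample from exp(-beta * R).
        theta += -eps * beta * grad_R(theta) + np.sqrt(2 * eps) * rng.normal()
        samples.append(theta)

    # Samples concentrate near the minimizer theta = 1,
    # with variance roughly 1/beta for this quadratic risk.
    print(np.mean(samples[5_000:]), np.var(samples[5_000:]))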
14.11.2025 14:11 · 1 like · 0 reposts · 1 reply · 0 quotes
Then comes our first contribution:
We derive high-probability, data-dependent bounds on the test error for hypotheses sampled from the Gibbs posterior (for the first time in the low-temperature regime β > n).
Sampling from the Gibbs posterior is, however, typically difficult.
14.11.2025 14:11 · 2 likes · 0 reposts · 1 reply · 0 quotes
This leads naturally to the Gibbs posterior, which assigns higher probabilities to hypotheses with smaller training errors (exponentially decaying with loss).
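For concreteness, the Gibbs posterior is usually written as follows (the notation here is ours, not necessarily the paper's):

    \[
      p_\beta(h \mid S) \;\propto\; p_0(h)\,\exp\!\bigl(-\beta\,\widehat{R}_S(h)\bigr),
    \]

where \(p_0\) is the prior, \(\widehat{R}_S(h)\) is the training error on the sample \(S\) of size \(n\), and \(\beta\) is the inverse temperature; \(\beta > n\) is the low-temperature regime discussed above.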
14.11.2025 14:11 · 1 like · 0 reposts · 1 reply · 0 quotes
To probe this question, we turn to randomized predictors rather than deterministic ones.
Here, predictors are sampled from a prescribed probability distribution, allowing us to apply PAC-Bayesian theory to study their generalization properties.
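As background, a classical PAC-Bayesian bound (Maurer's refinement of McAllester's result) states that for any prior \(\pi\), with probability at least \(1-\delta\), simultaneously for all posteriors \(\rho\):

    \[
      \mathbb{E}_{h\sim\rho}\bigl[R(h)\bigr] \;\le\; \mathbb{E}_{h\sim\rho}\bigl[\widehat{R}_S(h)\bigr]
      + \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln(2\sqrt{n}/\delta)}{2n}}.
    \]

The paper's data-dependent bounds for the Gibbs posterior refine this template in the low-temperature regime.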
14.11.2025 14:11 · 1 like · 0 reposts · 1 reply · 0 quotes
In the figure below, from the well-known paper of Zhang et al. (2017), the same model achieves nearly zero training error on both random and true labels. Therefore, the key to generalization must lie in the structure of the data itself.
arxiv.org/abs/1611.03530
14.11.2025 14:11 · 1 like · 0 reposts · 1 reply · 0 quotes
Generalization of Gibbs and Langevin Monte Carlo Algorithms in the Interpolation Regime
The paper provides data-dependent bounds on the test error of the Gibbs algorithm in the overparameterized interpolation regime, where low training errors are also obtained for impossible data, such a...
🧵 Thermodynamics Reveals the Generalization in the Interpolation Regime
In the realm of overparameterized NNs, one can achieve almost zero training error on any data, even on random labels, which nonetheless yield massive test errors.
So how can we tell when such a model truly generalizes?
arxiv.org/abs/2510.06028
14.11.2025 14:11 · 6 likes · 0 reposts · 1 reply · 0 quotes
📢 Upcoming Talk at Our Lab
We're excited to host Arthur Bizzi from EPFL for a research talk next week!
Title: Towards Neural Kolmogorov Equations: Parallelizable SDE Learning with Neural PDEs
📅 Date: November 19
⏰ Time: 16:00 CET
📍 Place: Galileo Sala, CHT @iitalk.bsky.social
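For context (our addition, not from the talk abstract): the Kolmogorov equation the title alludes to is the PDE satisfied by expectations of an SDE \( dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t \), namely, for \( u(t,x) = \mathbb{E}[f(X_t) \mid X_0 = x] \),

    \[
      \partial_t u \;=\; b \cdot \nabla u + \tfrac{1}{2}\,\mathrm{Tr}\bigl(\sigma\sigma^\top \nabla^2 u\bigr),
    \]

which links SDE learning to neural PDE solvers.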
14.11.2025 14:03 · 5 likes · 2 reposts · 1 reply · 0 quotes
This paves the way for more data-dependent generalization guarantees in dependent-data settings.
02.05.2025 18:35 · 1 like · 0 reposts · 0 replies · 0 quotes
Technique highlights:
🔹 Uses blocking methods (sketched below)
🔹 Captures fast-decaying correlations
🔹 Results in tight O(1/n) bounds when decorrelation is fast
Applications:
🔹 Covariance operator estimation
🔹 Learning transfer operators for stochastic processes
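A minimal sketch of the blocking idea (the AR(1) toy process and block length are our illustrative choices, not the paper's):

    import numpy as np

    def block_means(x, block_len):
        # Split a dependent sequence into non-overlapping blocks and
        # average within each block. When correlations decay fast,
        # well-separated blocks are nearly independent, so i.i.d.-style
        # concentration (e.g., empirical Bernstein) applies to the means.
        n_blocks = len(x) // block_len
        return x[: n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)

    # Toy AR(1) process with geometrically decaying correlations.
    rng = np.random.default_rng(0)
    x = np.zeros(10_000)
    for t in range(1, len(x)):
        x[t] = 0.5 * x[t - 1] + rng.normal()

    print(block_means(x, block_len=50).mean())  # estimate of the process mean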
02.05.2025 18:35 · 1 like · 0 reposts · 1 reply · 0 quotes
Our contribution:
We propose empirical Bernstein-type concentration bounds for Hilbert space-valued random variables arising from mixing processes.
Works for both stationary and non-stationary sequences.
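For reference, the classical i.i.d. scalar counterpart is the empirical Bernstein inequality of Maurer & Pontil (2009): for \(n\) i.i.d. variables in \([0,1]\) with sample mean \(\widehat{\mu}\) and sample variance \(V_n\), with probability at least \(1-\delta\),

    \[
      \mu \;\le\; \widehat{\mu} + \sqrt{\frac{2\,V_n \ln(2/\delta)}{n}} + \frac{7\ln(2/\delta)}{3(n-1)}.
    \]

The appeal is that the variance term is empirical, so the bound tightens automatically on low-variance data.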
02.05.2025 18:35 · 1 like · 0 reposts · 1 reply · 0 quotes
Challenge:
Standard i.i.d. assumptions fail in many learning tasks, especially those involving trajectory data (e.g., molecular dynamics, climate models).
Temporal dependence and slow mixing make it hard to get sharp generalization bounds.
02.05.2025 18:35 · 1 like · 0 reposts · 1 reply · 0 quotes
🚨 Poster at #AISTATS2025 tomorrow!
📍 Poster Session 1 #125
We present a new empirical Bernstein inequality for Hilbert space-valued random processes, relevant for dependent, even non-stationary data.
w/ Andreas Maurer, @vladimir-slk.bsky.social & M. Pontil
Paper: openreview.net/forum?id=a0E...
02.05.2025 18:35 · 3 likes · 0 reposts · 1 reply · 1 quote
1/ Over the past two years, our team CSML at IIT has made significant strides in the data-driven modeling of dynamical systems. Curious about how we use advanced operator-based techniques to tackle real-world challenges? Let's dive in! 🧵
15.01.2025 14:34 · 5 likes · 3 reposts · 1 reply · 0 quotes
An inspiring dive into understanding dynamical processes through 'The Operator Way.' A fascinating approach made accessible for everyone. Check it out!
15.01.2025 10:31 · 4 likes · 1 repost · 0 replies · 0 quotes
Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues
Linear Recurrent Neural Networks (LRNNs) such as Mamba, RWKV, GLA, mLSTM, and DeltaNet have emerged as efficient alternatives to Transformers in large language modeling, offering linear scaling with…
Excited to present
"Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues"
at the M3L workshop at #NeurIPS
https://buff.ly/3BlcD4y
If interested, you can attend the presentation on the 14th at 15:00, stop by the afternoon poster session, or DM me to discuss :)
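A toy illustration of the core idea, assuming the eigenvalue-range extension the title refers to: with transition values allowed in [-1, 1], even a one-dimensional linear recurrence can track parity, whereas this fails when transitions are confined to [0, 1].

    def parity_lrnn(bits):
        # Diagonal linear RNN with input-dependent transition a_t.
        # a_t = -1 (a negative eigenvalue) flips the state on every 1-bit;
        # restricting a_t to [0, 1] removes this flip and, with it,
        # the ability to track parity.
        h = 1.0
        for b in bits:
            a_t = -1.0 if b == 1 else 1.0
            h = a_t * h
        return 0 if h > 0 else 1  # sign of h encodes the parity

    assert parity_lrnn([1, 0, 1, 1]) == 1  # three ones -> odd parity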
10.12.2024 22:52 · 9 likes · 3 reposts · 0 replies · 0 quotes
In his book "The Nature of Statistical Learning Theory", V. Vapnik wrote:
"When solving a given problem, try to avoid a more general problem as an intermediate step."
12.12.2024 17:19 · 8 likes · 3 reposts · 1 reply · 0 quotes
Excited to share our lab's amazing contributions at NeurIPS this year! Check out our papers and stay inspired! #NeurIPS2024
10.12.2024 06:18 · 3 likes · 0 reposts · 0 replies · 0 quotes
Could you add me to the list?
04.12.2024 22:29 · 0 likes · 0 reposts · 0 replies · 0 quotes
Hi Gaspard! I wonder what you are currently working on with regard to sequence models and world models. I have similar interests, and in our lab we have worked on the intersection of these topics (bsky.app/profile/marc...).
27.11.2024 14:43 · 2 likes · 0 reposts · 1 reply · 0 quotes
Hi 👋 We're glad to be here on @bsky.app and looking forward to engaging with this community. But first, learn a little more about us...
#ELLISforEurope #AI #ML #CrossBorderCollab #PhD
21.11.2024 10:37 · 121 likes · 18 reposts · 3 replies · 1 quote