Tam Le

@ntamle

Assistant professor at Université Paris Cité - LPSM. Working on optimization and machine learning.

32
Followers
32
Following
9
Posts
23.11.2024
Joined

Latest posts by Tam Le @ntamle

I guess sometimes assumptions are not just about being realistic, but rather help us gain insight and understanding

21.10.2025 10:12 👍 1 🔁 0 💬 1 📌 0

I will be at ICCOPT (USC Los Angeles) next week to present this work. This will be on Wednesday July 23, alongside other nice talks on first-order methods: heavy ball ODE, optimal smoothing and nonsmoothness.

19.07.2025 10:49 👍 1 🔁 0 💬 0 📌 0

We studied both continuous-time and discretized dynamics. The paper also contains other results: on complexity in the convex case, on limits of limit points for discretized set-valued dynamics ...

05.06.2025 06:24 👍 1 🔁 0 💬 0 📌 0

For instance, if a critical point is flat, it may be more sensitive to errors, since the vanishing gradient cannot compensate for the perturbations. We thus obtain an estimate (rho) of the fluctuations around the critical set, depending on the coefficients theta and beta.

05.06.2025 06:23 👍 2 🔁 0 💬 1 📌 0

The idea of the analysis was to quantify how flat or sharp critical points are. So we relied on the KL inequality and a metric subregularity condition. These are satisfied for a large class of functions, the "definable" or semialgebraic ones (say, piecewise polynomial).

05.06.2025 06:19 👍 1 🔁 0 💬 1 📌 0
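For reference, a standard textbook form of these two conditions (not quoted from the paper): the KL inequality at a critical point $\bar x$ of $f$ asks for a concave desingularizing function $\varphi$ with $\varphi(0)=0$, $\varphi'>0$, such that

```latex
\[
  \varphi'\!\big(f(x)-f(\bar x)\big)\,
  \operatorname{dist}\!\big(0,\partial f(x)\big) \;\ge\; 1
\]
```

for all $x$ near $\bar x$ with $f(\bar x) < f(x) < f(\bar x) + \eta$; and metric subregularity of $\partial f$ for $0$ asks that, locally around the critical set,

```latex
\[
  \operatorname{dist}\!\big(x,(\partial f)^{-1}(0)\big)
  \;\le\; \kappa\,\operatorname{dist}\!\big(0,\partial f(x)\big).
\]
```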

🎉🎉🎉 Our paper "Inexact subgradient methods for semialgebraic functions" is accepted at Mathematical Programming!! This is joint work with Jérôme Bolte, Eric Moulines and Edouard Pauwels, where we study a subgradient method with errors for nonconvex nonsmooth functions.

arxiv.org/pdf/2404.19517

05.06.2025 06:13 👍 8 🔁 3 💬 3 📌 0
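To fix ideas, here is a minimal sketch of a subgradient method with errors on a toy nonsmooth semialgebraic function f(x, y) = |x| + y². The objective, step sizes, and noise level are illustrative choices, not the paper's exact setting or constants.

```python
import random

def f(x, y):
    # Toy nonsmooth semialgebraic objective.
    return abs(x) + y ** 2

def subgrad(x, y):
    # One element of the subdifferential of f at (x, y).
    sx = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
    return sx, 2.0 * y

def inexact_subgradient(x0, y0, steps=5000, noise=0.01, seed=0):
    # Subgradient descent where the method only sees a perturbed
    # subgradient g + e, with e a small random error.
    rng = random.Random(seed)
    x, y = x0, y0
    for k in range(1, steps + 1):
        alpha = 1.0 / k  # vanishing step size
        gx, gy = subgrad(x, y)
        x -= alpha * (gx + rng.gauss(0.0, noise))
        y -= alpha * (gy + rng.gauss(0.0, noise))
    return x, y

x, y = inexact_subgradient(2.0, -1.5)
```

With small errors, the iterates settle into a neighborhood of the critical set rather than converging exactly, which is the kind of fluctuation the paper's estimate quantifies.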

If it went down, then it must be a definable function, and I know you used a conservative gradient.

06.05.2025 15:31 👍 1 🔁 0 💬 0 📌 0

I find Coste definitely more accessible for learning the topic, but when it comes to finding/citing a specific property I prefer van den Dries!

29.03.2025 11:49 👍 2 🔁 0 💬 0 📌 0

Our paper "Universal generalization guarantees for Wasserstein distributionally robust models" with Jérôme Malick is accepted at ICLR 2025!!!! I'm so happy about this one, we really improved the presentation since the first submission. arxiv.org/abs/2402.11981

22.01.2025 17:42 👍 5 🔁 0 💬 1 📌 0