
Yann Traonmilin

@ytraonmilin

CNRS researcher at Institut de mathématiques de Bordeaux. yanntraonmilin.perso.math.cnrs.fr

42 Followers · 78 Following · 49 Posts · Joined 23.01.2025

Latest posts by Yann Traonmilin @ytraonmilin

a few colleagues so I thought I might as well formalize it and put it online.

02.03.2026 15:35 👍 0 🔁 0 💬 0 📌 0
A note on the convergence of RED algorithms under minimal hypotheses and open questions
In this note, we give a convergence result for a modified "regularization-by-denoising" (RED) algorithm under a restricted isometry condition on measurements and a restricted Lipschitz condition on the considered deep projective prior. This study leads to open questions about the convergence of RED algorithms.

We uploaded "A note on the convergence of RED algorithms under minimal hypotheses and open questions" hal.science/hal-05528679. There is a little result related to my latest work with deep projective priors that raises some questions. I found myself talking about this with
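A minimal sketch of a RED-style iteration as commonly formulated in the literature (not necessarily the modified algorithm of the note; the denoiser used here is a placeholder assumption):

```python
import numpy as np

def red_step(x, A, y, denoiser, eta=0.02, lam=0.5):
    """One generic RED gradient step: data-fidelity gradient plus
    the regularization-by-denoising term lam * (x - D(x))."""
    grad_fidelity = A.T @ (A @ x - y)
    grad_red = lam * (x - denoiser(x))
    return x - eta * (grad_fidelity + grad_red)

# Toy check: with the identity "denoiser" the RED term vanishes,
# so the iteration reduces to plain gradient descent on least squares.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true
x = np.zeros(10)
for _ in range(1000):
    x = red_step(x, A, y, denoiser=lambda z: z)
```

In practice `denoiser` would be a learned network; the convergence questions in the note concern precisely what conditions on it make such iterations behave.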

02.03.2026 15:35 👍 0 🔁 0 💬 1 📌 0
Calliopé

We are recruiting four positions connected to Machine Learning, Statistical Learning, and AI for Science in the Applied Mathematics department at École polytechnique. Join our vibrant community at IP Paris and Hi! Paris IA center. List below🧵 tinyurl.com/3jpw9t26

06.02.2026 07:56 👍 11 🔁 19 💬 1 📌 0

There is an Associate Professor position in CS at ENS Lyon, with potential integration in my team, starting in September 2026. DM me if interested!
Details at www.ens-lyon.fr/LIP/images/P...

05.02.2026 09:04 👍 6 🔁 9 💬 0 📌 1

"ChatGPT (or other genAI chatbot) is not a scientist or colleague with whom you can sound ideas and get critique and improve your research."

I am not sure this is true given the latest developments. What is your take on the recent LLM-assisted proofs of open math problems?

23.01.2026 08:29 👍 0 🔁 0 💬 1 📌 0

and stability of projected gradient descent through a normalized idempotent regularization of the prior during training. We show that such regularization improves stability of iterates.
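An idempotence penalty of this flavor can be sketched as follows (the normalization by the output norm is my assumption for illustration, not necessarily the paper's exact formulation):

```python
import numpy as np

def idempotence_penalty(P, x):
    """Penalize deviation of a learned operator P from idempotence:
    a true projection satisfies P(P(x)) = P(x), so the penalty is 0."""
    px = P(x)
    return np.linalg.norm(P(px) - px) / max(np.linalg.norm(px), 1e-12)

# A genuine linear projection incurs zero penalty...
proj = lambda z: np.array([z[0], 0.0])      # projection onto the first axis
pen_proj = idempotence_penalty(proj, np.array([3.0, 4.0]))

# ...while a non-projective operator (plain shrinkage) does not.
shrink = lambda z: 0.5 * z
pen_shrink = idempotence_penalty(shrink, np.array([3.0, 4.0]))
```

During training such a penalty would be averaged over sample points and added to the prior's learning loss.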

23.01.2026 08:05 👍 0 🔁 0 💬 0 📌 0

(I really like the smaller, more dedicated format of this conference!)
In this paper, we explore several stability trade-offs for (deep) projective priors with links to sparse recovery theory. We explore robustness to outliers

23.01.2026 08:05 👍 0 🔁 0 💬 1 📌 0

Good week! Our paper with A. Joundi and J.-F. Aujol "From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent" hal.science/hal-05401157v1 has been accepted to the Conference on Parsimony and Learning

23.01.2026 08:05 👍 2 🔁 0 💬 1 📌 0

as projected gradient descent with time-varying projections.
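The interpretation described in this thread can be sketched as a projected gradient iteration with a time-varying projection (the notation below is an assumption for illustration, not taken from the paper):

```latex
% Projected gradient descent with a time-varying projection (sketch):
x_{k+1} = P_{t_k}\!\left( x_k - \eta\, A^\top (A x_k - y) \right)
% where P_{t_k} denotes the projection induced at diffusion time t_k by the
% learned score s_\theta, e.g. via a Tweedie-type denoiser
% P_t(x) \approx x + \sigma_t^2\, s_\theta(x, t).
```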

22.01.2026 13:26 👍 1 🔁 0 💬 0 📌 0

...we give explicit identifiability and convergence guarantees for solving inverse problems with diffusion priors in a deterministic setting. To do this, we link score functions with projections on target model sets and interpret the implicit prior algorithm...

22.01.2026 13:26 👍 1 🔁 0 💬 1 📌 0
A Recovery Theory for Diffusion Priors: Deterministic Analysis of the Implicit Prior Algorithm
Recovering high-dimensional signals from corrupted measurements is a central challenge in inverse problems. Recent advances in generative diffusion models have shown remarkable empirical success in pr...

Our work with O. Leong "A Recovery Theory for Diffusion Priors: Deterministic Analysis of the Implicit Prior Algorithm" has been accepted to AISTATS2026.

arxiv.org/abs/2509.20511

In this work,...

22.01.2026 13:26 👍 1 🔁 0 💬 1 📌 0
Workshop sur les mathématiques de l'IA - Sciencesconf.org
The workshop will consist of talks selected from a call for contributions (one-page abstract). A poster session will be organised if needed, depending on the number of contributions.

We organise a workshop on the maths of AI in Bordeaux, November 4th-6th. Don't hesitate! The more, the merrier! (within the capacity of the room :) )

Dates & infos :
wmathsia2026.sciencesconf.org

19.01.2026 08:29 👍 2 🔁 2 💬 0 📌 0

Save the date: we are organising a workshop on the maths of AI in Bordeaux, November 4-6. Don't hesitate! The more, the merrier! (within the limits of the venue's capacity, haha)

Dates & infos :
wmathsia2026.sciencesconf.org

19.01.2026 08:29 👍 13 🔁 6 💬 1 📌 0

Registration for the MODE 2026 conference in Nice is now open. It will take place from March 18 to 20 at the Hôtel Saint-Paul.

Registration is open until March 1 (late fee after February 9). The deadline for submitting a contribution is **January 15**.

15.12.2025 08:11 👍 5 🔁 7 💬 1 📌 1

In approaches using deep projective priors, we link a key geometrical attribute, the "orthogonality of the projection", with identifiability and convergence rate.
Using this attribute to regularize the learning of such priors improves stability and robustness for ill-posed imaging problems.
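One standard way to formalize an "orthogonal projection" onto a model set (a sketch; the exact definition used is in the preprint):

```latex
% P is an orthogonal projection onto the model set \Sigma when it picks a
% closest point:
P(x) \in \operatorname*{arg\,min}_{u \in \Sigma} \; \|x - u\|_2,
% equivalently (to first order, for smooth \Sigma) the residual x - P(x)
% is orthogonal to the tangent space of \Sigma at P(x).
```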

09.12.2025 15:55 👍 0 🔁 0 💬 0 📌 0

New (updated) preprint: Stochastic Orthogonal Regularization for deep projective priors, A. Joundi, YT, A. Newson. hal.science/hal-05069394v2

Geometry x deep priors x inverse problems = 👍

09.12.2025 15:55 👍 1 🔁 0 💬 1 📌 0
MALAGA: Reinventing the Theory of Machine Learning on Large Graphs (ERC StG)

I have several offers for Master internships / PhDs on graph ML funded by ERC MALAGA for 2026. Don't hesitate to contact me to apply!

All info here: nkeriven.github.io/malaga/

06.11.2025 13:56 👍 8 🔁 8 💬 0 📌 1

Fun Friday night problem: consider a random variable X with distribution p and a random variable Y such that X+Y has distribution p. What can you say about the distribution of Y?

I would be interested if anyone has a reference to a complete solution to this.
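If X and Y are assumed independent (the post does not say), characteristic functions give a quick partial answer:

```latex
\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t) = \varphi_X(t)
\quad\Rightarrow\quad \varphi_Y(t) = 1 \ \text{whenever}\ \varphi_X(t) \neq 0.
% If \{\varphi_X \neq 0\} is dense (e.g. X Gaussian, or any \varphi_X with only
% isolated zeros), continuity of \varphi_Y forces \varphi_Y \equiv 1, hence
% Y = 0 almost surely. Nontrivial Y therefore requires \varphi_X to vanish on
% a set with nonempty interior (Polya-type characteristic functions).
% Without independence, many Y work: if p is symmetric, the dependent choice
% Y = -2X gives X + Y = -X \sim p.
```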

24.10.2025 15:09 👍 0 🔁 0 💬 1 📌 0

To be completely honest, "is drawn with a ruler" is in the lesson. I don't know whether the ruler uses the axiom of choice, though.

16.10.2025 08:18 👍 1 🔁 0 💬 0 📌 0

Math in elementary school, definitions:
aligned points: points lying on a line
line: an infinity of aligned points


🤔 I'm still stuck in a loop 🤣

What would be a good definition of a line for a 7-year-old?

16.10.2025 07:13 👍 2 🔁 0 💬 2 📌 0
Journées SMAI-MODE 2026 - Sciencesconf.org
From March 18 to 20, 2026, Université Côte d'Azur hosts the SMAI-MODE days, the biennial conference of the MODE group of the Société de Mathématiques Appliquées et Industrielles (SMAI).

The SMAI-MODE 2026 conference will take place in Nice from March 18 to 20, 2026, preceded by a mini-course on the 16th and 17th.

You can submit your contribution proposal at mode2026.sciencesconf.org

Registration opens December 1 and closes March 1 (with a late fee from February 1).

14.10.2025 11:28 👍 4 🔁 4 💬 1 📌 0

a good basis to understand what is at play when we try to improve PTQ, e.g. by cross-layer equalization or adaptive quantization.

10.10.2025 07:37 👍 0 🔁 0 💬 0 📌 0

Preprint: improved deterministic error bounds for post-training quantization for CNN architectures.
"On the impact of the parametrization of deep convolutional neural networks on post-training quantization", Samy Houache, J.-F. Aujol, Y.T.

hal.science/hal-04922698/
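For context, a minimal sketch of generic symmetric uniform PTQ of a weight tensor (not the paper's specific scheme or bounds; bit-width and rounding choices are assumptions):

```python
import numpy as np

def quantize_dequantize(w, bits=8):
    """Symmetric per-tensor uniform post-training quantization:
    round weights to a signed (bits)-bit grid, then map back to floats."""
    levels = 2 ** (bits - 1) - 1                 # e.g. 127 for int8
    scale = np.max(np.abs(w)) / levels
    q = np.clip(np.round(w / scale), -levels - 1, levels)
    return q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
w_q = quantize_dequantize(w, bits=8)

# Deterministic worst-case elementwise error: half a quantization step.
max_err = np.max(np.abs(w - w_q))
```

Deterministic bounds of the kind studied in the preprint then propagate such per-layer errors through the network, which is where the parametrization of the architecture matters.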

10.10.2025 07:37 👍 2 🔁 0 💬 1 📌 0

However, it is always possible to study algorithms as tools to minimize a recovery error, bypassing this 2-step process (which I have tried to do lately).

I wonder if such an approach is possible for the pure learning problem.

08.10.2025 07:20 👍 0 🔁 0 💬 0 📌 0

So it makes sense to study optimization of the loss.

There is a parallel in inverse problems: set up a function to minimize, guarantee convergence AND that minimizers identify the right objects (that last part being often overlooked).

08.10.2025 07:20 👍 1 🔁 0 💬 1 📌 0

"But more importantly, we don’t care if you can find a global minimum of the training error. We care if you can find a global minimum of the test error."

I agree but to be fair to optimization folks, this is generally done by setting up a loss that should guarantee low test error.

08.10.2025 07:20 👍 0 🔁 0 💬 1 📌 0
Vacancy — PhD Position on Mathematical Foundations for Explainable AI
Are you highly motivated to do PhD research in mathematical machine learning, with special emphasis on mathematical foundations for explainable AI? If yes, the Korteweg-de Vries Institute for Mathemat...

I have 2 open PhD positions on Mathematical Foundations for Explainable AI:

Position 1: werkenbij.uva.nl/en/vacancies... (apply by October 13, 2025)

Position 2: applications via the Ellis PhD Program: ellis.eu/news/ellis-p... by Oct. 31.

Both positions are equivalent (except for starting dates)

07.10.2025 11:48 👍 7 🔁 5 💬 0 📌 0
Universal Cascades
A little approximation theory goes a long way

I wrote what are likely some very outdated thoughts on approximation theory. All of this was known in 1990. Which new perspectives am I missing?

30.09.2025 14:31 👍 4 🔁 1 💬 0 📌 0
Why science needs outsiders - Works in Progress Magazine
Science has forgotten that the greatest breakthroughs often come from outsiders who are able to take a fresh perspective.

worksinprogress.co/issue/why-sc...

22.09.2025 07:42 👍 0 🔁 0 💬 0 📌 0
Stéphane Mallat, a pioneer bridging mathematics and computer science

#CNRSnews 🗞️ The CNRS mathematician and computer scientist Stéphane Mallat receives the CNRS 2025 Gold Medal for his achievements in applied maths and signal processing, including the JPEG 2000 image compression standard and mathematical foundations of AI.

11.09.2025 12:02 👍 41 🔁 13 💬 0 📌 1