We uploaded "A note on the convergence of RED algorithms under minimal hypotheses and open questions" hal.science/hal-05528679. There is a little result, related to my latest work with deep projective priors, that raises some questions. I found myself talking about this with a few colleagues, so I thought I might as well formalize it and put it online.
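For context, a minimal reminder of the gradient-style RED update as it is usually written in the literature; the note may analyze a different or more general scheme, and the notation below is mine.

```latex
% Standard gradient-style RED iteration (context only; notation is mine):
% f is the data-fidelity term and D a denoiser.
\[
x_{k+1} = x_k - \tau \left( \nabla f(x_k) + \lambda \, (x_k - D(x_k)) \right)
\]
```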
We are recruiting for four positions connected to Machine Learning, Statistical Learning, and AI for Science in the Applied Mathematics department at École polytechnique. Join our vibrant community at IP Paris and the Hi! Paris AI center. List below 🧵 tinyurl.com/3jpw9t26
There is an Associate Professor position in CS at ENS Lyon, with potential integration in my team, starting in September 2026: DM me if interested!
Details at www.ens-lyon.fr/LIP/images/P...
"ChatGPT (or other genAI chatbot) is not a scientist or colleague with whom you can sound ideas and get critique and improve your research."
I am not sure this is true with the latest developments; what is your take on the recent LLM-assisted proofs of open math problems?
Good week! Our paper with A. Joundi and J.-F. Aujol "From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent" hal.science/hal-05401157v1 has been accepted to the Conference on Parsimony and Learning.
↓
In this paper, we explore several stability trade-offs for (deep) projective priors with links to sparse recovery theory. We explore robustness to outliers and stability of projected gradient descent through a normalized idempotent regularization of the prior during training. We show that such regularization improves the stability of the iterates.
(I really like the smaller, more dedicated format of this conference!)
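To make the idea concrete, here is a minimal PyTorch-style sketch of what a normalized idempotence penalty on a learned projection could look like; the function names and the exact normalization are my assumptions, not the paper's code.

```python
import torch

def idempotence_penalty(P, x, eps=1e-8):
    """Hypothetical normalized idempotence penalty: pushes P(P(x)) towards P(x),
    i.e. encourages the learned prior P to behave like a projection."""
    px = P(x)                                  # first application of the prior
    ppx = P(px)                                # second application: ideally a fixed point
    num = (ppx - px).flatten(1).norm(dim=1)    # deviation from idempotence
    den = px.flatten(1).norm(dim=1) + eps      # normalization for scale invariance
    return (num / den).mean()

# sketch of the training objective: reconstruction + idempotence regularization
# loss = recon_loss(P(x_noisy), x_clean) + lam * idempotence_penalty(P, x_noisy)
```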
↓
Our work with O. Leong "A Recovery Theory for Diffusion Priors: Deterministic Analysis of the Implicit Prior Algorithm" has been accepted to AISTATS 2026.
arxiv.org/abs/2509.20511
In this work, we give explicit identifiability and convergence guarantees for solving inverse problems with diffusion priors in a deterministic setting. To do this, we link score functions with projections on target model sets and interpret the implicit prior algorithm as projected gradient descent with time-varying projections.
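Schematically (my paraphrase with generic notation: A the forward operator, τ a step size, P_k the time-varying projection induced by the score at noise level σ_k), the iteration can be written as:

```latex
% Projected gradient descent with time-varying projections (my notation):
\[
x_{k+1} = P_k\big( x_k - \tau A^\ast (A x_k - y) \big),
\qquad
P_k(x) \approx x + \sigma_k^2 \, s_\theta(x, \sigma_k),
\]
% where the second (Tweedie-type) relation is one common way to turn a
% score s_\theta into an approximate projection onto the model set.
```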
↓
We are organising a workshop on the maths of AI in Bordeaux, November 4th-6th. Don't hesitate! The more, the merrier! (within the capacity of the room :) )
Dates & info:
wmathsia2026.sciencesconf.org
Registration for the MODE 2026 days in Nice is now open. They will take place March 18-20 at the Hôtel Saint-Paul.
Registration is open until March 1 (late fee after February 9). The deadline for submitting a contribution is **January 15**.
In approaches using deep projective priors, we link a key geometrical attribute, the "orthogonality of the projection", with identifiability and convergence rates.
Using this attribute to regularize the learning of such priors improves stability and robustness for ill-posed imaging problems.
New (updated) preprint: Stochastic Orthogonal Regularization for deep projective priors, A. Joundi, YT, A. Newson. hal.science/hal-05069394v2
Geometry x deep priors x inverse problems = 👍
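For the curious: one standard way to make the "orthogonality of the projection" precise (my formulation; the paper's definition may differ) is to ask that P returns a closest point of the model set Σ:

```latex
\[
P(x) \in \operatorname*{arg\,min}_{z \in \Sigma} \|x - z\|_2,
\qquad \text{so that} \quad \|x - P(x)\| = \operatorname{dist}(x, \Sigma).
\]
% A learned P in general only approximates this property, which is what
% regularizing the training is meant to improve.
```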
↓
I have several offers for Master internships / PhDs on graph ML, funded by the ERC project MALAGA, for 2026. Don't hesitate to contact me to apply!
All the info here: nkeriven.github.io/malaga/
Fun Friday night problem: consider a random variable X with distribution p and a random variable Y such that X+Y also has distribution p. What can you say about the distribution of Y?
I'd be interested if anyone has a reference to a complete solution to this.
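A sketch of the independent case (independence is my added assumption; without it there are many nontrivial Y, e.g. Y = -2X when p is symmetric): with characteristic functions,

```latex
\[
\varphi_X \, \varphi_Y = \varphi_{X+Y} = \varphi_X
\;\Longrightarrow\;
\varphi_Y(t) = 1 \quad \text{whenever } \varphi_X(t) \neq 0.
\]
% By continuity, \varphi_X \neq 0 on a neighborhood of 0, and \varphi_Y = 1
% on an interval forces Y = 0 almost surely (take two incommensurable t's).
```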
To be completely honest, "is drawn with a ruler" is in the lesson. I don't know whether the ruler uses the axiom of choice, though.
Math in elementary school, definitions:
aligned points: points lying on a line
line: an infinity of aligned points
🤔 I'm still stuck in a loop 🤣
What would be a good definition of a line for a 7-year-old?
The SMAI-MODE 2026 days will take place in Nice, March 18-20, 2026, preceded by a mini-course on the 16th and 17th.
You can submit your contribution proposal at mode2026.sciencesconf.org
Registration opens December 1st and closes March 1st (with an increased rate from February 1st).
Preprint: improved deterministic error bounds for post-training quantization for CNN architectures.
"On the impact of the parametrization of deep convolutional neural networks on post-training quantization", Samy Houache, J.-F. Aujol, Y.T.
hal.science/hal-04922698/
↓
A good basis to understand what is at play when we try to improve PTQ, e.g. by cross-layer equalization or adaptive quantization.
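As a toy illustration of the plain uniform PTQ baseline that such bounds start from (a generic sketch, not the paper's scheme; the function names are mine):

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Uniform symmetric post-training quantization of a weight tensor:
    scale to an integer grid, round, rescale."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax          # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale, scale

# the elementwise error is at most scale/2, which is why reparametrizations
# that reduce the dynamic range (e.g. cross-layer equalization) help PTQ
w = np.random.randn(64, 3, 3, 3).astype(np.float32)   # toy conv kernel
w_q, scale = quantize_weights(w)
assert np.abs(w - w_q).max() <= scale / 2 + 1e-6
```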
↓
"But more importantly, we don't care if you can find a global minimum of the training error. We care if you can find a global minimum of the test error."
I agree, but to be fair to optimization folks, this is generally done by setting up a loss that should guarantee low test error.
↓
So it makes sense to study optimization of the loss. There is a parallel in inverse problems: set up a function to minimize, then guarantee convergence AND that minimizers identify the right objects (that last part being often overlooked).
↓
However, it is always possible to try to study algorithms as tools to minimize a recovery error, to bypass this two-step process (which I have tried to do lately). I wonder whether such an approach is possible for the pure learning problem.
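The two-step argument compresses into a triangle inequality (my summary): with x_k the iterates, x* a minimizer and x_0 the true object,

```latex
\[
\|x_k - x_0\| \;\le\;
\underbrace{\|x_k - x^\ast\|}_{\text{convergence of the algorithm}}
+
\underbrace{\|x^\ast - x_0\|}_{\text{identifiability of the minimizer}}.
\]
```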
↓
I have 2 open PhD positions on Mathematical Foundations for Explainable AI:
Position 1: werkenbij.uva.nl/en/vacancies... (apply by October 13, 2025)
Position 2: applications via the Ellis PhD Program: ellis.eu/news/ellis-p... by Oct. 31.
Both positions are equivalent (except for starting dates)
I wrote what are likely some very outdated thoughts on approximation theory. All of this was known in 1990. Which new perspectives am I missing?
#CNRSnews 🗞️ The CNRS mathematician and computer scientist Stéphane Mallat receives the CNRS 2025 Gold Medal for his achievements in applied maths and signal processing, including the JPEG 2000 image compression standard and mathematical foundations of AI.