Honored to receive a 2026 Sloan Research Fellowship in Mathematics. This wouldn't be possible without my entire research group at Princeton, and I'm grateful to the colleagues who supported my research. #SloanFellow
Proud to celebrate the graduation of my PhD student Vinit Ranjan, who defended his thesis this month: "Beyond the Worst Case: Verification of First-Order Methods for Parametric Optimization Problems" 🎓 Congratulations Dr. Ranjan!
Wishing everyone happy holidays! 🎉 Feeling lucky to work with such a fantastic group of students. Here's to good research, great company, and Neapolitan pizza 🍕
New preprint! 🎉 Data-driven convergence guarantees for first-order methods via PEP + Wasserstein DRO.
Less pessimistic probabilistic rates that reflect how your solver actually behaves 🎯
📄 arxiv.org/abs/2511.17834
💻 github.com/stellatogrp/dro_pep
w/ Jisun Park & Vinit Ranjan #optimization #fom
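The paper's PEP + Wasserstein DRO machinery aside, here is a toy NumPy sketch of the underlying idea: worst-case rates can be pessimistic relative to how a solver behaves on your actual problem distribution. The setup (gradient descent on random diagonal quadratics, all parameters chosen for illustration) is mine, not the paper's:

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T diag(lam) x with step h contracts each
# coordinate by (1 - h * lam_i). The worst-case rate over lam in [mu, L] is
# max(|1 - h*mu|, |1 - h*L|); instances drawn away from the extremes do better.
rng = np.random.default_rng(0)
mu, L = 1.0, 10.0
h = 2.0 / (mu + L)                       # optimal constant step for [mu, L]
worst_case_rate = max(abs(1 - h * mu), abs(1 - h * L))   # = 9/11, about 0.818

observed = []
for _ in range(100):
    lam = rng.uniform(2.0, 8.0, size=5)  # sampled instances avoid the extremes
    observed.append(np.max(np.abs(1 - h * lam)))  # per-step contraction factor
empirical_rate = float(np.mean(observed))

# The data-driven estimate is strictly less pessimistic than the worst case.
assert empirical_rate < worst_case_rate
```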
Autonomous spacecraft are still a far-off ideal 🚀
But Ryne Beeson and @stella.to are taking the first steps in that direction by finding the optimal trajectories to a given planet or moon with the help of machine learning: ai.princeton.edu/news/2025/ai...
📄 New arXiv Paper
Title: Data-driven Analysis of First-Order Methods via Distributionally Robust Optimization
Authors: Jisun Park, Vinit Ranjan, Bartolomeo Stellato
Read more: https://arxiv.org/abs/2511.17834
📢 New in JMLR (w @rajivsambharya.bsky.social)! 🎉 Data-driven guarantees for classical & learned optimizers via sample bounds + PAC-Bayes theory.
📄 jmlr.org/papers/v26/2...
💻 github.com/stellatogrp/...
📢 Our paper "Verification of First-Order Methods for Parametric Quadratic Optimization" with my student Vinit Ranjan (vinitranjan1.github.io/) has been accepted to Mathematical Programming! 🎉
📄 DOI: doi.org/10.1007/s10107-025-02261-w
📄 arXiv: arxiv.org/pdf/2403.033...
💻 Code: github.com/stellatogrp/...
I'm happy to share that I'll be spending the fall semester at Princeton as a visiting student in the Department of Operations Research and Financial Engineering (ORFE), working with @stellato.io funded through the WASP program. If you're in the area and would like to connect, feel free to reach out.
📄 Updated arXiv Paper
Title: Exact Verification of First-Order Methods via Mixed-Integer Linear Programming
Authors: Vinit Ranjan, Jisun Park, Stefano Gualandi, Andrea Lodi, Bartolomeo Stellato
Read more: https://arxiv.org/abs/2412.11330
📄 New arXiv Paper
Title: Data Compression for Fast Online Stochastic Optimization
Authors: Irina Wang, Marta Fochesato, Bartolomeo Stellato
Read more: https://arxiv.org/abs/2504.08097
🎤 Gave a talk at the EURO @euroonline.bsky.social Seminar Series on "Data-Driven Algorithm Design and Verification for Parametric Convex Optimization"!
🎥 Recording: https://euroorml.euro-online.org/
Big thanks to Dolores Romero Morales for the invitation! 🙏 #MachineLearning #Optimization #ORMS
The new season of the Robust Optimization Webinar (#ROW) starts this week. Our first presentation will take place this Friday, January 24, at 15:00 (CET).
Speaker: Peyman Mohajerin Esfahani (TU Delft)
Title: Inverse Optimization: The Role of Convexity in Learning
📄 New arXiv Paper
Title: Exact Verification of First-Order Methods via Mixed-Integer Linear Programming
Authors: Vinit Ranjan, Stefano Gualandi, Andrea Lodi, Bartolomeo Stellato
Read more: https://arxiv.org/abs/2412.11330
What happens to the hyperparameters of learned optimizers? Turns out, we learn long steps! 🚀
👉 Check out our latest work with @rajivsambharya.bsky.social!
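Their learned optimizer aside, a classical illustration of why long steps can help: on a quadratic with eigenvalues in [mu, L], a two-step Chebyshev schedule takes one step larger than the "safe" 2/L threshold yet beats the best constant step. A toy NumPy sketch under those assumptions (not the paper's method):

```python
import numpy as np

mu, L = 1.0, 10.0
lam = np.array([mu, L])                 # eigenvalues of a diagonal quadratic
x_const = np.ones(2)
x_cheb = np.ones(2)

h_const = 2.0 / (mu + L)                # best constant step size
# Two-step Chebyshev schedule: inverse Chebyshev nodes on [mu, L]
nodes = (L + mu) / 2 + (L - mu) / 2 * np.cos((2 * np.arange(1, 3) - 1) * np.pi / 4)
h_cheb = 1.0 / nodes                    # one of the two steps exceeds 2/L

for k in range(20):                     # gradient descent: x <- (1 - h*lam) * x
    x_const = (1 - h_const * lam) * x_const
    x_cheb = (1 - h_cheb[k % 2] * lam) * x_cheb

assert h_cheb.max() > 2.0 / L           # the schedule takes a long step...
assert np.linalg.norm(x_cheb) < np.linalg.norm(x_const)  # ...yet converges faster
```

The long step overshoots on the large eigenvalue but the paired short step cleans it up; over a full cycle the schedule contracts faster than any constant step can.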
Clustering is a powerful tool for decision-making under uncertainty!
Work w/ my students Irina Wang (lead) and Cole Becker, in collab. w/ Bart Van Parys
🧵 (7/7)
We have several examples in the paper. Here is a sparse portfolio optimization one. Clustering barely affects the solution objective. Speedups are more than 3 orders of magnitude. 🧵 (6/7)
By varying the number of clusters K, our method bridges Robust and Distributionally Robust Optimization! We also derive theoretical bounds that 1) show how to adjust the Wasserstein ball radius to compensate for clustering, and 2) exactly quantify the effect of clustering 🧵 (5/7)
In Mean Robust Optimization, we define an uncertainty set around the cluster centroids, with weights proportional to the number of samples in each cluster. 🧵 (4/7)
Our procedure: we first cluster N data points into K clusters. Then, we solve the Mean Robust Optimization problem. 🧵 (3/7)
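A minimal NumPy sketch of those two steps for a linear loss u @ x with an ℓ2 uncertainty ball of radius eps around each centroid, where the worst case has the closed form centroid @ x + eps * ||x||. The tiny Lloyd's k-means and every parameter here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(200, 3))           # N = 200 uncertainty samples

# Step 1: cluster the N samples into K clusters (tiny Lloyd's k-means).
K = 4
centroids = U[rng.choice(len(U), K, replace=False)]
for _ in range(20):
    labels = np.argmin(((U[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.stack([U[labels == k].mean(0) if np.any(labels == k)
                          else centroids[k] for k in range(K)])
weights = np.bincount(labels, minlength=K) / len(U)   # n_k / N per cluster

# Step 2: mean robust objective -- weighted worst case over a ball of radius
# eps around each centroid; for loss u @ x it is centroid @ x + eps * ||x||.
def mean_robust_objective(x, eps=0.1):
    x = np.asarray(x, dtype=float)
    return float(weights @ (centroids @ x + eps * np.linalg.norm(x)))
```

Solving the full problem means minimizing this objective over x subject to the decision constraints; with K much smaller than N, that problem is far cheaper than one robust term per sample.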
Robust optimization is tractable but, often, very conservative. Wasserstein Distributionally Robust Optimization is less conservative but, often, computationally expensive. How can we bridge the two? 🧵 (2/7)
Our paper "Mean robust optimization" has been accepted to Mathematical Programming: https://buff.ly/3B3VpIG
๐ฐ Arxiv (longer version): https://buff.ly/3CT4aWD
๐ฉโ๐ป Code: https://buff.ly/3ATqAXh
w/ Irina Wang, Cole Becker, and Bart van Parys
A thread ๐งต (1/7)๐
Cool! Thanks for creating this. Could you please add me? :)
arxiv.org/abs/2411.17668 Our postdoc Zihan slays another COLT open problem! proceedings.mlr.press/v247/kornows...
📄 New arXiv Paper
Title: Learning Algorithm Hyperparameters for Fast Parametric Convex Optimization
Authors: Rajiv Sambharya, Bartolomeo Stellato
Read more: http://arxiv.org/abs/2411.15717v1
🎉🎉🎉
Congratulations @atlaswang.bsky.social :)
We are very excited to announce that the 2025 INFORMS Computing Society (ICS) Conference will take place March 14-16, 2025, in Toronto:
sites.google.com/view/ics-2025
Submissions for contributed talks are due on December 23.
We invite talks that showcase the dynamic interface of CS, AI & #ORMS.
New #arxiv bot for #optimization and #control! 🤖
bsky.app/profile/arxi...
Thanks @tmaehara.bsky.social It looks great! I will let you know if I find anything wrong, but from a brief look at the first post it looks exactly like what one would expect. Thanks again!