I will be presenting on this work at @informs.bsky.social
in Atlanta on Tuesday, October 28, 4:15-5:30 pm, in the Machine Learning and Optimization session (Building B, Level 2, Room B208)!
We learned acceleration algorithms for fast parametric convex optimization. Only 10 training instances are used for each example, and robustness is guaranteed with PEP (performance estimation problem)! Joint work w/ Jinho Bok, Nik Matni, and George Pappas!
New in JMLR (w/ @rajivsambharya.bsky.social)! Data-driven guarantees for classical & learned optimizers via sample bounds + PAC-Bayes theory.
paper: jmlr.org/papers/v26/2...
code: github.com/stellatogrp/...
We learned the hyperparameters to accelerate algorithms over a family of problems. It turns out that we only need 10 training instances in each example, and we learn long steps for (prox) gradient descent! Check out this work with @stellato.io
paper: arxiv.org/pdf/2411.15717
code: github.com/stellatogrp/...
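To give a flavor of the idea (not the paper's actual method), here is a minimal sketch of tuning a gradient-descent step size from a handful of training instances. It assumes a toy family of least-squares problems min_x 0.5*||Ax - b||^2 sharing a matrix A and differing in b; the paper learns richer per-iteration ("long") step schedules, while this sketch just grid-searches a single shared step.

```python
# Minimal sketch: pick a gradient-descent step size by minimizing the
# average final loss over a few training instances of a problem family.
# Toy setup (our assumption): least-squares problems sharing A, varying b.
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 20
A = rng.standard_normal((m, n))
L = np.linalg.norm(A, 2) ** 2  # smoothness constant of 0.5*||Ax-b||^2

def run_gd(b, step, iters=20):
    """Run fixed-step gradient descent from zero; return the final loss."""
    x = np.zeros(n)
    for _ in range(iters):
        x -= step * A.T @ (A @ x - b)
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

# Only 10 training instances, mirroring the post's claim.
train_bs = [rng.standard_normal(m) for _ in range(10)]

# "Learn" the step over a grid; the grid deliberately extends past the
# classical safe limit 2/L, since tuned steps can be larger than theory
# suggests for a fixed problem family.
grid = np.linspace(0.1, 3.0, 50) / L
best = min(grid, key=lambda s: np.mean([run_gd(b, s) for b in train_bs]))

# Evaluate on a fresh instance from the same family.
test_b = rng.standard_normal(m)
print("learned step (in units of 1/L):", best * L)
print("test loss:", run_gd(test_b, best))
```

By construction, the learned step achieves the lowest average training loss over the grid; the interesting empirical question, which the paper addresses with generalization guarantees, is how well that choice transfers to unseen instances from the same family.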