So you want to skip our thinning proofs—but you’d still like our out-of-the-box attention speedups? I’ll be presenting the Thinformer at two ICML workshop posters tomorrow!
Catch me at ES-FoMo (1-2:30, East Hall A) and at LCFM (10:45-11:30 & 3:30-4:30, West 202-204)
19.07.2025 07:04
👍 5
🔁 4
💬 0
📌 0
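For a feel for the attention speedup mentioned above: attention gets cheap once the full set of key/value pairs is replaced by a small summary set. The Python/NumPy sketch below is illustrative only. It uses uniform subsampling as a stand-in for the Thinformer's actual low-rank thinning step (see the paper for that), and all names, shapes, and sizes here are assumptions.

```python
import numpy as np

def thinned_attention(Q, K, V, m, rng):
    """Softmax attention computed against a thinned set of m key/value rows."""
    n, d = K.shape
    # Stand-in for the thinning step: the Thinformer would select these rows
    # by low-rank (kernel) thinning rather than uniformly at random.
    idx = rng.choice(n, size=m, replace=False)
    K_thin, V_thin = K[idx], V[idx]
    scores = Q @ K_thin.T / np.sqrt(d)            # (n_q, m) instead of (n_q, n)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V_thin                       # cost O(n_q * m) vs O(n_q * n)

rng = np.random.default_rng(0)
n, d = 4096, 64
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = thinned_attention(Q, K, V, m=256, rng=rng)
print(out.shape)  # (4096, 64)
```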
Your data is low-rank, so stop wasting compute! In our new paper on low-rank thinning, we share one weird trick to speed up Transformer inference, SGD training, and hypothesis testing at scale. Come by ICML poster W-1012 Tuesday at 4:30!
14.07.2025 18:29
👍 25
🔁 7
💬 2
📌 2
Low-Rank Thinning
The goal in thinning is to summarize a dataset using a small set of representative points. Remarkably, sub-Gaussian thinning algorithms like Kernel Halving and Compress can match the quality of unifor...
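The preview above names Kernel Halving: split the data into two balanced halves in a kernel-dependent way and keep one half as the summary. The snippet below is a simplified greedy variant, shown only to convey the flavor; it is not the paper's algorithm (which randomizes the assignments to obtain its sub-Gaussian guarantees), and the Gaussian kernel and bandwidth are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)), evaluated pairwise."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def greedy_halving(X, bandwidth=1.0):
    """Assign consecutive pairs of points to two halves, greedily picking the
    assignment that keeps the signed kernel discrepancy between halves small."""
    n = X.shape[0] // 2 * 2                        # drop a trailing odd point
    K = gaussian_kernel(X[:n], X[:n], bandwidth)
    signs = np.zeros(n)                            # +1 -> keep, -1 -> discard, 0 -> unassigned
    for i in range(0, n, 2):
        delta = K[:, i] - K[:, i + 1]              # pair's effect on the discrepancy
        eps = -1.0 if signs @ delta > 0 else 1.0   # shrink the running signed sum
        signs[i], signs[i + 1] = eps, -eps
    return X[:n][signs > 0]                        # one half (n/2 points) as the summary

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
summary = greedy_halving(X)                        # 500 representative points
print(summary.shape)                               # (500, 5)
```

Applying the halving step recursively (the idea behind Compress) shrinks n points toward a summary of roughly sqrt(n) points.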
Off to ICML next week?
Check out my student Annabelle’s paper in collaboration with @lestermackey.bsky.social and colleagues on low-rank thinning!
New theory, dataset compression, efficient attention and more:
arxiv.org/abs/2502.12063
12.07.2025 16:27
👍 11
🔁 5
💬 0
📌 1