Do you have recent work on differential privacy? Submit it to TPDP 2026 in Boston, whose deadline is in ~2 weeks.
TPDP is a lightly reviewed workshop, whose main purpose is getting researchers in DP together in one place. Dual submissions allowed (and encouraged!).
I was reading the balls-and-bins section and it is great. Thanks!!
At #NeurIPS2025? Join us for a Social on Wednesday at 7 PM, featuring a fireside chat with Jon Kleinberg and mentoring tables.
Ft. mentors @djfoster.bsky.social @surbhigoel.bsky.social @aifi.bsky.social @gautamkamath.com and more!
Thanks for all the efforts. Great resource!
I'm excited to share this paper.
It answers a question that has bugged me for a long time: Can sample-and-aggregate be made more data-efficient? The answer is yes, but at a steep price in computational efficiency. See 🧵 for more details.
Also, it was a fun opportunity to add a new coauthor.
The 3rd chapter of the "post-Bayes" seminar, focused on PAC-Bayes bounds, started yesterday. I gave a very introductory talk, which is already on YouTube.
www.youtube.com/watch?v=hT-d...
There will be 5 more talks in this chapter; see the full schedule here: postbayes.github.io/seminar/
Thank you to Samsung for the AI Researcher of 2025 award! I'm privileged to collaborate with many talented students & postdoctoral fellows @utoronto.ca @vectorinstitute.ai . This would not have been possible without them!
It was a great honour to receive the award from @yoshuabengio.bsky.social !
You will find Chapter 2 of this book interesting:
Introduction to Matrix Analytic Methods in Stochastic Modeling
I am co-chairing ALT 2026 this year with Matus Telgarsky. The submission server is open, so please submit your best work!
Deadline: Oct 2, 2025 AoE
Conference: Feb 23-26, 2026 in Toronto!
Website: algorithmiclearningtheory.org/alt2026/
It took me a while, but I (finally) wrote a "short" (erm) note on the "polynomial+moments method" to prove testing or indistinguishability sample complexity lower bounds. Including the infamous Ω(k/log k) tolerant uniformity testing one.
Comments and feedback welcome!
github.com/ccanonne/pro...
In our world of (noisy) data, error correction is everywhere, as Mary Wootters eloquently explains.
www.quantamagazine.org/how-can-math...
I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them.
I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...
Full LaTeX source: https://pastebin.com/mA6KjUJs \begin{proposition}[Triangle-like inequality for KL divergence]\label{prop:kl-triangle} Let $P$, $R$, and $Q$ be probability distributions with $P$ being absolutely continuous with respect to $R$ and $R$ being absolutely continuous with respect to $Q$. Let $\kappa \in (1,\infty)$. Then \[ \dr{\text{KL}}{P}{Q} \le \frac{\kappa}{\kappa-1} \dr{\text{KL}}{P}{R} + \dr{\kappa}{R}{Q}, \] where $\dr{\text{KL}}{P}{Q} := \ex{X \gets P}{\log(P(X)/Q(X))}$ denotes the KL divergence and\\$\dr{\kappa}{R}{Q} = \frac{1}{\kappa-1} \log \ex{X \gets R}{(R(X)/Q(X))^{\kappa-1}}$ denotes the R\'enyi divergence of order $\kappa$. \end{proposition}
Taking α→1 gives a triangle inequality for KL divergence. This can also be proved using my favourite lemma.
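As a quick numerical sanity check (my own sketch, not part of the thread), one can verify the proposition on random full-support discrete distributions, computing both divergences directly from their definitions:

```python
import numpy as np

def kl(p, q):
    """KL divergence D_KL(p || q) in nats, for full-support discrete p, q."""
    return float(np.sum(p * np.log(p / q)))

def renyi(r, q, kappa):
    """Renyi divergence D_kappa(r || q) of order kappa > 1, in nats."""
    return float(np.log(np.sum(r * (r / q) ** (kappa - 1))) / (kappa - 1))

rng = np.random.default_rng(0)
kappa = 2.0
for _ in range(1000):
    # Dirichlet samples are (almost surely) full-support distributions on 5 outcomes
    p = rng.dirichlet(np.ones(5))
    r = rng.dirichlet(np.ones(5))
    q = rng.dirichlet(np.ones(5))
    lhs = kl(p, q)
    rhs = kappa / (kappa - 1) * kl(p, r) + renyi(r, q, kappa)
    assert lhs <= rhs + 1e-9, (lhs, rhs)
print("inequality held on 1000 random triples")
```

This only spot-checks the stated bound for κ = 2, of course; it is not a proof, but it is a cheap way to catch a sign or prefactor error when transcribing the statement.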
Is there a zoom link?
Excellent post from (my former colleague) Nicholas Carlini on the differences between copyright law & privacy research.
In particular, from a privacy perspective, "was training data memorized?" is a yes/no question; we aren't trying to quantify how much data was memorized beyond "some" vs "none".
Nice! Could you share an example of their application to probability-related problems, or to other areas?
It's finally out, and I got to blurb it!
Happy new year! Guest post on my blog by Abhradeep Thakurta, featuring his perspective on interviewing/hiring for faculty and industry research positions in CS/ML. I add some of my own comments at the end. Comments and perspectives welcome!
kamathematics.wordpress.com/2025/01/02/g...
The reviewer wants to write a summary without reading the paper, and you made their job "very" difficult by not having a conclusion section.
Had a great time presenting my work at this fantastic workshop! Thanks to the organizers!
I'll be at #NeurIPS2024 this week! Looking forward to presenting my joint work with Thomas Steinke (@stein.ke) and Jon Ullman (@thejonullman.bsky.social)
NeurIPS page with video: neurips.cc/virtual/2024...
Link to arxiv: arxiv.org/abs/2406.07407
Awesome!
still haven't found a good flight, will msg you if I can be there on Wednesday
it seems the paper is behind a paywall :(
Can you tell me a bit about the results?