
Mahdi Haghifam

@mahdihaghifam

Researcher in ML and Privacy. PhD @UofT & @VectorInst. previously Research Intern @Google and @ServiceNowRSRCH https://mhaghifam.github.io/mahdihaghifam/

115
Followers
133
Following
11
Posts
24.11.2024
Joined

Latest posts by Mahdi Haghifam @mahdihaghifam

As was to be shown.

07.02.2026 18:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Do you have recent work on differential privacy? Submit it to TPDP 2026 in Boston, whose deadline is in ~2 weeks.

TPDP is a lightly reviewed workshop, whose main purpose is getting researchers in DP together in one place. Dual submissions allowed (and encouraged!).

03.02.2026 16:40 πŸ‘ 7 πŸ” 7 πŸ’¬ 1 πŸ“Œ 0

I was reading balls and bins section and it is great. Thanks!!

25.01.2026 18:37 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

At #NeurIPS2025? Join us for a Social on Wednesday at 7 PM, featuring a fireside chat with Jon Kleinberg and mentoring tables.

Ft. mentors @djfoster.bsky.social @surbhigoel.bsky.social @aifi.bsky.social @gautamkamath.com and more!

26.11.2025 20:47 πŸ‘ 14 πŸ” 4 πŸ’¬ 0 πŸ“Œ 2

Thanks for all the efforts. Great resource!

20.11.2025 03:42 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I'm excited to share this paper.

It answers a question that has bugged me for a long time: Can sample-and-aggregate be made more data-efficient? The answer is yes, but at a steep price in computational efficiency. See 🧡 for more details.

Also, it was a fun opportunity to add a new coauthor. 😁

04.10.2025 16:05 πŸ‘ 10 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
Pierre Alquier (ESSEC) - PAC Bayes: introduction and overview
Pierre Alquier (ESSEC) - PAC Bayes: introduction and overview YouTube video by Post-Bayes seminar

The 3rd chapter of the "post-Bayes" seminar, focused on PAC-Bayes bounds, started yesterday. I gave a very introductory talk, which is already on YouTube.

www.youtube.com/watch?v=hT-d...

There will be 5 more talks in this chapter, see the full schedule there: postbayes.github.io/seminar/

24.09.2025 09:16 πŸ‘ 26 πŸ” 7 πŸ’¬ 1 πŸ“Œ 0
Post image

Thank you to Samsung for the AI Researcher of 2025 award! I'm privileged to collaborate with many talented students & postdoctoral fellows @utoronto.ca @vectorinstitute.ai . This would not have been possible without them!

It was a great honour to receive the award from @yoshuabengio.bsky.social !

22.09.2025 00:35 πŸ‘ 14 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

You will find chapter 2 of this book interesting

Introduction to Matrix Analytic Methods in Stochastic Modeling

07.09.2025 16:47 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
ALT 2026 | ALT 2026 Homepage The 37th International Conference on Algorithmic Learning Theory

🚨 I am co-chairing ALT 2026 this year with Matus Telgarsky. The submission server is open so please submit your best work!

Deadline: Oct 2, 2025 AoE
Conference: Feb 23-26, 2026 in Toronto!
Website: algorithmiclearningtheory.org/alt2026/

04.09.2025 19:01 πŸ‘ 13 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

It took me a while, but I (finally) wrote a "short" (erm) note on the "polynomial+moments method" to prove testing or indistinguishability sample complexity lower bounds. Including the infamous Ξ©(k/log k) tolerant uniformity testing one.

Comments and feedback welcome!

πŸ“ github.com/ccanonne/pro...

17.08.2025 11:56 πŸ‘ 24 πŸ” 6 πŸ’¬ 3 πŸ“Œ 1
How Can Math Protect Our Data? | Quanta Magazine Mary Wootters discusses how error-correcting codes work, and how they are essential for reliable communication and storage.

In our world of (noisy) data, error correction is everywhere, as Mary Wootters eloquently explains.
www.quantamagazine.org/how-can-math...

08.08.2025 23:45 πŸ‘ 23 πŸ” 6 πŸ’¬ 0 πŸ“Œ 0
Tips on How to Connect at Academic Conferences I was a kinda awkward teenager. If you are a CS researcher reading this post, then chances are, you were too. How to navigate social situations and make friends is not always intuitive, and has to …

I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them.

I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...

01.05.2025 12:57 πŸ‘ 69 πŸ” 19 πŸ’¬ 3 πŸ“Œ 2
Full LaTeX source: https://pastebin.com/mA6KjUJs

    \begin{proposition}[Triangle-like inequality for KL divergence]\label{prop:kl-triangle}
        Let $P$, $R$, and $Q$ be probability distributions with $P$ being absolutely continuous with respect to $R$ and $R$ being absolutely continuous with respect to $Q$.
        Let $\kappa \in (1,\infty)$.
        Then
        \[
            \dr{\text{KL}}{P}{Q} \le \frac{\kappa}{\kappa-1} \dr{\text{KL}}{P}{R} + \dr{\kappa}{R}{Q},
        \]
        where $\dr{\text{KL}}{P}{Q} := \ex{X \gets P}{\log(P(X)/Q(X))}$ denotes the KL divergence and\\$\dr{\kappa}{R}{Q} = \frac{1}{\kappa-1} \log \ex{X \gets R}{(R(X)/Q(X))^{\kappa-1}}$ denotes the R\'enyi divergence of order $\kappa$.
    \end{proposition}


Taking Ξ±β†’1 gives a triangle inequality for KL divergence. This can also be proved using my favourite lemma. 😁
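The proposition above can be checked numerically. The following sketch (not from the post; the helper names `kl` and `renyi` are my own) samples random discrete distributions and verifies that KL(P||Q) ≀ ΞΊ/(ΞΊ-1)Β·KL(P||R) + D_ΞΊ(R||Q) for random orders ΞΊ > 1:

```python
# Numerical sanity check of the triangle-like inequality for KL divergence:
#   KL(P||Q) <= kappa/(kappa-1) * KL(P||R) + D_kappa(R||Q),  kappa > 1.
# Illustrative only; uses random discrete distributions on 5 outcomes.
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    """KL divergence KL(p||q) for strictly positive discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def renyi(r, q, kappa):
    """Renyi divergence of order kappa: (1/(kappa-1)) log E_{X~r}[(r/q)^(kappa-1)]."""
    return float(np.log(np.sum(r * (r / q) ** (kappa - 1))) / (kappa - 1))

for _ in range(1000):
    p, r, q = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    kappa = 1.0 + rng.uniform(0.1, 5.0)
    lhs = kl(p, q)
    rhs = kappa / (kappa - 1) * kl(p, r) + renyi(r, q, kappa)
    assert lhs <= rhs + 1e-9, (lhs, rhs, kappa)
print("inequality held on all random trials")
```

Note that the coefficient ΞΊ/(ΞΊ-1) shrinks to 1 as ΞΊ grows, trading a tighter first term for a larger Rényi order in the second.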

19.04.2025 17:43 πŸ‘ 5 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Is there a zoom link?

23.03.2025 21:07 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
What my privacy papers (don't) have to say about copyright and generative AI My work on privacy-preserving machine learning is often cited by lawyers arguing for or against how generative AI models violate copyright. This maybe isn't the right work to be citing.

Excellent post from (my former😒 colleague) Nicholas Carlini on the differences between copyright law & privacy research.

In particular, from a privacy perspective, "was training data memorized?" is a yes/no question; we aren't trying to quantify how much data was memorized beyond "some" vs "none".

11.03.2025 20:09 πŸ‘ 22 πŸ” 7 πŸ’¬ 2 πŸ“Œ 0

Nice! Could you share an example of their application in probability-related problems, or in other problems?

26.01.2025 03:12 πŸ‘ 0 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0
Post image Post image

It’s finally out β€” and I got to blurb it!

22.01.2025 16:15 πŸ‘ 94 πŸ” 9 πŸ’¬ 2 πŸ“Œ 0
Post image

Happy new year! Guest post on my blog by Abhradeep Thakurta, featuring his perspective on interviewing/hiring for faculty and industry research positions in CS/ML. I add some of my own comments at the end. Comments and perspectives welcome!

kamathematics.wordpress.com/2025/01/02/g...

02.01.2025 15:27 πŸ‘ 15 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

The reviewer wants to write a summary without reading the paper, and you made their job "very" difficult by not having a conclusion section.

11.12.2024 12:42 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Had a great time presenting my work at this fantastic workshop! Thanks to the organizers πŸ™Œ

11.12.2024 12:31 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

I’ll be at #NeurIPS2024 this week! Looking forward to presenting my joint work with Thomas Steinke(@stein.ke) and Jon Ullman(@thejonullman.bsky.social)

NeurIPS page with video: neurips.cc/virtual/2024...

Link to arxiv: arxiv.org/abs/2406.07407

11.12.2024 12:22 πŸ‘ 10 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

Awesome 🎩
still haven’t found a good flight, will msg you if I can be there on WednesdayπŸ™Œ

30.11.2024 01:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It seems the paper is behind a paywall :(
Can you tell me a bit about the results?

28.11.2024 18:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0