#TPDP2026
One of the co-chairs is Amrita who is alum of our group. Way to go Amrita.
@someshjha
I am a professor in the computer sciences at UW-Madison. My technical interests in trustworthy ML, formal methods, and security. My other interests are Indian classical music, mindfulness, tennis, and pickleball.
#TPDP2026
One of the co-chairs is Amrita who is alum of our group. Way to go Amrita.
This is excellent, Tom. On my reading list.
Last year SAGAI workshop had the most attendees out of all the workshops at IEEE S&P.
Don't miss out.
Happy Diwali to all. May this coming year be full of joy and prosperity.
Congrats. The work looks cool!
Thanks for inviting me @simonsinstitute.bsky.social
The audience interaction was incredible.
Gorgeous. Where is it?
Looks great! What are you making? I can start driving from Madison now.:-)
In this work, we formally characterize the KAD scheme and uncover a structural vulnerability in its design that invalidates some core security principles.
We design a methodical adaptive attack, DataFlip, to exploit this fundamental weakness. Read about the details arxiv.org/abs/2507.05630
Recent defenses based on known-answer detection (KAD) have achieved near-perfect performance by using an LLM to classify inputs as clean or contaminated.
LLM-integrated applications and agents are vulnerable to prompt injection attacks, in which adversaries embed malicious instructions within seemingly benign user inputs to manipulate the LLMβs intended behavior.
The team is extremely open to working with other industrial and academic teams. Please reach out if you want to collaborate with our team.
Recently, we received a DARPA grant on the problem of LLM-assisted translation of C to Rust. The team consists of amazing set of PIs from UW, Berkeley, UIUC, and Edinburgh. Really excited about what we can do.
Full article can be found here: www.cs.wisc.edu/2025/07/15/t...
I have interacted with @gautamkamath.com and highly recommend him for this position. Please vote for him.
This research took a while to complete, but very proud of the result. Will do a detailed post soon.
SAGAI 2025 program is now complete. What an amazing program! Don't miss it.
sites.google.com/corp/ucsd.ed...
Welcome Lucy.
Air filters are not that expensive. I think even with the price increase you can afford it:-)
Co-organized with @earlence.bsky.social @mihaichr.bsky.social Khawaja Shams (Google) and John Mitchell (Stanford).
Details can be found at: sites.google.com/corp/ucsd.ed...
SAGAI'25 will investigate the safety, security, and privacy of GenAI agents from a system design perspective. We are experimenting with a new "Dagstuhl" like seminar with invited speakers and discussion. Really excited about this workshop at IEEE Security and Privacy Symposium.
Interesting! Didn't know that sifr and sunya are connected.
Eid Mubarak to anyone of my friends that celebrate it.
www.youtube.com/watch?v=5hwX...
Looks great! What is in it? Tofu?
These kind of comparisons are not very useful. Everyone should be charting their own course!
Excellent place to work!
Lorenzo graduated from my group and did some cool work on system and network security during his Ph.D. Congrats, Lorenzo!
Proud of you.
* removes reliance on public datasets, which was assumed in many existing integrity checks.
* enables advanced integrity checks, such as cross-client validation accuracy, which were impossible in prior secure FL approaches. We show these checks are effective under model poisoning attacks and client data distribution shifts.
Why SLVR? Building on secure Multi-party Computation (MPC), SLVR offers a fresh perspective on combining privacy and robustness in federated learning:
* leverages private client data while preserving the privacy guarantee of secure aggregation.
Have you ever wondered: In federated learning, what if we could leverage clients' private data without compromising privacyβwhat more could we achieve?
π We're excited to introduce SLVR (Securely Leveraging Client Validation for Robust Federated Learning).
Paper: arxiv.org/pdf/2502.08055