Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions
Michele Miranda, Elena Sofia Ruzzetti, Andrea Santilli et al.
Action editor: Tian Li
https://openreview.net/forum?id=Ss9MTTN7OL
#privacy #anonymizing #secure
18.02.2025 15:07
Our survey on Preserving Privacy in LLMs has been published in Transactions on Machine Learning Research (TMLR)!
Check it out!
13.02.2025 08:24
Many LLM uncertainty estimators perform similarly, but does that mean they do the same thing? No! We find that they rely on different cues, and combining them gives even better performance. 🧵 1/5
openreview.net/forum?id=QKR...
NeurIPS: Sunday, East Exhibition Hall A, Safe Gen AI workshop
13.12.2024 12:36
Interested in learning how to evaluate uncertainty in LLMs?
Check out our work at NeurIPS!
Feel free to reach out for a chat!
12.12.2024 17:12
Come by our poster at CLiC-it!
04.12.2024 16:19
If you're interested in mechanistic interpretability, I just found this starter pack and wanted to boost it (thanks for creating it @butanium.bsky.social!). Excited to have a mech interp community on Bluesky.
go.bsky.app/LisK3CP
19.11.2024 00:28