That indicates to me it’s either bad or really, really bad. I’ll wait to see how it plays out.
There is a long track record of the FBI arresting ethnic Chinese professors. Several were exonerated, but not all.
www.nytimes.com/2023/03/07/m...
www.nytimes.com/2020/05/11/u...
www.nytimes.com/2022/01/24/s...
www.nytimes.com/2017/05/10/u...
Literally low heat on the burner?
Weird, they already had a PQ contest that ended in 2020: github.com/sweis/cacr-c...
One of the winners was rejected by NIST for weaknesses though.
The list of accepted talks at @rwc.iacr.org is now available: rwc.iacr.org/2025/accepte... Early registration ends 26 February. CC: programme co-chair @nicksullivan.org
National Cryptologic Foundation event about accelerating post-quantum crypto adoption features some NSA speakers:
cryptologicfoundation.org/news-events/...
Proud to be a member of the 1992 Team USA of blocklists with @filippo.abyssdomain.expert, @sockpuppet.org, and @leak.bsky.social
Tour of WebAuthn by Adam Langley:
www.imperialviolet.org/tourofwebaut...
Abstract. Differentially private (DP) heavy-hitter detection is an important primitive for data analysis. Given a threshold t and a dataset of n items from a domain of size d, such detection algorithms ignore items occurring fewer than t times while identifying items occurring more than t + Δ times; we call Δ the error margin. In the central model where a curator holds the entire dataset, (ε,δ)-DP algorithms can achieve error margin $\Theta(\frac 1 \varepsilon \log \frac 1 \delta)$, which is optimal when d ≫ 1/δ. Several works, e.g., Poplar (S&P 2021), have proposed protocols in which two or more non-colluding servers jointly compute the heavy hitters from inputs held by n clients. Unfortunately, existing protocols suffer from an undesirable dependence on log d in terms of both server efficiency (computation, communication, and round complexity) and accuracy (i.e., error margin), making them unsuitable for large domains (e.g., when items are kB-long strings, log d ≈ 10⁴). We present hash-prune-invert (HPI), a technique for compiling any heavy-hitter protocol with the log d dependencies mentioned above into a new protocol with improvements across the board: computation, communication, and round complexity depend (roughly) on log n rather than log d, and the error margin is independent of d. Our transformation preserves privacy against an active adversary corrupting at most one of the servers and any number of clients. We apply HPI to an improved version of Poplar, also introduced in this work, that improves Poplar’s error margin by roughly a factor of $\sqrt{n}$ (regardless of d). Our experiments confirm that the resulting protocol improves efficiency and accuracy for large d.
Hash-Prune-Invert: Improved Differentially Private Heavy-Hitter Detection in the Two-Server Model (Borja Balle, James Bell, Albert Cheu, Adria Gascon, Jonathan Katz, Mariana Raykova, Phillipp Schoppmann, Thomas Steinke) ia.cr/2024/2024
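For intuition on the central-model baseline the abstract mentions (error margin on the order of (1/ε)·log(1/δ)), here is a minimal sketch of noisy-threshold heavy-hitter detection: count each item, add Laplace(1/ε) noise, and report items whose noisy count clears the threshold. This is an illustrative toy with made-up names, not the paper's two-server HPI protocol, and a production version would need careful calibration of the release threshold to the δ parameter.

```python
import math
import random
from collections import Counter

def dp_heavy_hitters(items, threshold, epsilon, seed=None):
    """Toy central-model DP heavy hitters via noisy thresholding.

    A trusted curator counts all items, perturbs each count with
    Laplace(1/epsilon) noise, and reports items whose noisy count
    is at least `threshold`. Illustrative only.
    """
    rng = random.Random(seed)

    def laplace(scale):
        # Sample Laplace(0, scale) by inverse-CDF from a uniform draw.
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    counts = Counter(items)
    return {item for item, c in counts.items()
            if c + laplace(1.0 / epsilon) >= threshold}
```

Items far above the threshold are reported with overwhelming probability, while items far below it almost never are; the gap needed to guarantee both, roughly (1/ε)·log(1/δ), is the "error margin" Δ the abstract refers to.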
What's that? FIPS 140-3 validated ML-KEM? No biggie
aws.amazon.com/blogs/securi...
Also evergreen reply...
Google made a similar claim in 2022 about sampling quantum circuits. They estimated the task would take a classical computer 10,000 years, but not long after, researchers showed it could be done in a few hours:
www.science.org/content/arti...
Google posting about new noisy quantum computer with better error correction:
blog.google/technology/r...
NEW EPISODE!
Our esteemed guests @justinschuh.com and @matthewdgreen.bsky.social joined us to debate whether
`Dual_EC_DRBG` was intentionally backdoored by the NSA or 'just' a major fuckup:
securitycryptographywhatever.com/2024/12/07/d...
www.youtube.com/watch?v=i0eo...
Post by Meta about integrating logging into their crypto libraries:
engineering.fb.com/2024/11/12/s...
If you’re curious about the design and analysis of encrypted algorithms and encrypted databases, I’m putting together a collection of resources at encryptedsystems.org
Simons is doing a bootcamp and program on obfuscation, proof systems, and secure computation: simons.berkeley.edu/programs/cry...
@nyzn.bsky.social This is the peak tech career outcome.
TIL Zebrafish have three copies of the SLC6A4 gene that encodes a serotonin transporter. Some mollusks have two copies.
So they have been feeding zebrafish drugs for a long time.
https://pubmed.ncbi.nlm.nih.gov/21522057/
An octopus decides between a pill and another octopus.
On the right, a member of the same species, of the appropriate sex to evoke sexual attraction.
On the left, a giant pill of E stamped with a happy face.
I want to see the grant proposal to give giant octopuses some MDMA to see what happens: https://www.cell.com/current-biology/fulltext/S0960-9822(18)30991-6
I think I said “I think I have that backwards” right after. Mental lapse.
You’re right. RSAP doesn’t solve factoring, but vice versa obviously does.
NEW EPISODE!
Why the hell do we think any of this cryptography stuff is secure anyway, with @saweis.net!
https://podcasts.apple.com/us/podcast/why-do-we-think-anything-is-secure-with-steve-weis/id1578405214?i=1000618720739
I didn't see you were CISO of Lacework. Congratulations!
I won’t spoil the dinosaur-like creatures for you.
Spoiler: Jellyfish. Saved you a click.
They say local journalism is dying.
It would be nice to see some concrete use cases with comparisons to best alternatives.
For example, they cite a healthcare example. Is anyone using FHE in healthcare or is it hypothetical?
Twitter is sending out incoherent, unhinged recruiting messages.