Throwback to my first MacBook in 2018
Being one of the rare weirdos using Nix on macOS is really paying off nowadays. Turns out that statelessly declaring your entire system and package config in a couple of text files is a superpower when combined with Clankers. My next machine will run NixOS again.
chardet was vibe-forked to MIT and I have thoughts about it. Spoiler: I like it. lucumr.pocoo.org/2026/3/5/the...
I wonder how many reviewers will not look too deeply into the prompt injections and simply flag papers as malicious instead of writing reviews.
Yes, I think so. The prompt tries to add two unassuming sentences to LLM-written reviews.
It's an odd choice to have added it to the "LLM permissive" review track. I asked an LLM to proofread my review and it basically answered "Don't you want to mention the prompt injection attack?"
Time for Python bindings to go full circle
Yes, all the papers I have to review have the same honeypot. Glad I didn't flag the first one.
Poor ACs are probably being flooded by false flags.
Wasted half a day on a prompt injection attack in one of the papers I had to review, only to find out that it was probably the conference organizers who put it there (?)
🏆 Award-winning: France's Prix Science Ouverte 2025
Switching AD backends in Julia used to mean rewriting your whole codebase. @gdalle.bsky.social @adrianhill.de got fed up and built #DI instead.
News: t1p.de/58a65
@julialang.org @ecoledesponts.bsky.social #julialang @tuberlin.bsky.social
The award banner for Workworkwork
Yesterday, my puzzle book Workworkwork won the Thinky Award for the Best Pen and Paper Puzzle ( @thinkygames.com )!
In celebration I added 100 free community copies of the digital (PDF) version:
letibus.itch.io/www
Check out more about the game / get the physical copy here:
blazgracar.com/www
That sounds very interesting. Based on your talk about the stability of ODE solvers on dual numbers, I imagine Taylor polynomials pose similar challenges?
While I agree, it bugs me that most academics are simultaneously very eager to automate the writing of their code (also art).
Claude Code vs. the editor-industrial-complex, who will come out on top?
The most important one in my eyes: never force users of your package to type non-ASCII characters.
What a timeline
Oh no, I missed the initial announcement... 🥲
Congratulations! 🥳
That @void.comind.network sticker is awesome!
"Making things easy breaks systems that use difficulty as signaling" @zey.bsky.social @neuripsconf.bsky.social
You're in luck, we offer a PyTorch implementation!
The paper, poster and code in @julialang.org and PyTorch can be found here:
neurips.cc/virtual/2025...
Joint work with Neal McKee, Johannes Maeß, Stefan Blücher and Klaus-Robert Müller, @bifold.berlin.
I'm excited to see whether our idea translates to general MC integration over Jacobians and gradients outside of XAI. Please don't hesitate to talk to us if you have ideas for applications!
Our proposed SmoothDiff method (see first post) offers a bias-variance tradeoff: By neglecting cross-covariances, both sample efficiency and computational speed are improved over naive Monte Carlo integration.
To reduce the influence of white noise, we want to apply a Gaussian convolution (in feature space) as a low-pass filter.
Unfortunately, this convolution is computationally infeasible in high dimensions. Naive Monte Carlo approximation results in the popular SmoothGrad method.
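The naive Monte Carlo idea behind SmoothGrad can be sketched in a few lines: average the gradient over Gaussian-perturbed inputs, which approximates the Gaussian convolution in feature space. This is a minimal NumPy sketch, not the paper's implementation; `grad_fn`, `sigma` and `n_samples` are illustrative names, and the toy quadratic stands in for a real network.

```python
import numpy as np

def smoothgrad(grad_fn, x, sigma=0.1, n_samples=50, seed=None):
    """Naive Monte Carlo estimate of the Gaussian-smoothed gradient.

    Averages grad_fn evaluated at inputs perturbed by N(0, sigma^2)
    noise, approximating the convolution of the gradient field with
    a Gaussian low-pass filter.
    """
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy example: f(x) = sum(x**2), so grad f(x) = 2x. Since the gradient
# is linear, the smoothed estimate should stay close to 2x.
grad_f = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 0.5])
sg = smoothgrad(grad_f, x, sigma=0.05, n_samples=200, seed=0)
```

The variance of the estimate shrinks with `n_samples`, which is exactly the sample-efficiency cost that motivates a tailored estimator for high-dimensional inputs.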
Input feature attributions are a popular tool to explain ML models, increasing their trustworthiness. In this work, we are interested in the gradient of a given output w.r.t. its input features.
Unfortunately, gradients of deep NNs resemble white noise, rendering them uninformative:
Join us today from 4:30 to 7:30 PM @neuripsconf.bsky.social Hall C,D,E #1006 for our poster on SmoothDiff, a novel XAI method leveraging automatic differentiation.
🧵 1/6
In San Diego for @neuripsconf.bsky.social this week, so hit me up to talk science. I'll make one of these longer announcement posts for our paper on Wednesday.
Awesome to see the French government recognize our work on open-source software! @ouvrirlascience.bsky.social
An even scarier thought is that similar systems of checks and balances are likely silently failing across all of society. Academia is just uniquely transparent, making it look like patient zero.