Adrian Hill

@adrianhill.de

PhD student at @bifold.berlin, Machine Learning Group, TU Berlin. Automatic Differentiation, Explainable AI and #JuliaLang. Open source person: adrianhill.de

728 Followers · 651 Following · 126 Posts · Joined 07.02.2024

Latest posts by Adrian Hill @adrianhill.de

Throwback to my first MacBook in 2018

06.03.2026 12:54 👍 4 🔁 0 💬 0 📌 0

Being one of the rare weirdos using Nix on macOS is really paying off nowadays. Turns out that statelessly declaring your entire system and package config in a couple of text files is a superpower when combined with Clankers. My next machine will run NixOS again.

06.03.2026 12:49 👍 3 🔁 0 💬 0 📌 1
AI And The Ship of Theseus Slopforks: what happens when a library gets rewritten with AI?

chardet was vibeforked to MIT and I have thoughts about it. Spoiler: I like it. lucumr.pocoo.org/2026/3/5/the...

05.03.2026 15:30 👍 50 🔁 12 💬 10 📌 5

I wonder how many reviewers will not look too deeply into the prompt injections and simply flag papers as malicious instead of writing reviews.

05.03.2026 21:40 👍 0 🔁 0 💬 0 📌 0

Yes, I think so. The prompt tries to add two unassuming sentences to LLM-written reviews.

It's an odd choice to have added it to the "LLM permissive" review track. I asked an LLM to proofread my review, and it basically answered "Don't you want to mention the prompt injection attack?"

05.03.2026 21:40 👍 0 🔁 0 💬 1 📌 0

Time for Python bindings to go full circle

05.03.2026 12:45 👍 1 🔁 0 💬 0 📌 0

Yes, all the papers I have to review have the same honeypot. Glad I didn't flag the first one.

05.03.2026 12:36 👍 2 🔁 0 💬 1 📌 0

Poor ACs are probably being flooded by false flags.

04.03.2026 17:05 👍 0 🔁 0 💬 0 📌 0

Wasted half a day on a prompt injection attack in one of the papers I had to review, only to find out that it was probably the conference organizers who put it there (?)

04.03.2026 16:47 👍 0 🔁 0 💬 1 📌 1
Open-Source Award for DifferentiationInterface.jl: DifferentiationInterface.jl, co-developed by BIFOLD researcher Adrian Hill, wins one of France's Open Science Awards for making cutting-edge modeling and optimization more flexible, efficient, and ope...

πŸ†Award-winning: France's Prix Science Ouverte 2025

Switching AD backends in Julia used to mean rewriting your whole codebase. @gdalle.bsky.social @adrianhill.de got fed up and built #DI instead.

News: t1p.de/58a65

@julialang.org @ecoledesponts.bsky.social #julialang @tuberlin.bsky.social

17.02.2026 16:15 👍 14 🔁 3 💬 1 📌 0
The award banner for Workworkwork

Yesterday, my puzzle book Workworkwork won the Thinky Award for the Best Pen and Paper Puzzle ( @thinkygames.com )!
In celebration I added 100 free community copies of the digital (PDF) version:
letibus.itch.io/www
Check out more about the game / get the physical copy here:
blazgracar.com/www

05.02.2026 15:42 👍 41 🔁 9 💬 1 📌 2

That sounds very interesting. Based on your talk about the stability of ODE solvers on dual numbers, I imagine Taylor polynomials pose similar challenges?

05.02.2026 11:39 👍 0 🔁 0 💬 1 📌 0

While I agree, it bugs me that most academics are simultaneously very eager to automate the writing of their code (also art).

04.02.2026 12:51 👍 2 🔁 0 💬 1 📌 0

Claude Code vs. the editor-industrial-complex, who will come out on top?

27.01.2026 19:31 👍 4 🔁 0 💬 0 📌 0

The most important one in my eyes: never force users of your package to type non-ASCII characters.

25.01.2026 15:55 👍 2 🔁 0 💬 2 📌 0

What a timeline

23.01.2026 23:42 👍 1 🔁 0 💬 1 📌 0

Oh no, I missed the initial announcement... 🥲

20.01.2026 18:15 👍 1 🔁 0 💬 0 📌 0

Congratulations! 🥳

07.01.2026 16:16 👍 3 🔁 0 💬 0 📌 0

That @void.comind.network sticker is awesome!

04.12.2025 19:31 👍 2 🔁 0 💬 2 📌 0

"Making things easy breaks systems that use difficulty as signaling" @zey.bsky.social @neuripsconf.bsky.social

03.12.2025 23:10 👍 20 🔁 4 💬 2 📌 0

You're in luck, we offer a PyTorch implementation!

03.12.2025 19:47 👍 0 🔁 0 💬 1 📌 0
NeurIPS 2025 poster: Smoothed Differentiation Efficiently Mitigates Shattered Gradients in Explanations

The paper, poster and code in @julialang.org and PyTorch can be found here:
neurips.cc/virtual/2025...

Joint work with Neal McKee, Johannes Maeß, Stefan Blücher and Klaus-Robert Müller, @bifold.berlin.

03.12.2025 19:36 👍 2 🔁 0 💬 0 📌 0

I'm excited to see whether our idea translates to general MC integration over Jacobians and gradients outside of XAI. Please don't hesitate to talk to us if you have ideas for applications!

03.12.2025 19:36 👍 2 🔁 0 💬 1 📌 1

Our proposed SmoothDiff method (see first post) offers a bias-variance tradeoff: By neglecting cross-covariances, both sample efficiency and computational speed are improved over naive Monte Carlo integration.

03.12.2025 19:36 👍 2 🔁 0 💬 1 📌 0

To reduce the influence of white noise, we want to apply a Gaussian convolution (in feature space) as a low-pass filter.
Unfortunately, this convolution is computationally infeasible in high dimensions. Approximating it with naive Monte Carlo sampling yields the popular SmoothGrad method.

03.12.2025 19:36 👍 2 🔁 0 💬 1 📌 0
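As a rough illustration of the naive Monte Carlo baseline described in the post above (a sketch, not the paper's code), SmoothGrad averages gradients at Gaussian-perturbed inputs. The function `smoothgrad`, the toy objective, and the parameter values are hypothetical placeholders:

```python
import numpy as np

def smoothgrad(grad_f, x, sigma=0.1, n_samples=50, seed=0):
    """Monte Carlo estimate of the Gaussian-smoothed gradient
    E[grad_f(x + eps)] with eps ~ N(0, sigma^2 * I)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x)
    for _ in range(n_samples):
        eps = rng.normal(0.0, sigma, size=x.shape)
        acc += grad_f(x + eps)  # gradient at a noisy copy of the input
    return acc / n_samples

# Toy example: f(x) = sum(x**2), so grad_f(x) = 2x.
x = np.array([1.0, -2.0, 3.0])
sg = smoothgrad(lambda z: 2 * z, x, sigma=0.05, n_samples=500)
```

For this quadratic toy function the smoothed gradient equals 2x in expectation, so the estimate converges to the exact gradient as `n_samples` grows; the cost of drawing many samples is exactly what makes naive Monte Carlo integration expensive in high dimensions.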

Input feature attributions are a popular tool to explain ML models, increasing their trustworthiness. In this work, we are interested in the gradient of a given output w.r.t. its input features.
Unfortunately, gradients of deep NNs resemble white noise, rendering them uninformative:

03.12.2025 19:36 👍 3 🔁 0 💬 1 📌 0

Join us today from 4:30 to 7:30 PM @neuripsconf.bsky.social Hall C,D,E #1006 for our poster on SmoothDiff, a novel XAI method leveraging automatic differentiation.
🧡 1/6

03.12.2025 19:36 👍 11 🔁 6 💬 1 📌 0

In San Diego for @neuripsconf.bsky.social this week, so hit me up to talk science. I'll make one of these longer announcement posts for our paper on Wednesday.

02.12.2025 19:05 👍 7 🔁 0 💬 0 📌 0

Awesome to see the French government recognize our work on open-source software! @ouvrirlascience.bsky.social

02.12.2025 19:03 👍 15 🔁 3 💬 1 📌 0

An even scarier thought is that similar systems of checks and balances are likely silently failing across all of society. Academia is just uniquely transparent, making it look like patient zero.

28.11.2025 07:36 👍 7 🔁 0 💬 0 📌 0