Thank you very much - hope you're well!
I'm thankful also to my committee members, among them @mmitchell.bsky.social, for many interesting questions and discussions.
And, of course, immensely grateful to @julialang.org and its community for having me and having had such a strong impact on my research and the whole Ph.D. experience.
The Ph.D. is wrapped indeed as of last Wednesday!
It was a pleasure and privilege to be working under the supervision of @informusiccs.bsky.social and Arie van Deursen these past few years.
Thesis: www.patalt.org/thesis/
Defence: www.patalt.org/content/talk...
FOSS: @taija.org #julialang
Many thanks, Ronny
Really love the show so far
Screenshot of selected git commit history.
Graduation highlight: my former students and now colleagues gifted me a *PhD Wrapped* of my git commit history and it's a bloodbath. Enjoy* www.patalt.org/content/talk...
*viewer discretion advised
Karen Hao
Nicky Woolf
Thomas Germain
I'm co-hosting a new BBC podcast! It's called The Interface, and it's all about how tech is rewiring your week and your world.
www.bbc.com/mediacentre/...
My pass:
Back from a mostly offline vacation. Has anything noteworthy happened?
Haven't read the full paper, but to my mind this is just an inevitable consequence of extremely high degrees of freedom, and MI simply exists in that context
I don't think multiplicity of explanations is necessarily problematic; in fact, it may often be desirable, e.g. in the context of algorithmic recourse. But it's definitely important to be transparent about it when interpreting and communicating results in MI and XAI more broadly
"Reject" despite mostly positive reviews
Somehow I'm not as fazed this time, because we have done a ton of robustness checks, the theory checks out, and criticism was largely about presentation. I guess the 45-page appendix didn't help ...
I'm avoiding actual eye contact at all costs
I did use RCall.jl back then to extend Plots.jl functions with ggplot2 (incredible scenes) and even those monstrosities still work, so props to #rstats I guess.
I love the fact that I can go back to my 3-4yo #julialang project, run `julia +1.8`, then `] instantiate` and
EVERYTHING. JUST. WORKS. I LOVE IT*
*Julia, not my 3-4yo code
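For the curious, the workflow looks roughly like this (a sketch assuming juliaup is installed and the project ships a committed Manifest.toml; the project name is hypothetical):

```
# Reproducing a years-old Julia project
$ julia +1.8 --project=.       # juliaup: launch the Julia version the Manifest was built with
julia> ]                       # enter Pkg mode
(MyProject) pkg> instantiate   # install the exact dependency versions pinned in Manifest.toml
```

Because the Manifest pins every transitive dependency to an exact version, `instantiate` recreates the original environment byte-for-byte, which is what makes old projects "just work".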
I've had little time for #julialang dev work in recent weeks as I've been wrapping up my thesis. Can't wait to get back to it soon and DifferentiationInterface.jl will be one of the first places to look at.
This work and the chart should go a long way in terms of explaining "why Julia" to AI folks:
1. Autodiff through anything using anything (one day ...)
2. Multiple dispatch fosters extensibility and interoperability across ecosystems in a way that OOP just doesn't (in practice).
3. See 1.
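A minimal sketch of point 2: two independent "packages" can each add methods to the same generic function without subclassing or editing each other's code (all names here are hypothetical, for illustration only):

```julia
# A generic function that downstream code is written against:
area(shape) = error("no `area` method for $(typeof(shape))")

# "Package A" defines its own type and extends `area`:
struct Circle
    r::Float64
end
area(c::Circle) = π * c.r^2

# "Package B" does the same, independently of Package A:
struct Square
    s::Float64
end
area(sq::Square) = sq.s^2

# Generic code now works for both, with no coordination between A and B:
total_area(shapes) = sum(area, shapes)
total_area([Circle(1.0), Square(2.0)])  # ≈ π + 4
```

This is the mechanism that lets, say, an autodiff package and a plotting package both extend types they never defined, which OOP-style single dispatch makes awkward.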
Moving fast and breaking things is difficult to justify when things are humans
... but not my area of expertise, I'm afraid, so just thinking out loud
hmm I guess you're thinking of something along the lines of probing activations (see e.g. arxiv.org/abs/2404.14082) but that just maps from learned representations to some output. Honestly the best I can think of for attribution is membership inference attacks: www.cs.cornell.edu/~shmat/shmat...
A comparison of automatic differentiation paradigms between Python and Julia:
- In Python, one chooses the autodiff framework first (PyTorch / JAX), then the appropriate scientific library.
- In Julia, one writes the scientific library first, then tries to make it compatible with several autodiff frameworks (Enzyme, Zygote, etc.).
How to make #autodiff user-friendly? What lies beyond the safety of Python-world? Why does it matter for scientific machine learning?
All this, and more, in our latest preprint with @adrhill.bsky.social! Spoiler alert: it describes the most useful software I ever wrote.
arxiv.org/abs/2505.05542
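The backend-agnostic style the preprint describes looks roughly like this with DifferentiationInterface.jl: the scientific code is written once, and the autodiff framework is swapped via a backend argument (a sketch; the exact API may differ across package versions):

```julia
using DifferentiationInterface
import ForwardDiff, Zygote

# Scientific code written once, with no reference to any autodiff framework:
f(x) = sum(abs2, x)
x = [1.0, 2.0, 3.0]

# The same call works with different autodiff backends:
grad_fd = gradient(f, AutoForwardDiff(), x)  # forward mode
grad_zy = gradient(f, AutoZygote(), x)       # reverse mode
grad_fd ≈ grad_zy  # both backends agree on the gradient
```

Swapping `AutoZygote()` for, say, `AutoEnzyme()` changes the differentiation engine without touching `f`, which is the inversion of the Python workflow described above.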
Hello Friends!
I'm on the job market now!
I have oodles of knowledge of software performance engineering tricks in Rust, Julia, and other systems languages, and would love to work with teams looking to skill up in those respects, from back ends to big data crunching!
My brother and me smiling after the finish.
Me running on asphalt somewhere in Düsseldorf.
Zoomed in version of the previous pic showing a Julia stick placed on my number tag.
Ran my first marathon last Sunday with my brother and a friend. Thought the #julialang sticker might help but we ran hella slow
In all seriousness, I've learned a lot from the work of @mmitchell.bsky.social and others in her field, and I've also learned a lot from Hard Fork. There are disagreements, but I feel there are also certain overlaps, and you and Kevin have a fantastic platform to discuss them using >300 characters.
I happen to know a great podcast where this conversation could be continued
I was today years old when I learned that #revealjs (standard HTML presentation format for @quarto.org) has #vim bindings
Assuming it can be solved, and assuming hallucinations become less of an issue (o4-mini ...), there is still a very valid question about how environmentally sustainable this is vis-à-vis traditional search (and the evidence has been pretty damning, e.g. techwontsave.us/episode/229_...)
Hard Fork did a good episode on this a while ago, when Google's AI summaries still recommended people eat rocks. How sustainable is it to essentially take away revenue from your own suppliers? Maybe this can be solved, but I'm not convinced it serves us or Google well in the long term.
I've been positively surprised by Brave's AI summaries lately, because they induce me to click on links to multiple sources. That helps with one major concern: diminishing incentives for folks to actually freely supply the content you're going to just AI summarize.
Profile photo of me in my favorite coral jumper.
Girlfriend said it was time for a professional headshot