"Start grinding now so you can be a level 57 prompt mage" indeed seems silly. Though it seems perfectly plausible that if applying AI tooling changed your development process, it might draw on skills you don't currently use so much. Most of the discourse is pretty imprecise.
20.02.2026 09:39
Minipost: Additional figures for per-query energy consumption of LLMs
Per-query energy consumption figures based on recent Lambda benchmarks
Some additional datapoints on per-query energy consumption of LLMs across a selection of newer models, thanks to figures from Lambda's model cards muxup.com/2026q1/minip...
17.02.2026 21:25
shandbox
A simple shared sandbox using unshare+nsenter.
There are many tools for unprivileged sandboxing on Linux. You should probably go use one of them. But I wrote shandbox to scratch my itch muxup.com/shandbox
/home/$user/sandbox shows up as /home/sandbox within the shared sandbox, which otherwise can only access explicitly mapped files/dirs.
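The mapping idea can be sketched with a one-shot unshare + bind mount (a hypothetical illustration only, not shandbox's actual code; shandbox additionally keeps a persistent namespace and re-enters it with nsenter, and this sketch assumes unprivileged user namespaces are enabled):

```python
import pathlib
import subprocess
import tempfile

# Two scratch directories standing in for /home/$user/sandbox (the real,
# explicitly shared directory) and /home/sandbox (its in-sandbox path).
src = pathlib.Path(tempfile.mkdtemp())
target = pathlib.Path(tempfile.mkdtemp())
(src / "note.txt").write_text("hello\n")

# Enter fresh user + mount namespaces, bind-mount the one allowed
# directory onto its in-sandbox path, and read the file back through it.
out = subprocess.run(
    ["unshare", "--map-root-user", "--mount", "sh", "-c",
     f"mount --bind {src} {target} && cat {target}/note.txt"],
    capture_output=True, text=True, check=True,
).stdout
print(out, end="")
```

Because the bind mount happens inside a private mount namespace, the mapping is invisible to the rest of the system and disappears when the namespace does.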
11.02.2026 22:43
Unique per-process sandboxing provides a tighter security boundary, but it's also darn convenient having this separate but shared low-privileged space.
Preventing agents from reading/writing files you don't want them to access is one obvious use case, but there are others.
11.02.2026 22:43
Closing the gap, part 2: Probability and profitability
Welcome back to the second post in this series looking at how we can improve the performance of RISC-V code from LLVM.
One of the nice parts of #llvm is that oftentimes you'll find yourself needing to do some sort of non-trivial analysis, but usually there's already a pass for it.
Here's how you can reuse a block frequency analysis to make a chess engine 7% faster on #riscv: lukelau.me/2026/01/26/c...
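For anyone unfamiliar with it, block frequency analysis estimates how often each basic block executes relative to function entry. A toy, self-contained illustration of the idea for an acyclic CFG (this is a conceptual sketch, not LLVM's API):

```python
from graphlib import TopologicalSorter

def block_frequencies(succs, entry):
    """Toy block-frequency propagation over an acyclic CFG.
    succs maps block -> [(successor, branch probability), ...]."""
    # Build the predecessor map so we can visit blocks in topological order.
    preds = {b: set() for b in succs}
    for b, outs in succs.items():
        for s, _ in outs:
            preds[s].add(b)
    freq = {b: 0.0 for b in succs}
    freq[entry] = 1.0  # entry executes once per function invocation
    # Push each block's frequency along its outgoing edges, weighted by
    # branch probability, once all its predecessors are finalized.
    for b in TopologicalSorter(preds).static_order():
        for s, p in succs[b]:
            freq[s] += freq[b] * p
    return freq

cfg = {
    "entry": [("hot", 0.9), ("cold", 0.1)],
    "hot":   [("exit", 1.0)],
    "cold":  [("exit", 1.0)],
    "exit":  [],
}
freq = block_frequencies(cfg, "entry")
print(freq["hot"], freq["cold"])  # 0.9 0.1
```

An optimization pass can then bias its cost model toward the blocks with the highest estimated frequency, which is exactly the kind of reuse the post describes.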
27.01.2026 13:49
New blog post on the journey of the new --build-sea flag and how SEA injection works
joyeecheung.github.io/blog/2026/01...
26.01.2026 22:27
Gödel, Escher, Bachelorette
Thanks for the helpful auto-complete, Gmail.
24.01.2026 01:38
Graph showing the percentage of respondents deploying Go to various processor architectures. x86-64 85%, arm64 53%, x86 25%, Arm 16%, RISC-V 2%, S390x 1%, and others also at 1%
2% of Golang 2025 survey respondents are deploying their Go software to RISC-V. Take that, s390x! go.dev/blog/survey2...
22.01.2026 12:02
I'm going to be putting my life savings into an inverse wingo ETF as soon as such a thing exists
24.01.2026 08:48
If you're thinking of applying to PLISS, you've got three days left! pliss.org/2026/registr...
22.01.2026 14:59
Plus there are things like Anthropic models hosted on Google Vertex and Amazon Bedrock. In both cases they'd probably take a bit of a hit to keep customers with them...but there are surely limits.
21.01.2026 17:04
Per-query energy consumption of LLMs
Can we reasonably use the InferenceMAX benchmark dataset to get a Wh per query figure?
Nice post! You might also be interested in my attempt to get some figures on inference energy usage, but coming from the perspective of the concrete data available in the InferenceMAX benchmarks. muxup.com/2026q1/per-q... Many many provisos and limitations of course
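The basic shape of the calculation is dividing node power by query throughput, then converting joules to watt-hours. A minimal sketch, where every number is an assumed placeholder for illustration, not taken from InferenceMAX or the post:

```python
# Back-of-envelope Wh-per-query estimate from benchmark-style figures.
# All inputs below are illustrative assumptions.
node_power_w = 10_000        # assumed full-node power draw, watts
tokens_per_s = 25_000        # assumed aggregate output tokens/second
tokens_per_query = 1_000     # assumed mean output tokens per query

queries_per_s = tokens_per_s / tokens_per_query   # 25.0 queries/s
joules_per_query = node_power_w / queries_per_s   # W / (query/s) = J/query
wh_per_query = joules_per_query / 3600            # 3600 J per Wh
print(f"{wh_per_query:.3f} Wh/query")             # 0.111 Wh/query
```

The real difficulty, as the post discusses, is in how representative the throughput and power figures are, not in the arithmetic itself.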
20.01.2026 23:56
This release contains a bunch of PRs I recently submitted to mark features I contributed to as stable/release candidate. Here is a thread about them 🧵:
19.01.2026 18:42
It would be neat to have the same chart but for volume (#wafers, normalised to the same size I guess)
15.01.2026 07:15
Google Colab
A video and notebook on a short Introduction to SMT solvers
colab.research.google.com/github/philz...
www.youtube.com/watch?v=cI2s...
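As a flavour of the kind of problem the linked intro covers: an SMT solver finds assignments satisfying logical and arithmetic constraints. A toy brute-force stand-in (a real solver such as Z3 works symbolically and at far larger scale):

```python
from itertools import product

# Constraints: x + y == 10 and x > y, over small bounded integers.
# An SMT solver would dispatch this symbolically; we simply enumerate.
solutions = [(x, y) for x, y in product(range(16), repeat=2)
             if x + y == 10 and x > y]
print(solutions[0])  # (6, 4)
```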
14.01.2026 19:55
64% of WebKit non-Apple contributions, 20% of Chromium non-Google, 27% of Servo, 39% of test262, and it goes on.
And doing all this as a worker-owned, employee-run cooperative. The world would be a very different place if companies like Igalia were the norm rather than the exception in tech.
12.01.2026 19:44
LLVM: The bad parts
Nikita Popov (who sadly isn't on Bluesky) has a great new post. LLVM: The bad parts www.npopov.com/2026/01/11/L...
11.01.2026 19:26
What's new in Python 3.14
Editors: Adam Turner and Hugo van Kemenade. This article explains the new features in Python 3.14, compared to 3.13. Python 3.14 was released on 7 October 2025. For full details, see the changelog...
The move to 'forkserver' as the default start method for ProcessPoolExecutor in Python 3.14 is quite a gotcha docs.python.org/3/whatsnew/3... My code was probably broken on macOS and Windows anyway, as those platforms default to the 'spawn' method.
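Code that relied on the old 'fork' behaviour can sidestep the changed default by pinning the start method explicitly. A minimal sketch ('spawn' shown here since it behaves the same on Linux, macOS, and Windows):

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

# The platform default ("forkserver" on Linux from 3.14, "spawn" on
# macOS/Windows) applies unless an explicit context is passed.
print(mp.get_start_method())

# Pin the method deliberately rather than depending on the default.
# Worker processes are only started lazily, on the first submit.
pool = ProcessPoolExecutor(max_workers=2, mp_context=mp.get_context("spawn"))
pool.shutdown()
```

With 'spawn' (and 'forkserver'), workers re-import the main module rather than inheriting state via fork, so module-level globals and open resources must be set up per worker.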
11.01.2026 08:18
Per-query energy consumption of LLMs
Can we reasonably use the InferenceMAX benchmark dataset to get a Wh per query figure?
If we try to use the benchmark results from InferenceMAX to calculate a Watt-hours per LLM query, what do we get? What potential issues are there with the benchmark for this purpose (or in general)? My new post explores this muxup.com/2026q1/per-q...
07.01.2026 20:34
Per-query energy consumption of LLMs
Can we reasonably use the InferenceMAX benchmark dataset to get a Wh per query figure?
I finally got round to finishing a much longer post on what we can conclude about Watt-hours per query based on the InferenceMAX benchmarks, and the limitations with this muxup.com/2026q1/per-q...
07.01.2026 20:40
LLVM Weekly - #627, January 5th 2026
LLVM Weekly - #627, January 5th 2026. Twelve years of LLVM Weekly, EuroLLVM CfP closing soon, GNU toolchain in 2025 summary, PCH to speed-up LLVM builds, LLVM ABI lowering library starting to land, and more llvmweekly.org/issue/627
05.01.2026 16:49
Congratulations!
02.01.2026 18:31
Wake up babe, new proposed RISC-V base ISA names just dropped. lists.riscv.org/g/tech-unpri...
How long until we see an rv32lbefx_mafc_zicntr_zicsr_zifencei_zba_zbb_zbs_zca_zfa in the wild?
20.12.2025 16:34