
Joel Martin

@kanaka

Creator of mal/make-a-lisp, noVNC, websockify, clj-protocol, conlink, instacheck, miniMAL, wam, wac, warpy, raft.js (github.com/kanaka). Interested in Clojure, WebAssembly/wasm, AI/LLMs, JS, Rust, network protocols, web browsers, testing, etc.

58 Followers · 56 Following · 75 Posts · Joined 21.08.2023

Latest posts by Joel Martin @kanaka


Babashka 1.12.215: Revenge of the TUIs
blog.michielborkent.nl/babashka-1.1...
One of the most exciting babashka releases thus far!

(If you encounter the blog post link on HN, please upvote it; please don't share a direct HN link, as that is penalized by HN's algorithm. Thank you!)
#clojure #babashka

17.02.2026 11:21 👍 32 🔁 7 💬 1 📌 2

What portal from hell opens and issues forth hordes of ladybugs in the middle of a New England January? I need to know

20.01.2026 18:18 👍 2 🔁 1 💬 0 📌 0
UNIX V4 tape successfully recovered: Crucial early evolutionary step found, imaged, and ... amazingly ... works

UNIX V4 (written in C in the 1970s) found on 50+ year old 9-track tape is restored, compiled, then boots.
www.theregister.com/2025/12/23/u...

24.12.2025 15:00 👍 7 🔁 2 💬 0 📌 0

How long until people start running an inetd (look it up, kids) for our growing piles of local MCP servers?

25.12.2025 02:36 👍 3 🔁 1 💬 1 📌 0
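The quip maps surprisingly directly onto stdio-based MCP servers, which already speak JSON-RPC over stdin/stdout. Here is a toy sketch of the classic inetd "nowait" pattern in Python; the function name and layout are illustrative, not from any real tool:

```python
import socket
import subprocess
import threading

def inetd_serve(argv, host="127.0.0.1", port=0):
    """Listen on a TCP port and, for each connection, spawn `argv` with
    the socket as its stdin/stdout -- the classic inetd "nowait" stream
    service pattern. Returns the bound port (useful when port=0)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()

    def loop():
        while True:
            conn, _ = srv.accept()
            # The child inherits the connection fd as fd 0 and fd 1.
            subprocess.Popen(argv, stdin=conn.fileno(), stdout=conn.fileno())
            conn.close()  # close the parent's copy; the child keeps its own

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]
```

Point `argv` at any stdio server and each TCP connection gets its own freshly spawned process, exactly how inetd multiplexed services decades ago.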
Photoshop 1.0 Web - Port Log

I wrote a post about GPT-5.2-Codex-Max porting Photoshop 1.0 to the web with only a few human turns of the crank here: photoslop-1.com/port, and the result is here: photoslop-1.com

I was inspired by @simonwillison.net's recent post about sending an LLM agent off to port an HTML parser

25.12.2025 02:51 👍 2 🔁 1 💬 1 📌 0
Zawinski's Law: A catalog of the laws guiding software development. Especially useful for individual contributors, new managers, and product managers who want to build well-made software.

Zawinski's law (www.laws-of-software.com/laws/zawinski/) should read "Every program attempts to expand until it can *chat with an AI.* Those programs which cannot so expand are replaced by ones which can."

I'm not sayin' I like it, but it is what it is

03.12.2025 00:48 👍 3 🔁 2 💬 1 📌 0

And yes, the "large real-world value" is that I personally want this to work well right now.

01.12.2025 19:26 👍 0 🔁 0 💬 0 📌 0

Free idea for an unsaturated LLM benchmark with large real-world value:
- input: raster image of a diagram
- output: a dot/mermaid diagram with correct nodes, edges, labels, fonts, colors, subgraphs/groupings, etc.

Bonus: it is automatable and has variable complexity.

01.12.2025 19:24 👍 1 🔁 0 💬 1 📌 0
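The scoring half of such a benchmark is indeed automatable. A minimal sketch, assuming both ground truth and model output are mermaid flowcharts, and reducing "correctness" to edge precision/recall (real mermaid has edge labels, subgraphs, and styling that this naive regex ignores):

```python
import re

# Naive parser for mermaid flowchart edges like "A --> B". This is a
# deliberate simplification; a full scorer would also compare labels,
# colors, and subgraph groupings as the post suggests.
EDGE_RE = re.compile(r"^\s*(\w+)\s*-->\s*(\w+)\s*$")

def parse_edges(mermaid_src):
    """Extract the set of (source, target) edges from mermaid source."""
    edges = set()
    for line in mermaid_src.splitlines():
        m = EDGE_RE.match(line)
        if m:
            edges.add((m.group(1), m.group(2)))
    return edges

def score(predicted_src, truth_src):
    """Return (precision, recall, f1) over the two edge sets."""
    pred, truth = parse_edges(predicted_src), parse_edges(truth_src)
    if not pred or not truth:
        return 0.0, 0.0, 0.0
    tp = len(pred & truth)                   # edges the model got right
    p = tp / len(pred)
    r = tp / len(truth)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

Variable complexity falls out for free: generate ground-truth graphs with more nodes, deeper nesting, or denser edges, render them to raster, and score the round trip.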

And to be clear, mal is in the training data and has been since the beginning (there are tells), and mal has almost 1000 tests. But it is still a notable milestone for me. And I have some private evals that also show a big jump in capability and code quality in the latest models.

26.11.2025 16:14 👍 1 🔁 0 💬 0 📌 0

Sonnet 4.5 (in opencode) was a major step change for my unofficial eval: complete an implementation of a mal interpreter (all 11 steps), without any intervention. Previous models/agents couldn't get past the second step without help. This is a task that takes me 6-16 hours by hand (from scratch).

26.11.2025 16:08 👍 2 🔁 0 💬 1 📌 0

Folks: Use AI in hard mode.

Use it to be stronger, not as a substitute for your strength. And for goodness' sake, don't let it be your voice. It's a great audience and editor to give you feedback and to train and refine your voice. Your voice is you; don't lose it!

24.11.2025 22:25 👍 3 🔁 1 💬 0 📌 0

Prototypes are the most powerful tool I know of for increasing the productivity of conversations about what software should do and how it should work

They're tools for thinking

21.09.2025 20:37 👍 36 🔁 2 💬 3 📌 0

I added a basic tool use chapter to the rlhfbook if you've long heard about it and never spent the time to learn the fundamentals of what it actually is and how the data is formatted.
(I'm becoming more tool-pilled)

16.06.2025 14:01 👍 7 🔁 1 💬 2 📌 1
An LLM Coding Agent in 6 incremental steps and about 140 lines of python: I will show you how to create a working LLM coding agent in 6 incremental steps (and about 140 lines of code). We will use the python LiteLLM library and use Github Copilot, which means all you need is...

Book added to my reading queue! BTW, if it's of interest, I have a couple of posts on building really simple coding agents:
* LiteLLM based: kanaka.github.io/blog/litellm...
* llm library based: kanaka.github.io/blog/llm-age...
Code is MIT, so feel free to use, adapt, or reference it for concrete examples.

16.06.2025 18:45 👍 1 🔁 0 💬 0 📌 0
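For a feel of what those tutorials build, here is a stripped-down agent loop with the model stubbed out so it runs standalone. The linked posts drive a real model through the LiteLLM and llm libraries; the message shapes and tool registry here are simplified assumptions, not their actual code:

```python
import json

# Registry of tools the "model" may call. Real agents expose shell,
# file-edit, etc.; one arithmetic tool is enough for the loop's shape.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def stub_model(messages):
    """Hypothetical model: requests the add tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"The sum is {result}"}

def agent_loop(user_prompt, model=stub_model, max_turns=5):
    """The core agent pattern: call model, run requested tools, feed the
    results back, repeat until the model produces a final answer."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        reply = model(messages)
        if "answer" in reply:                      # model is done
            return reply["answer"]
        tool, args = reply["tool"], reply["args"]  # model wants a tool
        result = TOOLS[tool](args)
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent did not finish within max_turns")
```

Everything else in a real agent (streaming, tool schemas, error recovery) is elaboration on this loop.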
The Apple "Reasoning Collapse" Paper Is Even Dumber Than You Think: We're this far into reasoners and neither hypesters nor skeptics really understand their significance. Also: Read Toulmin.

I know a lot of my followers are pretty AI skeptical. Which is fine. But for people trying to find a middle way in-between ridiculous waves of pro-AI and anti-AI propaganda you may find this relevant: open.substack.com/pub/mikecaul...

13.06.2025 20:51 👍 57 🔁 10 💬 3 📌 0
Design Patterns for Securing LLM Agents against Prompt Injections: This new paper by 11 authors from organizations including IBM, Invariant Labs, ETH Zurich, Google and Microsoft is an excellent addition to the literature on prompt injection and LLM …

"Design Patterns for Securing LLM Agents against Prompt Injections" is an excellent new paper that provides six design patterns to help protect LLM tool-using systems (call them "agents" if you like) against prompt injection attacks

Here are my notes on the paper simonwillison.net/2025/Jun/13/...

13.06.2025 13:35 👍 148 🔁 19 💬 6 📌 1

By surveying workers and AI experts, this paper gets at a key issue: there is both overlap and large mismatches between what workers want AI to do & what AI is likely to do.

AI is going to change work. It is critical that we take an active role in shaping how it plays out. arxiv.org/pdf/2506.06576

13.06.2025 14:37 👍 68 🔁 12 💬 2 📌 1

Lots of neat stuff in this paper showing 30% of US python commits use AI

As of the end of 2024: “the annual value of AI-assisted coding in the United States at $9.6–14.4 billion, rising to 64–96 billion if we assume higher estimates of productivity effects reported by randomized control trials”

12.06.2025 17:00 👍 45 🔁 13 💬 4 📌 2

Weird, I don't know why the embedded link card is broken. The full link in the post works though.

13.06.2025 01:02 👍 0 🔁 0 💬 0 📌 0

I posted a new version of my coding agent tutorial: kanaka.github.io/blog/llm-agent-in-five-steps This version uses the llm library by @simonwillison.net The process is now only 5 steps and the final agent is less than 80 lines of python. Still uses GitHub Copilot (no API/credit card needed).

13.06.2025 00:01 👍 5 🔁 0 💬 1 📌 0
Digital Equipment Corporation no more: Tech giants come and go

39 years ago today....

nedbatchelder.com/blog/202506/...

09.06.2025 14:50 👍 19 🔁 2 💬 0 📌 0

I can recognize good writing but can't generate it easily. I can arrive at decent prose, but the phase from unstructured thoughts to first draft is 90% of the effort/time. I think I'm not alone.

Me: write notes
LLM: first draft
Me+LLM: update until well-written, well-structured, and in my voice.

08.05.2025 18:10 👍 0 🔁 0 💬 0 📌 0
Hallucinations in code are the least dangerous form of LLM mistakes: A surprisingly common complaint I see from developers who have tried using LLMs for code is that they encountered a hallucination, usually the LLM inventing a method or even a full …

Completely agree - code is such an interesting application because you get a basic form of "fact checking" for free, unlike with prose simonwillison.net/2025/Mar/2/h...

08.05.2025 16:41 👍 6 🔁 1 💬 0 📌 0

I think the one counterexample is that it's good at writing a lot of code, which is creating more text. My rough rule of thumb is that LLMs are good when it's easy to verify the output as correct, which is quite a lot of things

08.05.2025 16:30 👍 9 🔁 1 💬 1 📌 0
The Hare programming language

mal/make-a-lisp just hopped up to 89 different languages with the addition of a Hare implementation (harelang.org). Contributed by github.com/einsiedlersp...

06.05.2025 00:02 👍 1 🔁 0 💬 0 📌 0
Paused video of a dog (Cleo) taking a treat.

llm -f video-frames:cleo.mp4 'describe key moments' -m gpt-4.1-mini

And the output from the model (transcript here):

    The sequence of images captures the key moments of a dog being offered and then enjoying a small treat:

        In the first image, a hand is holding a small cupcake with purple frosting close to a black dog's face. The dog looks eager and is focused intently on the treat.
        The second image shows the dog beginning to take a bite of the cupcake from the person's fingers. The dog's mouth is open, gently nibbling on the treat.
        In the third image, the dog has finished or is almost done with the treat and looks content, with a slight smile and a relaxed posture. The treat is no longer in the person's hand, indicating that the dog has consumed it.

    This progression highlights the dog's anticipation, enjoyment, and satisfaction with the treat.

Total cost: 7,072 input tokens, 156 output tokens; for GPT-4.1 mini that's 0.3078 cents (less than a third of a cent).
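The quoted figure checks out arithmetically, assuming OpenAI's published GPT-4.1 mini rates at the time of the post ($0.40 per million input tokens, $1.60 per million output tokens):

```python
# Reproduce the quoted cost from the token counts in the post.
# Prices are assumptions: GPT-4.1 mini's published per-token rates.
INPUT_PRICE = 0.40 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 1.60 / 1_000_000  # dollars per output token

cost_dollars = 7_072 * INPUT_PRICE + 156 * OUTPUT_PRICE
cost_cents = cost_dollars * 100
print(f"{cost_cents:.4f} cents")  # prints: 0.3078 cents
```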


New release of LLM, accompanied by a new plugin - you can now use llm-video-frames to turn a video file into a sequence of JPEGs and feed those into a long-context vision model like GPT-4.1-mini simonwillison.net/2025/May/5/l...

05.05.2025 17:53 👍 38 🔁 4 💬 1 📌 0
Filtering GitHub actions by changed files: How to limit what GitHub workflows run based on what files have changed.

In which our hero fine-tunes when GitHub actions are run, and battles mysterious YAML and GitHub oddities along the way: nedbatchelder.com/blog/202505/...

04.05.2025 14:06 👍 5 🔁 1 💬 1 📌 0

I do a similar thing to determine which mal implementations to run tests for (parallel matrix strategy) depending on what files have changed: github.com/kanaka/mal/b... I could probably simplify my python filter script using that dorny action.

04.05.2025 23:12 👍 0 🔁 0 💬 1 📌 0
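The shape of such a filter script is small. A hypothetical sketch of the idea (the directory layout and patterns are illustrative, not mal's actual structure): map each changed path either to the single implementation it touches, or, for shared files, to everything:

```python
import fnmatch

# Changes to these paths affect every implementation, so the whole
# matrix must run. Patterns here are illustrative placeholders.
GLOBAL_PATTERNS = ["Makefile", "tests/*"]

def impls_to_test(changed_paths, all_impls):
    """Return the sorted list of implementations whose tests should run,
    given changed file paths and the set of implementation directories."""
    impls = set()
    for path in changed_paths:
        if any(fnmatch.fnmatch(path, pat) for pat in GLOBAL_PATTERNS):
            return sorted(all_impls)   # shared file changed: test everything
        top = path.split("/", 1)[0]
        if top in all_impls:
            impls.add(top)             # change scoped to one implementation
    return sorted(impls)
```

The result can be emitted as JSON and fed to a GitHub Actions matrix strategy, which is essentially what the dorny paths-filter action automates declaratively.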

@emollick.bsky.social You might be interested in this: bsky.app/profile/kana... I made it to be lower barrier than other agent tutorials I've seen: no API sign-up/credits needed (via free GitHub Copilot) and hopefully it is approachable for less experienced python devs (incremental, diagrams, etc)

03.05.2025 15:43 👍 0 🔁 0 💬 0 📌 0