New blog post!
How does #AI change software engineering?
dlants.me/ai-se.html
Official post is up! Thanks @miamioh-press.bsky.social !
#booksky #writing #publishing
New blog post! Why I don't think AGI is imminent dlants.me/agi-not-immi...
#ai #llm #agi
I've found it invaluable as I get ramped up on a new system and have a hard time keeping all the info I'm learning straight.
The perfect pairing for my #neovim #nvim ai plugin https://github.com/dlants/magenta.nvim
I really think this could be a great way to manage documentation for a large/complex code repo, since it makes docs a lot more discoverable. Docs that get found end up getting maintained.
Agents make it really easy to crawl your code base and distill the actual code + comments into markdown summaries. Or to audit your markdown summaries to verify that they're still in-line with what the code actually does.
The main goal of this is that I have a growing collection of various process / system design / incantation notes, and I wanted my coding assistant to be able to discover and apply them for me. I was keeping them in skill.md files, but they were getting kinda large, and discovery wasn't great.
Spent the weekend hacking together [a personal knowledge base cli](https://github.com/dlants/pkb) / agent skill.
The big win for me was replacing the Puppeteer MCP server with the Puppeteer skill. It exposes a programmatic interface (which means fewer round trips between the agent and the tool), and it takes up far fewer tokens in my system prompt thanks to progressive disclosure!
github.com/dlants/dotfi...
I've been working on adding skills to my #nvim #neovim #ai plugin, magenta, and I think it mostly works now!
github.com/dlants/magen...
I made the transition to using just ghostty after watching this video www.youtube.com/watch?v=o-qt...
I was able to replace my "project switcher" with hammerspoon github.com/dlants/dotfi... I still really miss some parts of tmux though, like github.com/ghostty-org/...
I've released a new Neovim plugin: deepl.nvim.
It provides a simple DeepL translation workflow directly inside Neovim, with no switching windows or copying text around.
#neovim #nvim #lua #deepl
github.com/walkersumida...
After reading about it for a bit, it seems like Claude Code sprinkles in system reminders periodically to encourage the agent to use the skills jannesklaas.github.io/ai/2025/07/2...
I imagine the real trick behind skills is whatever they're doing to get the agent to make use of the progressive disclosure efficiently. Maybe tuning? Maybe they periodically remind the agent that skills are available?
Even prior to skills, I experimented with introducing progressive disclosure to my context md files - having it be a "directory", pointing to other md files that had more details about specific topics, but I find that the agent just ignores those links.
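For concreteness, a "directory"-style context file might look something like this (the topics and paths are illustrative, not from my actual setup):

```markdown
# Context index

Read the linked file for a topic before working on it:

- [Auth flow](docs/context/auth.md): how sessions and tokens are issued
- [Build system](docs/context/build.md): bundling and release steps
- [Testing conventions](docs/context/testing.md): fixtures, mocks, test layout
```

The top-level file stays small, and the details live one hop away, but in my experience the agent rarely bothers to take that hop on its own.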
hey @simonwillison.net, I don't suppose you have done any reverse engineering of the skills implementation within claude code?
I'm really curious how they instruct the agent to use the skill descriptions, and how they keep the skills from being lost in the early context as the agent works.
Rational Funk with Dave King. youtu.be/XOTZttTVEqY?...
And that's how you get the gig.
I wrote a new blog post about the culture of hard work in tech, perfectionism, self-compassion and my dreams for the tech industry. You can read it here:
Self compassion and the disposable engineer dlants.me/self-compass...
@udini.bsky.social really enjoying the book! For me it's been a nice reminder to resume my efforts to let go of chasing measurable strength and chasing fatigue. Leaving the gym when you're not tired is tough!
strongerbyscience is great for no-nonsense, evidence-backed nutrition advice. Some of it angles towards "getting ripped" but it is generally grounded in science and approaches it with a great 80/20 mentality. You might want to start from this guide: www.strongerbyscience.com/diet/
One of my dreams for my #nvim / #neovim ai plugin is to use it as a writing / research assistant. I took a small step forward on that today by improving the way it handles reading pdf files github.com/dlants/magen...
They don't :(
So the best way to get access to gpt-5 for this purpose is to set up a script that burns through $50 of tokens to get you to tier 2. That's absolutely bonkers! Just let me preload my account with $50 of credit to get access to the higher tier!
30K TPM in tier 1 basically prevents you from using a coding assistant at all. And there's no way to pay your way to a higher tier; you have to get there by organically spending $50.
I develop a coding assistant, and OpenAI feels really hostile towards developers. Their SDK is a mess: it does not adhere to JSON Schema and is poorly documented.
Recently I tried to run some experiments on gpt-5 and immediately ran into their rate limits / tiering system.
Open source tools should just let you define a regex for finding context files. (Like `autoContext` in magenta github.com/dlants/magen...)
- I added authentication via Anthropic's Max and Pro plans, which lets you use tokens the same way as Claude Code (thanks to the opencode folks for their open-source implementation showing how to do this). For heavier users, this is much cheaper than the pay-per-token API.
- I renamed "@compact" to "@fork", and changed it from a forced tool use to a regular tool. This means the agent can think before using it, improving the quality of the summary. It also now takes advantage of the conversation cache, making these tool calls a lot faster and cheaper.
Some updates for my #neovim #nvim ai plugin github.com/dlants/magen...
- I improved the turn-taking. You can now interrupt the agent by sending additional messages, or prefix your message with @async to enqueue it for the next turn-taking opportunity.