Since its first version, kipppunkt/build builds itself. Every PR in the repo is now opened by the agent.
Try it out: github.com/kipppunkt/bu...
When I first saw it working, with me reviewing from my phone while the AI iterated on my feedback, I understood this was the way forward.
We don't need to reinvent the wheel. We have decades worth of tools and practices. We just need to let AI meet us where we already are.
kipppunkt/build fixes that. You review along the way, leave feedback, change requirements on the fly, all directly from the PR.
As a bonus, it lets you run multiple agents in parallel with dependency management and automatic merge conflict resolution.
"Isn't that just a ralph loop?"
Kinda, but not really. Ralph loops are basically waterfall. You hit go and hope for the best and get no feedback until it's done. They fail the same way waterfall always has.
It picks up your requirements, implements them, opens PRs on GitHub, and responds to your review comments. You review from wherever. The couch, a café, the beach.
You don't need a TUI or pair-programming with a chatbot. Just requirements in, PRs out.
Here's a glimpse of how software engineering is going to look soon.
I wanted to spend more time with my newborn daughter and less time sitting in a terminal watching an AI type. So I built kipppunkt/build. Today I'm sharing it.
github.com/kipppunkt/bu...
If you're building TUIs with Claude Code or similar AI agents, I made an agent skill for Ink.
It organizes the docs so the agent can pull in only what it needs on demand, saving you lots of tokens.
Five-minute timelapse shot on a Pixel 9 on my last day of vacation in South Tyrol, Italy.
At some point in my life, I'll move back here, get myself the best telescope money can buy, and spend my evenings taking photos of galaxies and nebulas far away.
I interviewed someone for a dev position the other day. Remotely.
Asked the candidate a very vague question. Got an extremely thorough answer. Really impressive.
Left a weird aftertaste, though.
I just typed the question into ChatGPT.
Same answer.
Tbf, the bug was very subtle, which is also why it took so long to be found. But you're right, the whole feature seems like a high-risk, low-reward situation. It's cool tech but nothing I'd want to use in prod.
It's kinda crazy how React went from "it's just a super lightweight rendering library, bro" to "whoopsie, our server-side execution layer's complex custom data-format serializer had a prototype-pollution bug and now it's a CVE-10 RCE."
I put everything into this article including the link to an example repository and the full prompt file, so that you can just copy/paste it into your repo:
It uses many of GitHub Copilot's capabilities (instruction files, prompt files, some tools & configs) for prompt and context engineering.
With that, it's reliably able to one-shot entire test suites in my day-to-day work.
See for yourself (select 2x speed):
1. The AI figures out business logic & edge cases and then proposes tests based on that.
2. You approve or modify the list.
3. Then Copilot implements everything autonomously while you do whatever.
It's a little bit more complicated than that but that's the gist of it.
AI-generated unit tests suck.
Claude overshoots. GPT-5 is lazy af. No model has proper common sense.
I tinkered around for a long time to get GitHub Copilot to actually work as expected. The result is this workflow.
@anthropic.com is acquiring @bun.sh.
I didn't have that on my bingo card.
www.anthropic.com/news/anthrop...
I wanted to announce a new blog article today.
My blog runs on @cloudflare.social Pages.
Good thing I didn't.
"I dare you, human"
Think you're safe because you're in a metro station? Think again! Little Berta up there is watching you. Luckily, she's nice enough to leave hints on the floor to spare those who paid attention in Stats 101.
You'd be mistaken if you thought pigeon droppings on the floor were just a random annoyance. Quite the opposite! They're a real-time heatmap showing you the probability of being hit by pigeon poop at any given spot.
And you thought all that statistics knowledge was useless!
One killer use case for AI agents is making sense of minified JavaScript files.
I just built a post-install script that monkey-patches a package I depend on. What would have easily taken me half a day was finished in under an hour.
Very weird and I'm kinda nostalgic for RxJS. But seeing an `effect()` inside a constructor is something I probably still have to get used to.
Turns out my team and I are partially doing Angular ... again. Haven't touched the framework since v16. Feels like ages.
If your team has issues to work through:
We do an exercise at offsites called Elephants, Tigers & Paper Tigers.
- Elephants are things that the group isn't talking about but needs to
- Tigers are things threatening the team, risks
- Paper tigers are things that seem like risks, but aren't
Sounds very interesting! I have a lot on my plate at the moment as well, but I could spare a few hours per week for a promising project.
Is there a GitHub repo or something similar?
Sounds awesome! Where can I sign up?
Oh boy, you're not talking about running different frameworks in iframes and plugging them together, but about running multiple frameworks in a single browsing context?
That's the most interesting part! The state needs to be serializable, though.
Is there a good way to test at least some of those requirements automatically in CI? I only know of Axe but that won't test for screen reader interoperability for example.
Getting a full afternoon of uninterrupted coding is so nice.
I don't get that often but today is one of those days. Feels great!
const breadcrumbsForChildren = generateBreadcrumbs(children);
Sometimes programming just feels too much like Hänsel and Gretel.