Opus fast mode is fun
Pushed an update to my OpenClaw @Railway template: added secure access to the OpenClaw TUI.
It's opt-in ONLY (enabled via an env variable), requires basic auth credentials, and allows a single concurrent session. Super handy for quick debugging and setup checks.
Fun weekend project: a blog served over SSH
Stack
- Everything in Go
- TUI by @charmcli
- @tursodatabase DB
- Deployed to @flydotio
I was switching between the Today view and projects in "Manage" (my open-source project management tool) a lot, so I've added some keyboard shortcuts:
0 - Today view
1 to 5 - Switch projects
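The mapping is simple enough to sketch in Go; function and view names here are hypothetical, not Manage's actual code:

```go
package main

import (
	"fmt"
	"strconv"
)

// viewForKey maps the shortcuts above to a view: "0" opens Today,
// "1".."5" jump to the project at that position (if one exists).
func viewForKey(key string, projects []string) (string, bool) {
	if key == "0" {
		return "today", true
	}
	n, err := strconv.Atoi(key)
	if err != nil || n < 1 || n > 5 || n > len(projects) {
		return "", false
	}
	return projects[n-1], true
}

func main() {
	projects := []string{"manage", "openclaw", "paas"}
	v, _ := viewForKey("2", projects)
	fmt.Println(v) // openclaw
	_, ok := viewForKey("9", projects)
	fmt.Println(ok) // false
}
```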
Thanks to AI, no one buys SaaS anymore; they just ask Claude to build it
Spent the last few weeks updating my Neovim and shell config. I always start the year with an improved setup; it's become an annual tradition ☕️
Demo time! This time we're migrating my Redis server from an OVH instance in the US to Hetzner in the EU.
The flow:
> pause container
> backup volume on old server
> restore volume in new server
> create and start container
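The steps above, sketched as a command plan in Go. Host, container, and volume names are placeholders, and the tar-over-ssh transfer is just one plausible way to move a volume, not necessarily the real flow:

```go
package main

import "fmt"

// migrationPlan expands the four steps into concrete commands,
// run from a machine that can ssh into both hosts.
func migrationPlan(container, volume, oldHost, newHost string) []string {
	backup := fmt.Sprintf("docker run --rm -v %s:/data -v /tmp:/backup alpine tar czf /backup/%s.tgz -C /data .", volume, volume)
	restore := fmt.Sprintf("docker run --rm -v %s:/data -v /tmp:/backup alpine tar xzf /backup/%s.tgz -C /data", volume, volume)
	return []string{
		// pause container
		fmt.Sprintf("ssh %s docker pause %s", oldHost, container),
		// backup volume on old server
		fmt.Sprintf("ssh %s %q", oldHost, backup),
		// move the archive across
		fmt.Sprintf("scp %s:/tmp/%s.tgz %s:/tmp/%s.tgz", oldHost, volume, newHost, volume),
		// restore volume on new server
		fmt.Sprintf("ssh %s %q", newHost, restore),
		// create and start container
		fmt.Sprintf("ssh %s docker run -d --name %s -v %s:/data redis:7", newHost, container, volume),
	}
}

func main() {
	for _, cmd := range migrationPlan("redis", "redis-data", "ovh-us", "hetzner-eu") {
		fmt.Println(cmd)
	}
}
```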
I tested this setup with BunnyDNS across 4 regions (US, Singapore, Europe, Sydney) and it has worked really well so far!
Now when I deploy a service, I replicate to every region. The agent, proxy, and control plane handle the rest.
So any server can handle any challenge. Once ready, the generated certificate syncs to every server.
This needed perfect coordination between control plane, Traefik proxy, and server agents for certificate sync.
Instead of each proxy creating its own certificate, I delegate to the control plane. When a custom domain is added, the control plane triggers certificate generation. Any challenge request hitting the domain (could be any server in the fleet) gets rewritten by the proxy to the control plane.
When a proxy server tries to generate SSL certificates, the HTTP-01 ACME challenge needs to hit the correct server. With GeoDNS routing users everywhere, this couldn't be done reliably.
The solution: Central ACME.
My goal: any node can become a proxy, terminate TLS for any service, and route to available servers. Always hitting the nearest server.
Since I couldn't afford Anycast / BGP, I went with GeoDNS. DNS always resolves to nearest server, but there was a big problem: ACME challenges.
One of the harder problems I solved in my open source PaaS was proximity steering.
Basically load balancing so users get routed to the physically closest server with least latency.
For some reason, @bunjavascript was eating up the CPU on my Next.js app. Switching to Node 22 dropped CPU usage from 40% to 8%. github.com/techulus/cl...
In our server fleet, any node can become an edge, all edges can terminate TLS, and we've got central ACME plus certificate sync across edges. Thanks to all this, users get their requests served faster from the nearest edge!
I must thank @AmpCode for helping me crack the final bits.
Quick update on my open source PaaS: I've added support for proximity steering using just GeoDNS, with zero provider-specific code needed.
The past few days have mostly involved fixing bugs and improving GitHub deployments on my PaaS platform. Since my fleet mixes architectures (amd64 and arm64), I need to ensure multi-architecture builds work smoothly.
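With docker buildx that's essentially one command per image; a small Go helper to assemble it (image name and tag below are placeholders, and the platform's real pipeline may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// buildxCmd assembles a multi-arch build command covering the fleet's
// mix of amd64 and arm64 in a single pushed manifest.
func buildxCmd(image, tag string, platforms []string) string {
	return fmt.Sprintf("docker buildx build --platform %s -t %s:%s --push .",
		strings.Join(platforms, ","), image, tag)
}

func main() {
	fmt.Println(buildxCmd("registry.example.com/app", "latest",
		[]string{"linux/amd64", "linux/arm64"}))
}
```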
Is GeoDNS a viable option? 🤔
The one problem that seems very hard to solve in my open-source PaaS is proximity steering. When I run servers across the globe, with workloads and a proxy to terminate TLS, I should always route users to the nearest server. One way to achieve this is Anycast, but it's very expensive.
AI has pushed us to think CLI first.
If your tool can run from CLI, an LLM can use it. You don't need an MCP server.
Love these diagrams by Claude
Anyone starting their software engineering career should read this: addyosmani.com/blog/21-les...
Amazing blog post by @addyosmani! This is so true in large organisations, it took me a while to realise this.
Another quick demo of my PaaS #buildinpublic
> Private networking using Wireguard mesh
This doesn't look comfy
> What I'm building = a distributed peer-to-peer container fabric: agent-driven, pull-based orchestration, private-first networking, disposable workloads, proxy nodes to handle ingress, and finally, a touch of magic.
> Coolify/Dokploy = centralised control that is SSH-driven, push-based orchestration, not multi-server native.