put all my shell scripts in one repo.
Feel free to use them.
github.com/regenrek/sh...
@kevinkern.dev
Teaching & building AI apps to reduce the workload. Building instructa.ai Academy. Curated AI prompts: instructa.ai/en/ai-prompts. Raised 20k for UA. Coder & design. Website: kevinkern.dev. Vienna, Austria.
Dropbox → Nextcloud
Github Runner → Local
Web Hosting → Coolify + Hetzner
Postman → bruno
What I'm still keeping:
Notion
Vercel
Over the last few weeks I've been moving away from a few cloud providers.
I've wanted to do this for a while, not because they're bad, but because a lot of them are built for bigger teams/enterprise and I don't need most of those features.
Slack → RocketChat
Pandadoc → Docuseal
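For the Dropbox → Nextcloud swap, a minimal self-hosted sketch with Docker Compose. The `nextcloud` image is the official one; the host port and volume name are my own illustrative choices, and a real setup would add a database and HTTPS:

```yaml
services:
  nextcloud:
    image: nextcloud          # official image, serves on port 80 internally
    ports:
      - "8080:80"             # illustrative host port
    volumes:
      - nextcloud_data:/var/www/html
    restart: unless-stopped

volumes:
  nextcloud_data:
```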
7. told claude to migrate all the gemini 3 frontend code to vite, use package.json (aistudio code does inline imports), and port styles to tailwind/shadcn + remove all hardcoded color schemes
8. some more iterations with opus + gpt-5.2 (wire real data instead of mock data + add jira api)
9. done
4. create a new tauri desktop app template
5. paste all the code into the boilerplate
6. add deepwiki & context7 MCP
Jira is so slow and bloated that it was worth spending some time building my own desktop app (that syncs incoming issues)
How?
1. wrote a spec in gpt-5.2
2. paste spec to aistudio (Gemini 3 Pro) for frontend
3. paste spec to gpt pro for initial mvp backend code
I mean, I really like Opus. It's like those friends who tell the best dressed-up stories. And deep down, you know it's mostly just for show.
So peer review is back.
That's another reason why GPT-5.2 reviews Opus-4.5.
Opus is great at rushing through the codebase and identifying mistakes. At the same time, it's sloppy and sometimes stops halfway instead of following through.
It still overpromises, but with GPT-5.2 oversight we consistently land on high-quality results.
I've also added a skill that checks other agents' work and progress if we face a blocker.
Since Codex is taking forever for larger codebases, I split my tasks across 5-8 agents.
The important part is that the work is split by domain so the agents don't overwrite each other's changes.
But Codex is pretty good at recognizing others' work and stops or asks if it should proceed.
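A minimal sketch of that domain split (the domain names and file layout here are my own illustration, not from the post; `codex exec` is the Codex CLI's non-interactive mode):

```shell
# Sketch: one task file per domain so parallel agents don't collide.
mkdir -p tasks
for domain in frontend backend infra docs; do
  printf '# %s tasks\n' "$domain" > "tasks/$domain.md"
done

# Then hand each agent exactly one domain, e.g.:
#   codex exec "work through tasks/frontend.md" &
ls tasks
```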
One tip:
Cursor's composer-1 model is pretty fast, which means you could also try running the above prompt via the Cursor CLI.
I was playing around with Codefetch and oracle to gather context and send hard tasks to GPT-5.1 Pro.
here's a slash command I ended up with.
GPT-5.1 Pro dropped
It's a little annoying that Codex removes things "for good".
I'm telling it this from now on.
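The original instruction isn't shown in the post, but a rule along these lines could live in the repo's agent rules file. The wording below is my own hypothetical example, not the author's:

```shell
# Hypothetical rule (my wording): tell agents not to hard-delete things.
cat >> AGENTS.md <<'EOF'
- Never delete code or files outright. Move them aside (e.g. to a deprecated/ folder)
  or leave them commented out, and call out the removal in your summary.
EOF
```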
Instead of switching between MCP servers like chrome devtools, context7, linear... you create a single workflow (e.g., frontend_tools) that orchestrates them. Your MCP client (Cursor, Claude Code) sees minimal needed tools.
Just released Oplink. ⚡️ It combines multiple MCP servers into unified workflows.
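On the client side, the win looks roughly like this: the `mcpServers` shape is what Cursor and Claude Code actually use, but the single entry below (name, command, args) is an illustrative placeholder, not Oplink's documented invocation — check the Oplink README for the real one:

```json
{
  "mcpServers": {
    "frontend_tools": {
      "command": "npx",
      "args": ["-y", "oplink"]
    }
  }
}
```

One entry instead of separate chrome-devtools, context7, and linear entries — the orchestrator fans out behind it.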
Nice add-on for Codex. Oracle sends your hard tasks to GPT-5 Pro.
Tip: You can pair it with codefetch to set up the context first.
TIL: ChatGPT can create simple sound files for you (I was looking for free sounds for codex-1up).
A fresh codex-1up update is out.
- Quick install "npm install -g codex-1up"
- Add custom sounds when a task is finished
- Guided setup (easy for beginners)
- Lot of cleanup, improvements
- New updated profiles to start quickly
Coming in the next version of codex-1up: a new, fast guided setup.
A new stealth model, polaris-alpha, has appeared on openrouter. Time to give it a try.
There are multiple ways to serve context. But you should always keep an eye on your context window and not pollute it with unnecessary or wrong information. That's your "job" as an agentic engineer.
- The agent can read linter output
- The agent has access to external tools served via CLI/MCP
- The agent has memory of your past decisions
- The agent has access to tasks written in .md
What does high context mean?
- The agent is able to run tests against your codebase
- The agent has access to logs
- The agent has rules available (AGENTS.md)
- The agent has access to semantic search (that's why it's still useful to index your codebase)
- The agent has relevant docs available
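The list above can be sketched as a small script that bundles context before an agent run. The file paths (AGENTS.md, tasks/todo.md) are illustrative conventions, not a standard:

```shell
# Sketch: assemble a high-context bundle for an agent run.
ctx=context.md
{
  echo "## Rules"
  cat AGENTS.md 2>/dev/null || echo "(no AGENTS.md found)"
  echo "## Open tasks"
  cat tasks/todo.md 2>/dev/null || echo "(no task file found)"
  # you'd also pipe in linter output, failing-test logs, relevant docs, etc.
} > "$ctx"
grep -c '^##' "$ctx"   # two section headers so far
```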
To explain: with better models and a codebase that serves as context, you can just talk to the model.
The less context you have, the more specific your prompt should be.
My earlier take is deprecated.
Today I updated the TanStack starter project + agent rules. You can now quickly build web apps with one of the best web dev stacks available.