🎯 spot on. this is also my general take on why software engineers will always exist, but we're going to need far fewer of 'em to handle that 30%
i still actively read and curate what goes into my obsidian notes cause it's sacred to me. but i worry about losing those extra 🧠 connections that happen when you actually type or write out the notes. 🤔
on the flip side, i'm consuming & learning way more with this approach.
not sure it's a good thing (yet) but i've started treating my @obsidian.md vault as my (private) persistent memory.
i go back and forth with an agent to research/learn, then dump "our" learnings into a note.
to refresh my learning, i ask the agent to read the note and then resume
if you're dying to try /remote-control with claude code, you should really try opencode, which is almost the exact same thing (but imo a simpler execution)!
@kau.sh and I bring you some agentic coding goodies on the pod's newsletter.
Check it out!
This was fun for both of us!
We share some of how the Fragmented 🎙️ is made + tips from @iurysouza.dev and me, and a crazy way we used AI to solve very real audio problems.
it's tempting to think of it as just an open-source Claude Code/Codex variant.
it's def more! built on a server-client architecture, it makes hopping between mobile, desktop, and web super easy.
opencode is basically openclaw for coding.
an interesting focus area in the world of agent orchestration is "access fluidity".
how easy is it to reach the agents doing the work on your machine - from your phone, terminal, IDE, or browser?
kau.sh/blog/opencod...
new episode is out! spot the easter 🥚?
recent models have shown massive improvement owing to clever use of "modes" and subagent dispatch. it really clicked for me after chatting with @iurysouza.dev
listen to improve your fundamentals! (not just tactics).
🎙️ Check out the new @fragmentedpodcast.com episode on subagents!
They can be a huge unlock once you understand how it all works, and you can get real benefits without having to swarm them.
@kau.sh and I built an RTS mental model for how they work (yes, we did that). Check it out!
🤫 (I use keynote with magic move to record the animations and play it as a video in the markdown) 🩰
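for intuition, here's a toy sketch of the subagent dispatch idea (hypothetical names, not any real framework's API): the orchestrator hands each task to a subagent with a fresh context, and only the compact result flows back.

```python
# Toy model of subagent dispatch -- illustrative only, not a real agent API.
def subagent(task: str) -> str:
    # A real subagent would run its own model loop in an isolated context;
    # here we just fabricate a compact result.
    return f"summary({task})"

def orchestrator(tasks: list[str]) -> list[str]:
    # Only each subagent's final summary enters the orchestrator's context,
    # so the main context stays small no matter how messy the work gets.
    return [subagent(t) for t in tasks]

print(orchestrator(["scout map", "build base"]))
```

the key design point (and the RTS analogy): the commander never reads the scout's full scratchpad, only the report.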
I used to love Keynote for my slide decks but I've since become a full iA Presenter convert (from the folks behind iA Writer). tastefully chosen constraints and display, all with the power of markdown.
agent skills are the most powerful construct for AI coding today. it's the quickest way to get better results with less context bloat.
but with great power comes responsibility... they can become a prompt injection vector.
this episode is a crash course on agent skills! 🎧
Everyone's talking about OpenClaw (Clawdbot) right now. But what makes it so extensible? Agent Skills!
modular SKILL.md files that teach an agent how to do anything.
We do a deep dive in episode 304 on when to use them, how they work, and how not to get pwned.
🎧 fragmentedpodcast.com/episodes/304
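for context, a skill is just a folder with a SKILL.md whose frontmatter tells the agent when to pull it in. a minimal hypothetical sketch (the `name`/`description` fields match my reading of the public spec, but double-check it; everything else here is made up):

```markdown
---
name: changelog-writer
description: Use when the user asks to draft or update a CHANGELOG entry.
---

# Changelog Writer

1. Read the diff or commit messages provided.
2. Group changes under Added / Changed / Fixed.
3. Keep each bullet to one line.
```

the progressive-disclosure trick: only the frontmatter sits in context by default; the body loads when the skill actually fires.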
Stood up a repo with some of the claude plugins I've put together for personal projects. Things like setting up Metro DI, android a11y, and Airbnb's Showkase UI gallery library.
github.com/benoberkfell...
PSA: the official Anthropic GitHub org is github.com/anthropics (plural "s")
not the lucky but slightly disturbing zombie shooter github.com/anthropic
if you're installing official Claude plugins or skills and 404ing... now you know. that 's' is doing some heavy lifting there.
Public gist at https://gist.github.com/kaushikgopal/d92cd6a483c8b8683a2cd04137257866
people out there be complicating their claude code statusline. keep it simple and clean!
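the general idea (not the gist itself): Claude Code pipes session JSON to your statusline command and prints whatever one line you emit. a minimal sketch — the field names in the sample payload are assumptions for illustration, not the documented schema:

```python
import json
import os

def statusline(payload: str) -> str:
    # Render a one-line status from the session JSON piped to a
    # statusline command. Field names here are assumptions.
    data = json.loads(payload)
    model = data["model"]["display_name"]
    cwd = os.path.basename(data["workspace"]["current_dir"])
    return f"{model} | {cwd}"

sample = json.dumps(
    {"model": {"display_name": "Opus"},
     "workspace": {"current_dir": "/home/me/project"}}
)
print(statusline(sample))  # Opus | project
```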
this is some good news from the @mozilla.org front.
they listened to user feedback and added a specific feature. more of this please!
blog.mozilla.org/en/firefox/a...
It's really useful to get some intuition for how LLMs work so that you can get the most out of them.
This was tricky to do, but I guess @kau.sh and I did a good job of breaking down how LLMs work in a 20-minute, audio-only episode.
I always learn something when I revisit this topic.
Check it out! 🎙️
If you can't explain it to a 6-year-old...
Neither of us is 6, but I definitely came out of this one with a much better understanding of how Large Language Models work 🤖
@iurysouza.dev and I try to explain the most important concept of our time in 20 minutes.
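if you want a feel for the core loop we describe, here's a toy next-token step (made-up three-word vocabulary and logits — real models do this over tens of thousands of tokens, billions of times):

```python
import math

def softmax(logits, temperature=1.0):
    # Turn raw scores into a probability distribution; lower temperature
    # sharpens it, higher temperature flattens it.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for a tiny vocabulary after the prompt "the cat sat on the"
vocab = ["mat", "dog", "moon"]
probs = softmax([3.0, 1.0, 0.5])
best = vocab[probs.index(max(probs))]
print(best)  # mat
```

picking the argmax like this is "greedy" decoding; real inference usually samples from `probs` instead, which is where temperature earns its keep.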
If you're coding with AI today, you've definitely run across the term MCP.
Listen to the latest episode of @fragmentedpodcast.com to learn more about them, the trade-offs, and some of the most useful ones to install today.
@iurysouza.dev pointed me to one that wasn't even on my radar!
at least 20% of the dopamine from my AI coding is delivered directly by the font TX-02 (Berkeley Mono) ✨
treat yourselves to the ultimate TUI upgrade... usgraphics.com/products/ber...
The new season starts now. 🎙️
The AI Coding ladder
@kau.sh and I dive into AI coding paradigms and the simple loop driving these agents.
Give it a listen!
Ep 301 is out! 🎤💪
Make sure to listen to the slick tips at the end of the episode.
@iurysouza.dev shares a nifty way of transferring your sessions between agents and I mention how I get better results from agents!
I always suspected Skills would gain wide adoption given its power, progressive disclosure, etc. Codex, Gemini, etc. have already adopted it.
Does anyone know if AAIF donation is on the roadmap for Skills?
Also curious whether AAIF is recognized as the main body now for all this.
So MCP & AGENTS.md are under AAIF (Linux Foundation): "open" in the true governance sense.
Agent Skills is different though: it's a public spec on a github repo but still "maintained by Anthropic".
A common question I get is, "Which AI model should I use for X task?" Models evolve so rapidly that I never felt like I could give an answer that would stay relevant for more than a month or two. This year, I finally feel like I have a stable set of model choices that consistently gives me good results. I'm jotting it down here to share more broadly, and to trace how my own choices evolve over time.

- GPT 5.2 (High) for planning and writing, including writing plans
- Opus 4.5 for anything coding, task automation, and tool calling
- Gemini's range of models for everything else:
  - Gemini 3 (Thinking) for learning and understanding concepts (underrated)
  - Gemini 3 (Flash) for my go-to quick-answer questions
  - Nano Banana (obv) for all image generation
- NVIDIA's Parakeet for voice transcription
AI model choices as of today!
kau.sh/blog/ai-mode...
Been a Fragmented listener since before I got my first dev job. Surreal to now be joining @kau.sh as co-host for this new AI-focused season!
Looking forward to getting into the ins and outs of AI coding! What actually works, what doesn't, and how to think about all of it.
Let's do this! ποΈπ€
Big changes coming to @fragmentedpodcast.com
New direction. New cohost. New episode numbering.
Full story drops in the episode coming out on Monday. But if you want a sneak peek, there are updates in our email newsletter.
buttondown.com/fragmentedcast
haha exactly what I do too. it's just annoying cause I have to string a cable down there to keep it charged - feels excessive for that one mighty key