How I Built My Website | Isaac Flath
Read "How I Built My Website" - Technical writing by Isaac Flath on software development, AI, and machine learning.
I visit my own website because it makes me happy. AI wrote every line of code, but I directed everything. If a thoughtless prompt can make what you make, why would I come to you? I can prompt AI myself
This post shows the core skill for using AI: taste
isaacflath.com/writing/how-...
08.03.2026 13:52
Learning from Failure: MonsterUI | Isaac Flath
I spent months building MonsterUI. Now, when people ask if they should use it, I hesitate.
MonsterUI redefined what I love to build, yet I couldn't explain why I stopped using it. I felt uncomfortable admitting that I failed
Here's what happened and what I learned
isaacflath.com/writing/lear...
26.01.2026 23:04
Yes, though there's a saying about this: the last mile (the last 10%) taking most of the time is not new with vibe coding. So getting that first 90% in an hour is a massive win!
23.01.2026 21:11
AI generates code faster than humans can read. When the machine outpaces the reviewer, the team loses understanding. We need to keep humans in control.
Jake Levirne of SpecStory shared how they adapt the review process to the task's risk.
elite-ai-assisted-coding.dev/p/legible-ai...
23.01.2026 19:12
I used an AI agent to build a Discord bot. I wanted it to save images from a channel to S3. The agent wrote the code, explained deployment, and debugged it when it went silent. It's a small tool I use daily.
Link: isaacflath.com/writing/disc...
14.01.2026 20:42
How I Use My AI Session History
Step-by-step example of the tool I use, how I use the UI, and how I use it agentically.
I often ask: Why this way? What were the trade-offs? Was X considered? Why not Y?
Ex: "Why is the react editor in the main python repo and not its own module?"
AI logs and other context are a key part of answering that. Here's what I do 👇
elite-ai-assisted-coding.dev/p/how-i-use...
30.12.2025 18:13
I quit my job ~6 months ago to focus on learning and having a bigger impact.
The AI coding course with Eleanor Berger is one (of several) projects that is better than I imagined on both fronts
Cohort 1 was a huge success, and Cohort 2 in January is gonna be even better!
22.12.2025 01:54
My favorite thing about reducing token usage for coding agents: better search, with @mixedbreadai
12.12.2025 16:57
Multi-vector search means semantic search is back for coding agents...and it was always clear it would be.
11.12.2025 21:58
60% of tokens are spent searching and exploring the codebase.
Agents need better search.
11.12.2025 16:59
mgrep helps explore complex documents and pdfs
10.12.2025 22:01
mgrep with Founding Engineer Rui Huang
The Problem with grep for AI Agents
For a detailed write-up and the full recording of this talk, go here!
elite-ai-assisted-coding.dev/p/mgrep-wit...
10.12.2025 18:28
mgrep is also multimodal. It can natively index and search images, diagrams, and PDFs in your repository.
An agent can find relevant information in visual assets that are completely invisible to text-only tools
Very useful for legal, e-commerce and many other domains
And cats
10.12.2025 18:28
Boosting Claude: Faster, Clearer Code Analysis with MGrep
I ran an experiment to see how a powerful search tool could improve an LLM's ability to understand a codebase.
The results from their internal tests with Claude are significant. Using mgrep led to:
- 53% fewer tokens used
- 48% faster response
- 3.2x better quality
By getting the right context immediately, agents stay on track. I saw similar results.
elite-ai-assisted-coding.dev/p/boosting-...
10.12.2025 18:28
Agents use mgrep for broad, semantic exploration and grep for precise symbol lookups
Instead of guessing at keywords for grep, an agent makes a semantic query:
mgrep "how is auth implemented?"
It then uses grep for precise function/class name searches.
No guessing.
10.12.2025 18:28
mgrep is a command-line tool that brings semantic search to your codebase, letting agents search by intent, not just keywords.
It's much faster than grep alone, and works much better than traditional semantic search.
10.12.2025 18:28
mgrep with Founding Engineer Rui Huang
The Problem with grep for AI Agents
AI coding agents burn tokens guessing keywords for grep and flood the context window with noise.
There's a better way.
I hosted a talk by @ruithebaker, a founding engineer at @mixedbreadai, about their solution.
mgrep 🧵
elite-ai-assisted-coding.dev/p/mgrep-wit...
10.12.2025 18:28
Step 5: Extreme Quantization
ColBERT goes one step further. After PQ, it calculates the "residual" error (the small difference between the original and the approximation). Then, it quantizes that error, often down to just 1 or 2 bits per value!
01.12.2025 19:13
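The residual step described above can be shown with toy numbers. This is a minimal sketch, not ColBERT's actual implementation: the fixed step size and the 1-bit sign encoding are assumptions for illustration.

```python
# Toy residual quantization: after PQ maps a vector to its nearest
# centroid, the residual is what's left over. Quantizing the residual
# coarsely (here: 1 bit per value, just its sign times a fixed step)
# recovers some of the precision PQ lost.

centroid = [0.50, -0.25, 0.10, 0.80]   # the PQ approximation
original = [0.53, -0.31, 0.08, 0.86]   # the true vector

# Residual: the small difference the centroid couldn't capture.
residual = [o - c for o, c in zip(original, centroid)]

# 1-bit quantization: store only the sign of each residual value;
# the step size is a hypothetical constant for this toy example.
step = 0.04
bits = [1 if r >= 0 else 0 for r in residual]
decoded = [step if b else -step for b in bits]

# Reconstruction: centroid plus the decoded residual.
refined = [c + d for c, d in zip(centroid, decoded)]
```

Each value costs one extra bit, and `refined` lands closer to `original` than the centroid alone.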
Step 3: Store which cluster/centroid each sub-vector belongs to.
Step 4: Reconstruct by looking up centroids and combining them.
01.12.2025 19:13
Step 2: Cluster each collection of sub-vectors separately to find the centroids
01.12.2025 19:13
Step 1: Split each embedding in half (make sub-vectors).
01.12.2025 19:13
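Steps 1-4 from this thread can be sketched end to end in pure Python. This is a toy illustration, not a real PQ library: the tiny k-means, the dimensions, and the codebook size of 16 are all assumptions chosen to keep it short.

```python
import random

random.seed(0)

def dist2(a, b):
    # Squared Euclidean distance between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Tiny k-means for illustration only (not production quality)."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist2(p, centroids[j]))
            groups[i].append(p)
        # Move each centroid to the mean of its group.
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids

# Toy "embeddings": 100 vectors of dimension 4.
embeddings = [[random.random() for _ in range(4)] for _ in range(100)]

# Step 1: split each embedding in half, giving two sub-vector collections.
halves = [[e[:2] for e in embeddings], [e[2:] for e in embeddings]]

# Step 2: cluster each collection separately to get a codebook of centroids.
codebooks = [kmeans(h, k=16) for h in halves]

# Step 3: store only the centroid index for each sub-vector.
def encode(e):
    return [
        min(range(16), key=lambda j: dist2(sub, codebooks[i][j]))
        for i, sub in enumerate([e[:2], e[2:]])
    ]

# Step 4: reconstruct by looking up centroids and concatenating them.
def decode(code):
    return codebooks[0][code[0]] + codebooks[1][code[1]]

code = encode(embeddings[0])   # two small integers instead of four floats
approx = decode(code)          # close to the original embedding
```

The compression comes from Step 3: every vector is stored as a couple of small centroid indices, while the full-precision centroids are shared across the whole collection.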
However, AI embeddings aren't single numbers; they're vectors (long lists of numbers). This is where Product Quantization (PQ) comes in. It's specifically designed to compress these vectors.
It "refactors" similar embeddings to reduce duplication by using k-means clustering. Let's break it down.
01.12.2025 19:13
In the simplest form, quantization is a bit like rounding numbers. You give up precision to save space.
With scalar quantization, instead of storing a full 64-bit number, you can store an 8-bit code representing its approximate value (8x compression ratio).
01.12.2025 19:13
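The rounding idea above fits in a few lines. A minimal sketch, assuming values in a known range; the helper names are hypothetical, not from any library.

```python
# Scalar quantization: map each float in a known range [lo, hi]
# to an 8-bit code (0..255), then decode back to an approximation.

def quantize(values, lo, hi):
    """Encode floats in [lo, hi] as 8-bit integer codes."""
    scale = (hi - lo) / 255
    return [round((v - lo) / scale) for v in values]

def dequantize(codes, lo, hi):
    """Decode 8-bit codes back to approximate floats."""
    scale = (hi - lo) / 255
    return [lo + c * scale for c in codes]

values = [0.12, -0.53, 0.98, -0.07]
codes = quantize(values, lo=-1.0, hi=1.0)
approx = dequantize(codes, lo=-1.0, hi=1.0)
# Each code fits in one byte instead of eight: 8x compression,
# at the cost of a small rounding error per value.
```

The trade-off is exactly the one in the post: one byte per value instead of eight, with the error bounded by the step size of the 256-level grid.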
mgrep (by MixedBread) is how I've been using this. It works with any coding agent as a CLI tool.
More embeddings (token vs chunk) means more info, which makes sense. But how can that scale?
It comes down to the quantization
isaacflath.com/blog/2025-1...
Here are the core ideas:
01.12.2025 19:13
Q: Why is semantic search for code coming back? (Cursor, mgrep, etc)
A: Multi-vector architecture
Q: Why do I care?
A: MUCH less token usage + better responses
Q: What makes it work?
A: Token (not chunk) level embeddings with extreme quantization
Here's what I learned 🧵
01.12.2025 19:12