Finally trying out Wispr Flow. I'm not usually someone who talks at their devices, but I figured if this is going to be normal, I should learn how to do it and challenge myself. Side benefit is you can talk faster than you can type.
screenshot of a diff with content:

## Visual Consistency
- Sections that present the same concept (e.g., numbered step cards) should use the same visual treatment site-wide.
- The standard step/feature card style: `rounded-xl border border-border bg-white/[0.03] p-6` with number + title + description.
- Don't create alternate layouts (centered open text, timeline dots, etc.) for content that's structurally the same as an existing card grid.
Here's a diff that was suggested by Claude for design rules to ensure better website design outcomes in the future. Will it work? Who knows??? Vibes-based context engineering isn't measurable!
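For concreteness, here's a minimal sketch of what that "standard step card" rule might render to. The component shape and names are my assumptions; only the Tailwind utility classes come from the diff in the screenshot.

```typescript
// Hypothetical step card matching the rule's described style.
// Only the class string is from the diff; everything else is illustrative.
type StepCard = { num: number; title: string; description: string };

function renderStepCard({ num, title, description }: StepCard): string {
  // Standard card treatment per the rule:
  // rounded-xl border border-border bg-white/[0.03] p-6
  return `<div class="rounded-xl border border-border bg-white/[0.03] p-6">
  <span>${num}</span>
  <h3>${title}</h3>
  <p>${description}</p>
</div>`;
}
```

The point of the rule is that every structurally similar section reuses this one treatment instead of inventing a new layout per section.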
The real lever for AI coding performance isn't just which model you pick; it's context engineering. Your rules, your docs, your code. But without a benchmark on YOUR codebase, every change is just guesswork.
Wrote more about this: contextbridge.ai/blog/swe-ben...
Two major model releases in 24 hours. Everyone's comparing SWE-Bench scores and feeling the vibes. But you wouldn't hire an engineer based on LeetCode alone, so why pick AI coding tools that way? Your codebase is unique. Generic benchmarks can't capture that.