Hahaha, you mean the pink overlays?
Back on that virtual try-on shit ✨
This looks cool! I'll be there
Interesting to see that Rust is still harder for LLMs than other languages, and this matters on hard tasks! It's not automatically the best choice when you need something fast and reliable.
Yes, the performance gains are mostly Vite's accomplishment. But even if this was a regular refactor (which I'm pretty sure it isn't), that's a lot of API to cover.
I don't have a deep understanding of what Next.js does, but the message of "we reimplemented the entire API surface of this heavily iterated 10 y/o project and made it 4x faster in one week" is just absolutely wild
Wow, I didn't know this exists! Can't keep up with this stuff
I love that this exists!
From subjective experience, Sonnet now feels just as slow as Opus. Maybe not in raw tokens, but in outcomes.
I'd love to see model labs start measuring this more.
It's interesting to see the variety of ways you interpret this, but you've responded three times to this post now, FYI!
does this ever help you?
I'm afraid I'm going to be reposting this every day until the far side of the singularity.
Work on the things you want to see more of!
Work on flourishing!
Work on things that are freeing!
Maybe we should rename "man pages" to "hu pages" then
Wdym? Humans do too, right?
And that documentation can and often should be different depending on target audience: human or AI.
Humans need a story and intrigue, AI will grind through dry docs all day. But if it works for humans, it will almost certainly work for AIs.
This is so true and I'm happy it turned out this way. Humans + AI both benefit.
The reason why it's MORE important now is that LLMs forget everything while humans learn from trial and error. We need to compress all that learning into documentation.
Looks like Sonnet 4.6 is much less token efficient, which brings it close to Opus cost level. A bit disappointing!
This is so awesome. The fact that I can turn my expert coding agent into a really good teacher by dropping some files in a folder is just wild.
Sonnet 4.6 offers strong performance at any thinking effort, even with extended thinking off. As part of your migration from Sonnet 4.5, we recommend exploring across the spectrum to find the ideal balance of speed and reliable performance, depending on what you're building.
Ooh, I like this direction. I've been experimenting with keeping thinking disabled for straightforward tasks, mainly for speed.
I really wish model labs would start posting benchmarks on speed and token efficiency. Can't we just measure the time/tokens to complete existing benchmarks?
Especially interested to see if smarter models can be faster on certain tasks by needing fewer tool calls.
How can I see the lexicon for this? I'd like to use it to learn about the app's features
Letβs make some virtuous software today
Thanks, this will make me healthier
The connection with Chess/Go players is a really interesting one to make. Does anyone have some good material about how they ended up learning and growing through it?
They're mostly fine on Bluesky!
Sounds useful since I sometimes find myself digging into source code of libraries.
Not as relevant for Python projects since the source code is distributed with the package (no build step).
For TS, I wish this was solved at the language level so I could just cmd+click a function and see source.
stranger: what do you do for work?
me, shamefully: computer
For all you digital gifters, I made a heart-shaped QR code generator
Can also be used for links or secret love notes! 🤫
(only tested scanning on iPhone)
qheart.vercel.app