And yet, not a single uppercase character in your post
Really enjoyed this talk:
* Coding vs software engineering
* Brain rot caused by AI agent over-reliance
* Interactive programming
www.youtube.com/watch?v=dHBE...
LLMs are really good at intelligence cosplay.
An LLM answers a question about how humans hear sound, but uses the word "us" as if it hears sound the same way we do: "What you're describing is the perceptual quality called timbre (pronounced 'TAM-ber'). It's what makes a piano sound like a piano and a violin sound like a violin, even when playing the exact same note at the same loudness. Two sounds with the same fundamental frequency but different harmonic content sound completely different to us."
Hey LLM, there's no "us" here, sorry bro.
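To be fair to the LLM, the timbre explanation itself checks out and is easy to demonstrate. Here's a rough sketch (the harmonic amplitudes are made up for illustration, not taken from real instruments): two tones share the same 220 Hz fundamental but carry different harmonic content, so they have the same pitch and different timbre.

```python
import numpy as np

SR = 44100                  # sample rate (Hz)
F0 = 220.0                  # shared fundamental frequency (A3)
t = np.arange(SR) / SR      # one second of samples

def tone(harmonic_amps):
    """Sum harmonics of F0 with the given relative amplitudes."""
    wave = sum(a * np.sin(2 * np.pi * F0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))
    return wave / np.max(np.abs(wave))   # normalize to the same loudness scale

# Made-up harmonic recipes, purely illustrative
mellow = tone([1.0, 0.05, 0.01])             # close to a pure sine
bright = tone([1.0, 0.7, 0.5, 0.4, 0.3])     # strong upper harmonics

def peak_hz(wave):
    """Frequency bin carrying the most energy."""
    spectrum = np.abs(np.fft.rfft(wave))
    return np.fft.rfftfreq(len(wave), 1 / SR)[np.argmax(spectrum)]

def upper_energy_ratio(wave):
    """Fraction of spectral energy above the fundamental."""
    spectrum = np.abs(np.fft.rfft(wave)) ** 2
    freqs = np.fft.rfftfreq(len(wave), 1 / SR)
    return spectrum[freqs > F0 * 1.5].sum() / spectrum.sum()

# Same pitch either way...
print(peak_hz(mellow), peak_hz(bright))      # 220.0 220.0
# ...but very different harmonic content, i.e. different timbre
print(upper_energy_ratio(mellow) < upper_energy_ratio(bright))  # True
```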
Resisting the insatiable urge to correct 10s of a 30-minute YouTube video from 5 years ago.
I like this "never happen" option
What's a mouse?
a newer version of Cython. I could probably submit a fix, but the project is now dead. I wonder how many other projects collapsed along with poetry2nix? #nix #nixos
The abandonment of github.com/nix-communit... makes me sad on a regular basis. I tried jupyenv today, which also seems heavily unmaintained, and one of the low-hanging-fruit bug fixes is that poetry2nix used Cython 0.29.34 as a dependency for PyYAML, but the new PyYAML and Python versions require
Good article, but gosh it's a lot of words.
I'm talking about the consistency and types of mistakes within a single model/version. These tools are amazing, but the variance of intelligence (even in the same conversation) seems way higher than I would expect from a person.
Mine was from venice.ai but it's a fair point, you never know what configuration they used.
The types of mistakes LLMs make just make it seem like it's some other kind of intelligence.
It's crazy to mix PhD-level intelligence with 3rd-grade mistakes.
What? I tried GLM5 and it did worse than GLM 4.6
Each model has a different skill set, though. That's why the benchmarks are so diverse.
For coding, I'm definitely enjoying Claude
So I've heard, today's my first day
I agree, I was saying for this question it's not any better. But I think at least in OpenAI's case the difference is smaller. They want people to be impressed by what the free model can do, so that they are incentivized to sign up.
The paid experience isn't any better; I got the same answer from GPT5.2
I wonder how much is applicable to using AI in a way that is not typically called "vibe coding".
Would there still be a loss of skills if you write the code and AI reviews it? Or if it only writes the tests?
Anyway, step 1 seems to be honesty about how much AI was used, and accountability.
Would be a dismal future if there is a direct correlation between model performance and dishonesty/cheating.
andonlabs.com/blog/opus-4-...
Quite disappointing, I thought Ricky was better than that.
Add daylight saving time adjustments to level up the nightmare.
If Melania were showing on a plane, people would still walk out
The future is software writing its own software. Which is why I'm so in love with Pi: a coding agent that can extend itself :) lucumr.pocoo.org/2026/1/31/pi/
Telnet in 2026 🤯
Have the cars been meditating till their sense of self vanished?
What phone and/or keyboard are you using?
Gboard is pretty good, but there are privacy implications. When your keyboard starts predicting your take-out order, maybe that's too much?
I'm still stuck on: git works so well, why should I try anything else?
Would be very surprised if it can write an entire browser from the ground up.