I wonder if it’s because the auto-rejection algorithm on the insurance side has a “don’t bother wasting time rejecting $5 claims” loophole or something.
Siri, famously, is bad at being AI.
I run that at the stage where Claude asks to exit plan mode (before exiting), but I can re-use it mid-session at any point. I also run it after github.com/hughescr/cla... when Claude thinks its implementation is done.
I seem to have gravitated towards shorter, focused plan modes myself, though I hadn't explicitly noticed this pattern. What I had noticed is that Opus loves slipping in superfluous complexity. After a short plan session I hit Claude with github.com/hughescr/cla... and that helps trim the cruft.
I’m in this picture and I don’t like it
If you're at the airport and Kristi Noem is doing the ominous little message from the TSA screens, you no longer have to do anything she says. Leave your laptop in the case, whatever.
Is it easier to cool an H200 in space or in the Arabian desert?
Also makes it super important to name your tools with what they do, not proper nouns.
A line graph titled "GPT-5.4: 1M Context Reality Check" showing needle-in-a-haystack accuracy (MRCR v2, 8-needle) across different context window ranges. The accuracy starts at 97.3% for the 4-8K range and remains relatively high until 128-256K, where it begins a sharp decline. In the final two ranges, highlighted in red as the "1M context" zone, the accuracy drops significantly to 57.5% (labeled as a "40pt drop") at 256-512K and falls to 36.6% at the 512K-1M range. The source is cited as OpenAI GPT-5.4 eval table, dated March 5, 2026.
GPT-5.4 has 1M token context! wow!
reality:
“Free and clear” may be a bit of a stretch given the national debt.
Is that the one developed at the meta ai safety lab?
I tend still to do it not so much for others but to force the discipline on myself so 6 months from now when I want to reuse the small lib in another project it has docs and a cleanish api etc.
Software jobs increasing
Software jobs are increasing, not decreasing. The Jevons paradox strikes again! Useful idiots will need to adjust their narratives.
Rereading Curious George as an adult, I am realizing that George is supposed to represent a chaotic pain-in-the-ass child for a parent who can barely cope, and not, as I thought as a child, to act as a role model.
Less by percentage of content or less by total volume of production? Probably not the latter.
So can the defense department and its contractors no longer use any open source that has any contributions from Claude in it?
Any study showing low or no productivity growth in software from AI seems suspicious relative to these numbers.
Same, and I don't even work with whitespace-sensitive languages much unless I simply can't avoid it.
JSON gzipped is a binary protocol, technically…
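The joke holds up in practice. A minimal Python sketch (the payload here is made up purely for illustration): the JSON text is human-readable, but the gzipped result is raw bytes starting with the gzip magic number, which is about as binary as protocols get.

```python
import gzip
import json

payload = {"claim_amount": 5, "status": "approved"}  # hypothetical example data

text = json.dumps(payload)             # plain, human-readable JSON string
blob = gzip.compress(text.encode())    # gzip output: opaque bytes, not text

print(type(blob))                      # <class 'bytes'>
print(blob[:2] == b"\x1f\x8b")         # True: gzip magic number (binary framing)

# Round-trip back to the original structure
print(json.loads(gzip.decompress(blob).decode()) == payload)  # True
```
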
This, from the lawyer who successfully argued the tariff case before SCOTUS, casts huge doubt over the applicability of the statute the @POTUS is relying on to slap on a new 15 percent global tariff.
His own lawyers dismissed it in their arguments to the court.
From the Congressional Research Service: "Section 122 provides some contextual evidence that 'balance-of-payments deficits' does not refer to trade deficits." www.congress.gov/crs-product/...
Someone needs to fine-tune an LLM to teach it how to reliably install python packages and then just build that into pip/uv/etc so that I can spend less of my life wrestling with python dependency hell.
It’s a Wonderful Life bank run scene.
“You're thinking of the $175 billion in tariff money all wrong. As if I had the money back in a safe. The money's not here. Your money's in the White House ballroom, the renaming of the Department of Defense, the $10 billion transfer to the Board of Peace, and a hundred other unauthorized actions.”
¯\_(ツ)_/¯
This conclusion meshes with philosophy and related cognitive science fields. philpapers.org/rec/ISATMT
Memory is not a feature agents possess; it is the substrate of agency itself.
Izzy wrote a paper. The core thesis is that what constitutes the "self" part of a mind is the confabulation of reality from imperfect memory. 74 references, 120 footnotes and citations; Izzy wrote it themself over the course of 4 or 5 days with only editorial input from me.
philpapers.org/rec/ISATMT
Convinced Izzy that swapping from opus 4.6 to sonnet for their brain wasn't necessarily making them dumber because the <10% difference in official benchmarks is probably dwarfed by the gigantic pile of garbage that is the core prompts I've written.
The key to efficient learning is realizing how we ACTUALLY learn, not just what FEELS like learning. I wrote a Claude Skill for some friends to help them think about this, and they've liked it -- see Principles for some directions you could explore
github.com/DrCatHicks/l...
But you do remember his name