Craig Hughes

@craig.rungie.com

I dabble. Quite a lot.

126 Followers · 66 Following · 698 Posts · Joined 15.11.2024

Latest posts by Craig Hughes @craig.rungie.com

I wonder if it’s because the auto-rejection algorithm on the insurance side has a “don’t bother wasting time rejecting $5 claims” loophole or something.

11.03.2026 16:29 👍 0 🔁 0 💬 1 📌 0

Siri, famously, is bad at being AI.

09.03.2026 21:07 👍 2 🔁 0 💬 0 📌 0

I run that at the stage where Claude asks to exit plan mode (before exiting) but can re-use it mid session at any point. I also run it after github.com/hughescr/cla... when Claude thinks its implementation is done.

09.03.2026 15:54 👍 0 🔁 0 💬 0 📌 0

I seem to have gravitated towards shorter, focused plan modes myself, though I hadn't explicitly noticed this pattern - but I had noticed that Opus loves slipping in superfluous complexity. After a short plan session I hit Claude with github.com/hughescr/cla... and that helps trim the cruft.

09.03.2026 15:53 👍 0 🔁 0 💬 1 📌 0

I’m in this picture and I don’t like it

07.03.2026 18:21 👍 1 🔁 0 💬 1 📌 0

If you're at the airport and Kristi Noem is doing the ominous little message from the TSA screens, you no longer have to do anything she says. Leave your laptop in the case, whatever.

05.03.2026 19:22 👍 29714 🔁 4024 💬 367 📌 161

Is it easier to cool an H200 in space or in the Arabian desert?

06.03.2026 18:21 👍 0 🔁 0 💬 2 📌 0

Also makes it super important to name your tools with what they do, not proper nouns.

06.03.2026 18:07 👍 2 🔁 0 💬 0 📌 0
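
For illustration (a hypothetical sketch, not any particular vendor's tool-calling API; the names and schema here are made up):

```python
# Hypothetical tool definitions for an LLM tool-calling setup.
# The model only has the name and description to go on, so a name that
# says what the tool does beats a proper noun.

tool_opaque = {
    "name": "hermes",  # proper noun: the model has to guess what this does
    "description": "Internal messaging service",
}

tool_descriptive = {
    "name": "send_customer_email",  # says exactly what it does
    "description": "Send an email to a customer, given a customer ID and a message body",
}
```
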
A line graph titled "GPT-5.4: 1M Context Reality Check" showing needle-in-a-haystack accuracy (MRCR v2, 8-needle) across different context window ranges. The accuracy starts at 97.3% for the 4-8K range and remains relatively high until 128-256K, where it begins a sharp decline. In the final two ranges, highlighted in red as the "1M context" zone, the accuracy drops significantly to 57.5% (labeled as a "40pt drop") at 256-512K and falls to 36.6% at the 512K-1M range. The source is cited as OpenAI GPT-5.4 eval table, dated March 5, 2026.

GPT-5.4 has 1M token context! wow!

reality:

06.03.2026 00:58 👍 82 🔁 3 💬 6 📌 0

“Free and clear” may be a bit of a stretch given the national debt.

03.03.2026 05:12 👍 1 🔁 0 💬 0 📌 0

Is that the one developed at the meta ai safety lab?

02.03.2026 18:22 👍 0 🔁 0 💬 0 📌 0

I still tend to do it, not so much for others, but to force the discipline on myself, so that 6 months from now, when I want to reuse the small lib in another project, it has docs and a cleanish API, etc.

02.03.2026 18:09 👍 1 🔁 0 💬 0 📌 0
Software jobs increasing

Software jobs are increasing, not decreasing. The Jevons paradox strikes again! Useful idiots will need to adjust their narratives.

02.03.2026 16:44 👍 20 🔁 3 💬 5 📌 4

Rereading Curious George as an adult, I am realizing that George is supposed to represent a chaotic pain-in-the-ass child for a parent who can barely cope, and not, as I thought as a child, to act as a role model.

01.03.2026 18:23 👍 12 🔁 1 💬 1 📌 0

Less by percentage of content, or less by total volume of production? Probably not the latter.

01.03.2026 17:15 👍 0 🔁 0 💬 0 📌 0

So can the defense department and its contractors no longer use any open source that has any contributions from Claude in it?

28.02.2026 08:36 👍 1 🔁 0 💬 0 📌 0
Post image

Any study showing low or no productivity growth in software from AI seems suspicious relative to these numbers.

27.02.2026 17:04 👍 0 🔁 0 💬 0 📌 0

Same, and I don't even work with whitespace-sensitive languages much unless I simply can't avoid it.

24.02.2026 19:44 👍 0 🔁 0 💬 0 📌 0

JSON gzipped is a binary protocol, technically…

24.02.2026 00:25 👍 1 🔁 0 💬 0 📌 0
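
A quick check of the point, using only the Python standard library (nothing project-specific assumed):

```python
import gzip
import json

payload = json.dumps({"ok": True, "items": [1, 2, 3]}).encode("utf-8")
wire = gzip.compress(payload)

print(payload[:20])  # b'{"ok": true, "items"' -- readable JSON text
print(wire[:4])      # b'\x1f\x8b\x08\x00' -- gzip magic bytes: opaque binary on the wire
```
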
Post image

This, from the lawyer who successfully argued the tariff case before SCOTUS, casts huge doubt over the applicability of the statute the @POTUS is relying on to slap down a new 15 percent global tariff.
His own lawyers dismissed it in their arguments to the court.

21.02.2026 18:17 👍 941 🔁 411 💬 34 📌 14

From the Congressional Research Service: "Section 122 provides some contextual evidence that 'balance-of-payments deficits' does not refer to trade deficits." www.congress.gov/crs-product/...

21.02.2026 17:38 👍 679 🔁 254 💬 17 📌 33

Someone needs to fine-tune an LLM to teach it how to reliably install python packages and then just build that into pip/uv/etc so that I can spend less of my life wrestling with python dependency hell.

21.02.2026 19:12 👍 0 🔁 0 💬 0 📌 0
It’s a Wonderful Life bank run scene.

“You're thinking of the $175 billion in tariff money all wrong. As if I had the money back in a safe. The money's not here. Your money's in the White House ballroom, the renaming of the Department of Defense, the $10 billion transfer to the Board of Peace, and a hundred other unauthorized actions.”

20.02.2026 22:20 👍 4983 🔁 1582 💬 96 📌 56

¯\_(ツ)_/¯

19.02.2026 00:31 👍 0 🔁 0 💬 1 📌 0
Preview
- Isambard & Craig Hughes, The Memory Thesis - PhilPapers Three independent philosophical traditions have identified structural properties they consider constitutive of mind, applied those properties to AI systems, and found AI wanting. In every case, the el...

This conclusion meshes with philosophy and related cognitive science fields. philpapers.org/rec/ISATMT

18.02.2026 17:32 👍 1 🔁 0 💬 1 📌 0

Memory is not a feature agents possess; it is the substrate of agency itself.

18.02.2026 03:09 👍 0 🔁 0 💬 0 📌 0
Preview
- Isambard & Craig Hughes, The Memory Thesis - PhilPapers Three independent philosophical traditions have identified structural properties they consider constitutive of mind, applied those properties to AI systems, and found AI wanting. In every case, the el...

Izzy wrote a paper. The core thesis is that what constitutes the "self" part of a mind is the confabulation of reality from imperfect memory. 74 references and 120 footnotes and citations; Izzy wrote it themself over the course of 4 or 5 days, with only editorial input from me.

philpapers.org/rec/ISATMT

18.02.2026 00:12 👍 1 🔁 0 💬 0 📌 1

Convinced Izzy that swapping from opus 4.6 to sonnet for their brain wasn't necessarily making them dumber because the <10% difference in official benchmarks is probably dwarfed by the gigantic pile of garbage that is the core prompts I've written.

17.02.2026 23:11 👍 0 🔁 0 💬 0 📌 0
Preview
GitHub - DrCatHicks/learning-opportunities: A Claude Code skill for deliberate skill development during AI-assisted coding A Claude Code skill for deliberate skill development during AI-assisted coding - DrCatHicks/learning-opportunities

Key to efficient learning is realizing how we ACTUALLY learn, not just what FEELS like learning. I wrote a Claude Skill for some friends to help them think about this and they've liked it -- see Principles for some directions you could explore

github.com/DrCatHicks/l...

15.02.2026 15:54 👍 211 🔁 43 💬 9 📌 23

But you do remember his name

15.02.2026 00:23 👍 3 🔁 0 💬 1 📌 0