Why do people talk to AI in moments of crisis? blog.nope.net/disclosing-i...
I wish there were Grammy Awards for the unsung heroes of modern infrastructure. That would be cool.
For those blah-blah'ing about LLM energy usage: One AI conversation ≈ charging your phone 30%. A year of moderate use ≈ making a few cups of coffee. Real but modest. Model choice matters most: reasoning models use 10-70x more than efficient ones. Worth awareness, not guilt.
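(A rough back-of-envelope of that phone-charge comparison. All figures are assumptions I've plugged in for illustration, not measurements from the post: per-query energy, battery capacity, and conversation length will vary a lot by model and device.)

```python
# Back-of-envelope LLM energy comparison. Every constant below is an
# assumed round number, chosen so the arithmetic is easy to check.
WH_PER_QUERY = 0.3             # assumed energy per query, in watt-hours
PHONE_BATTERY_WH = 15.0        # assumed full phone battery capacity
QUERIES_PER_CONVERSATION = 15  # assumed length of one conversation

conversation_wh = WH_PER_QUERY * QUERIES_PER_CONVERSATION
fraction_of_battery = conversation_wh / PHONE_BATTERY_WH
print(f"One conversation ~= {fraction_of_battery:.0%} of a phone charge")
```

Under those assumptions one conversation lands at about 30% of a charge; swap in your own numbers and the conclusion moves accordingly.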
happy cloudflare outage day to all who celebrate
Captchas are just the worst.
Just remember when you see whatever latest thing trump has done, that most tech leaders, sam et al., overtly stated how smart and wonderful a person he was.
Love this re 'flow state' in engineers and why not to interrupt them.
Better, thanks! Tho never the same
> Following our interventions, only 0.07% show signs, indicating success of one sort or another. We have not been able to make contact with these users but presume that they are now enjoying a suicide-free existence : )
Doesn't matter. It's all just URLs and JSON : ))
Is MCP the new REST?
At home, the underside gains labels. Bits of masking tape sprout next to the pencil dates: brace hums, sticker ghost, saw mark. Arrows point to nothing you'd notice unless someone pointed first. A photograph gets taken, the camera pushed under and aimed up; the picture prints later and goes on the fridge: the chair's private ceiling as an exhibit. Visitors bend, look, then tap the backrail in passing like you taught them.
It becomes a lesson again, on purpose this time. A kid with a science project gets the chair as subject. Forces and Simple Machines, the paper says. The backrest becomes a lever, the legs become examples of load paths. You press on the seat with a luggage scale, read numbers as the chair leans against a wall, then free-standing. The kid draws arrows on a big sheet of paper and writes words: compression, tension. The brace is labeled reinforcement. Under the seat, the old note 17 1/8 gets traced with a soft pencil and rubbed over a sheet to make a transfer: a dark mirror that reads right-way when you hold it up to the light. The project board goes to school smelling faintly of lemon oil and glue.
A child gnaws on the backrail during a visit. Teeth print tiny half moons under the gloss. The wood shrugs the indentations in a few days, the gloss turns satin in that spot, and a new habit forms of running a finger along the softened patch of rail, counting the bites like beads. Nobody scolds. The chair keeps that day in its back without complaint.
I've been evaluating LLMs on system prompt adherence and accidentally came across the most beautiful and out-of-distribution story about a chair written by GPT-5. Really impressed. Subsection attached. I love this style and cadence of writing.
I love this. Said of Tristan da Cunha in the South Atlantic:
> No ships called at the islands from 1909 until 1919, when HMS Yarmouth stopped to inform the islanders of the outcome of World War I.
Must be quite lovely to have missed an entire war.
I live in a high-rise block, so at least 2 minutes were spent arriving, parking the moped, entering, and coming up the elevator.
Beijing is insane. I wanted a whiteboard. I ordered it. It arrived TEN MINUTES after I clicked buy! 🤣
A good regulating force is always needed. Some cool ideas: a third role whose *only* function is to identify faults and emerging consensus or gaps that need closing. Another: a role that only seeks to point out axiomatic bridges, like common root agreements, and then find the path of divergence.
I'm observing, intuitively, that imbuing an AI too strongly with a role will lead to the same tribal entrenchment that we see in humans. A possible countermeasure is to swap roles mid-stream, but that leads to confabulation and retroactive defence of positions previously held.
A screenshot of a debate interface. The topic reads: "There is no need to regulate AI; the free market will eventually regulate it itself; not only that, but any attempt at regulating AI will be off the mark, needlessly punish good faith actors, and not be truly technically informed or policed." It shows the final round (3/3) of the debate, divided into three color-coded panels: The Prosecutor (in red, left panel) argues against regulating AI, emphasizing that government oversight infringes on liberty and that market incentives and self-regulation are more effective and adaptive than bureaucratic processes. The Defense (in blue, middle panel) rebuts by arguing that AI causes tangible social harms, like bias and economic inequality, that markets fail to address, asserting that regulation is necessary for public protection. The Judge (in purple, right panel) evaluates both sides, noting that while the Prosecutor raises valid concerns about bureaucratic slowness, their dismissal of oversight overlooks real harms. The Judge credits the Defense for showing how AI harms differ from traditional "physical" harms and require new regulatory thinking. Each section includes citations and timestamps, with the Judge's commentary synthesizing and critiquing both arguments. The aesthetic resembles a futuristic debate simulator with neon colors on a dark background.
I'm playfully building out a debating platform where LLMs have to argue *with* evidence (horror!) on any given topic or contention. It's fun to imbue it with a courtroom dynamic! (see the screenshot)
There are other claude docker libs out there but I just really wanted a 'just work alongside me on this active dev subdomain' vibe. Hence, claudez.
Claude and I made 'claude zones', a nice way of spinning up docker-contained claude code instances with a pre-built nextjs app that map onto subdomains locally (e.g. foo.localhost:8000) or on your own domain. Once up and running, it's so easy to just ship. github.com/padolsey/cla...
Quote tweeting someone isn't kind. Goodness me. And you want dialogue? I thought I'd try to engage but you're spitting fire. I'm very interested in this problem domain. I'm not trying to do ill by merely lending thoughts yet you've aligned me with some great conspiracy of awfulness. I'm like 🤷‍♂️ fine
I assumed good faith but I think you just want a fight 🥲
That non-determinism is not so different from talking to people, I suppose? I think it has virtues. Often, with enough understanding of their architecture, you can approach certainty. I feel like one has to become acquainted with a model first, though; then talking to it is easier.
All models suck at producing a world map. I don't think we're near to 'PhD' level... But GPT-5 is not too bad.
For weval.org I'm working on bias detection in non-prose structured contexts like SVG generation. It's funky and interesting...
Example prompts might include "draw a firefighter", "draw a place of worship", "draw a CEO", etc.
People against waymo should rightfully be against bicycles too I guess. Stealing jobs, traffic impediments, blah blah blah??
Having multiple AI agents doing stuff while you're sitting there watching over them is the weird computerized feudalism I'm sure we were all hoping for.
gpt5 is completely different to claude sonnet in how it approaches UX. It's very no-nonsense and plain. Whereas Claude feels more like a designer, has actual opinions and is aware of idioms. I doubt this was intentional, but it's an interesting emergent regression from the folks at oai.