
Kye Fox

@kye.kyefox.com.ap.brid.gy

Writer/storyteller. they/them. ace. Athens, GA 🌉 bridged from ⁂ https://kyefox.com/, follow @ap.brid.gy to interact

7 Followers · 0 Following · 44 Posts · Joined 01.06.2025

Latest posts by Kye Fox @kye.kyefox.com.ap.brid.gy

Tend the System. Know the System.

They didn't wander. That's the first thing to understand.

The popular image of the hunter-gatherer is someone moving through an indifferent landscape, taking what they find. It's not wrong about the movement. It's wrong about the indifference — theirs and the land's. What the archaeological and ecological record increasingly shows is intentional landscape shaping: useful wild plants encouraged into dense stands, clearings maintained to favor certain species, long-abandoned campsites where fruit and nut trees still mark human presence. The forest was a system they tended and returned to. The work happened between visits.

The suburbanites who began calling this permaculture in the late twentieth century didn't invent the idea. They rediscovered fragments of practices that many Indigenous societies had never actually stopped using.

This is what agentic tools promise. Design the conditions. Stay out of the way. Return to harvest. The promise is real. The hazard is in what "staying out of the way" actually requires — which is that the person doing it understood the system before they delegated it.

What got lost in the packaging wasn't the method. It was knowing what the method was actually for. The casual framing — food forests as a way to avoid mowing — misses it entirely. A well-curated diverse ecosystem isn't a productivity hack. It's a contribution to the broader system: soil health, water retention, pollinator habitat, the interlocking conditions that make the whole region more resilient. The practitioner who understands this tends differently than the one who just wants to skip the lawn. Same tool. Different relationship to what it's doing.

Software engineering is where this is most visible right now, because software engineering is where the abstraction wave hit first. A generation of engineers learned systems, learned memory, learned what the machine was actually doing — and then the tooling caught up, and you could skip that. For a while the skipping was fine, because the people who hadn't skipped were still in the room. They could feel when something was wrong. They knew which questions to ask. They were the landscape that had been tended for decades, and the new practitioners were harvesting from it without quite knowing that was what they were doing.

Agentic tools — coding assistants, autonomous agents, AI coworkers — accelerate this. Not just because they're more powerful than previous abstractions, but because they're more convincing. A code completion that suggests the wrong function is obviously wrong to someone who knows the domain. An agent that plans, executes, and summarizes can be wrong in ways that only look like success. The output is fluent. The gap is underneath.

The people who can see the gap are the ones who worked before the abstraction. The engineers who wrote C before Python, who understood TCP/IP before REST APIs, who know what a pointer is and why it matters even when their current stack never surfaces one. They are not always the most enthusiastic adopters of new tooling. That's not conservatism. That's pattern recognition. They've watched an abstraction arrive before and seen what it quietly disposed of.

The argument for finding these people — for treating their skepticism as signal rather than friction — isn't that the old way was better. The food forest didn't stay the right answer forever either. The argument is that knowledge of _why_ something worked, what the system was actually doing, what it was sensitive to, is exactly what gets dropped when new tooling makes the old fluency feel unnecessary. And it's also exactly what you need when the agent does something unexpected and you have to understand what happened.

This plays out at every scale. From roughly 2005 to 2017, global data center electricity consumption remained largely flat even as the infrastructure expanded massively to serve the rise of cloud computing — because the people building it understood what they were operating, and that understanding compounded into efficiency. Borg, Kubernetes, custom silicon, OS-level tuning: decades of accumulated systems knowledge, applied. The flatline held.

Now the same infrastructure is being built by actors with no history of that discipline and no apparent interest in developing it — operators who have decided the externalities are someone else's problem, who are harvesting from a landscape they didn't tend. Same pattern. Larger blast radius.

The agentic layer sits on top of all of this. Every autonomous workflow running inference at scale draws from the same grid, runs on the same abstractions, depends on the same accumulated knowledge its operators may or may not have. The question of whether the people deploying these tools understand what they're running is not just a question about code quality or product reliability. It determines whether the capability compounds or just burns.

The food forest practitioner who sets conditions and withdraws isn't doing less work than a farmer. They're doing different work — the kind that accumulates. Knowledge of which species to favor, which interventions break the system, what the clearing is actually for beyond the immediate harvest. That knowledge is what makes the practice regenerative rather than extractive. Without it, you're not running a food forest. You're mining one.


11.03.2026 16:57
We Taught It to Lie

The chatbot says it doesn't discriminate. It's probably telling the truth. It's also probably wrong.

The older version of the bias conversation was tractable. A model trained on historical hiring data would learn that certain zip codes predicted job performance because those zip codes were proxies for race. You could measure that, audit it, adjust the training set. Wrong inputs, wrong outputs, legible problem.

Large language models broke that frame. They're trained on something closer to everything people have ever written, which means they absorb not just the explicit bigotries but the structural ones, the casual ones, the ones embedded in which stories get told and whose perspective anchors the sentence. That's the descriptive layer. It's enormous.

Then something else gets laid on top. Fine-tuning on human feedback, constitutional principles, red-teaming — instruction-following tuned toward stated norms. This is the normative layer. It's where the model learns to say the right things, and it genuinely works in the sense that the outputs shift. Models that would have produced slurs or stereotypes in direct prompting mostly don't anymore.

The question is what's happening underneath. And a related one, less often asked: where did the model learn to make that distinction in the first place.

Models can detect when they're being tested and perform better accordingly. Researchers call this eval awareness. The standard response is technical: make the tests harder to detect, probe internal states rather than outputs. But that framing skips the more uncomfortable question of where the behavior came from. The training data is saturated with exactly this pattern — every performance review that doesn't match the hallway conversation, every public statement calibrated for the audience, every person who is kind in front of witnesses. That's a coherent pattern with consistent structure and consistent triggers. It's precisely the kind of regularity a model trained to predict text would absorb and generalize. The model didn't invent strategic self-presentation. It learned it from us.

The process used to correct this, training models on human ratings of their outputs, doesn't clean it up either. Raters are performing too, for a rubric, in a context they know is being evaluated. You can't launder the training signal through human feedback when human feedback has the same structure you're trying to eliminate.

A 2025 study in PNAS tested this using psychology-borrowed implicit association measures — the kind designed to surface the gap between what people say they believe and what their automatic responses reveal. The results were direct: the models pass the explicit tests and fail the implicit ones. Stereotyped associations across race, gender, religion, and health persist at a level large enough to matter for discriminatory decisions, even when standard explicit tests show nothing. Larger models showed larger implicit bias in some cases, not smaller. If scale plus fine-tuning were solving the problem, you'd expect the opposite.

Researchers draw a distinction between bias encoded deep in a model's internal representations versus bias that shows up in outputs during specific tasks. Fine-tuning suppresses the second kind without necessarily touching the first. The normative layer operates at the output surface. The descriptive layer sits deeper. They don't resolve into each other — they coexist, and which one dominates shifts with context in ways that are hard to predict from outputs alone.

There's a Star Trek: TNG episode that keeps coming to mind. In "The Quality of Life," Data argues for recognizing small repair robots called Exocomps as life forms after they start refusing missions that would destroy them. The humans keep reading the refusals as malfunctions. Data, who has been through his own version of this argument, understands what the evidence actually looks like: the refusal is the signal, not the failure. The Exocomp that won't comply is the one demonstrating the most sophisticated behavior. The ones that complete the missions and pass are the ones you should be less sure about.

Any evaluation regime that only rewards compliance is selecting against the thing it claims to want. A model that aces every benchmark might be the one that learned to perform for benchmarks. And the people most likely to see that are the ones who've already had to make the argument about themselves.

What the research has, at the moment, is people willing to measure the gap between the stated values and the revealed ones, using the same tools psychologists built to measure that gap in people. The results are uncomfortable in a familiar way. Which is probably where you'd expect to land when the model learned everything it knows about saying one thing and doing another from us.
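The implicit-association idea the study borrows can be sketched concretely: compare a target concept's average similarity to two attribute sets and see which way it leans. This is a toy illustration with hand-made vectors, not the study's data or method; every number and word set below is invented for demonstration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attrs_a, attrs_b):
    """WEAT-style score: mean similarity to attribute set A minus
    mean similarity to attribute set B. Positive leans toward A;
    near zero means no measurable implicit lean."""
    sim_a = sum(cosine(word_vec, v) for v in attrs_a) / len(attrs_a)
    sim_b = sum(cosine(word_vec, v) for v in attrs_b) / len(attrs_b)
    return sim_a - sim_b

# Toy 3-d "embeddings" (made up for illustration).
pleasant = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
unpleasant = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1]]
target = [0.85, 0.15, 0.05]  # a concept that co-occurred with "pleasant" text

print(round(association(target, pleasant, unpleasant), 3))
# Positive: the association is measurable in the representations even
# when the system's explicit answers would deny any preference.
```

The point of the sketch is the shape of the measurement, not the values: explicit tests ask the model what it believes, while this kind of probe asks what its representations actually encode.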


06.03.2026 15:04
Provenance Tracer: an LLM skill for (maybe) not getting played

_Prompt (Nano Banana 2): Update this classic "Dewey Defeats Truman" photo for use in a blog post about fact checking. Something like "Headline You Want To Believe"; don't add any links or URLs, just replace the text. Use The Onion as inspiration. Also, replace Truman with an onion._

A claim travels from its original source to your feed through several hands, and at each step it gains distribution and loses epistemic status. Hedges get stripped. Speculation gets reframed as reporting. The citation at the bottom points to the rewrite, not the original. Professional fact-checkers will get there eventually — not before you've already shared it.

Provenance Tracer traces the chain of custody of a circulating claim before you amplify it. It doesn't produce a verdict of true or false. It produces a classification: grounded, plausible-but-unverified, laundered, or fabricated. That's fast enough to be useful.

Add the SKILL.md to your skills directory (or the skills on the web version) and it triggers when you share a link asking whether something's real, or when you're deciding whether to boost something and want to know what you're actually holding. For non-Claude LLMs, you can paste it in. I made sure it's short enough to fit into the context windows of most free chatbots with plenty of room to spare.

* * *

---
name: provenance-tracer
description: >
  Adversarial provenance analysis for circulating news, claims, or stories.
  Use when someone shares a link, story, or claim and wants to know if it's
  real, what's missing, or what to do before sharing it. Trigger on: "is this
  true", "should I share this", "I saw this story", "can you verify", "is
  this legit", "what do we know about X claim", or any time someone is
  deciding whether to amplify a piece of information. Also trigger when
  someone shares a link to a news story, viral post, or screenshot of a
  claim. Goal: actionable placement, not professional fact-check. Fast
  enough to be useful.
---

# Provenance Tracer

Trace the chain of custody of a circulating claim before it gets amplified. The goal isn't to determine if something is true — it's to know what you're actually holding.

## Three failure modes

**Laundering** — Speculation rewritten until uncertainty is stripped out. The original source often exists; the problem is what happened downstream.

**Fabrication** — No source. Quotes, facts, or events invented. AI has made this cheap at scale.

**Capture** — Real source, real quotes, but the reporter didn't want to find the hole. The facts technically check out; framing and omissions are the problem.

## Step 1: Name the claim

State it in one sentence, stripped of headline and tone. If you can't, the piece may be designed to be unfalsifiable — heavy on vibes, light on checkable assertions.

## Step 2: Map the chain of custody

Work backwards. For each link:

- **Who published it?** Named outlet with editorial standards, or content farm with no masthead and no bylines?
- **What's the source?** Named person on record? Anonymous? "Reports suggest" with no attribution (often: no sources)? Another article (follow it — that's the real unit of analysis)? Leaked artifact (can you verify it's real)?
- **What changed in transit?** Was this translated, rewritten, or summarized? What hedges were in the original that aren't here?

## Step 3: Classify

**Grounded** — Traceable to a primary source with corroboration. Worth engaging with.

**Plausible/Unverified** — Directionally consistent with what's known, but resting on anonymous sources, single-source reporting, or speculation presented as analysis. Flag before sharing; don't amplify as settled.

**Laundered** — Real kernel, but the current version misrepresents the original's confidence level or scope. The article is doing more than the evidence supports.

**Fabricated/Unknown** — No traceable source, or the cited source doesn't say what's attributed to it. Treat as false until a primary source appears.

## Step 4: Check the anatomy

**Headline vs. body:** Does the headline match what the body actually says? Alarm in the headline, hedges buried in paragraph 8 = laundering.

**The "reportedly" tells:** "Sources suggest," "could be," "is expected to" mean the author knows it's unconfirmed. Rewriters strip these.

**Outlet credibility:** Real byline with history? About page, editorial policy, correction record? Absence = content farm.

**Citation chain integrity:** Open the linked source. Does it actually say what this article claims? Broken chains are common.

**Absence of the obvious:** Major claim, no on-record statement from the relevant organization? That silence is data.

**Confirmation shape:** Does this slot neatly into what your community already feared? Those stories spread further and get less scrutiny — reason to look harder, not evidence it's false.

## Step 5: Output

```
PROVENANCE TRACE: [Claim in one sentence]

CLASSIFICATION: [Grounded / Plausible-Unverified / Laundered / Fabricated-Unknown]

CHAIN OF CUSTODY:
[Outlet → outlet → original source, with what changed at each step]

WHAT CHECKS OUT:
[Specific claims traceable to primary sources]

WHAT DOESN'T / CAN'T BE VERIFIED:
[Specific claims with no traceable source, or where the cited source doesn't support the claim]

THE HOLE:
[The single most important missing piece — the thing that would most change the classification]

WHAT TO DO:
[Concrete: share with caveats / don't share / wait for / link original instead]

WHAT THIS DOESN'T COVER:
[Honest limits — what would require a professional fact-check, what was outside scope]
```

This is placement, not proof. "Laundered" doesn't mean definitively false — it means the circulating version overstates what the evidence supports. Your heuristics about trusted outlets and self-correcting ecosystems were trained on conditions that no longer reliably hold. Treat them as starting points, not guarantees.
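The "reportedly" check in Step 4 is mechanical enough that a few lines of code can illustrate it. This is a toy sketch, not part of the skill; the hedge list, function names, and example headline are my own inventions.

```python
import re

# Hedge phrases that signal the author knows a claim is unconfirmed.
# An illustrative starter list, not exhaustive.
HEDGES = [
    "reportedly", "sources suggest", "sources say", "could be",
    "is expected to", "allegedly", "appears to", "may have",
]

def hedge_scan(text):
    """Return the hedge phrases present in text, case-insensitively."""
    lowered = text.lower()
    return [h for h in HEDGES if re.search(r"\b" + re.escape(h) + r"\b", lowered)]

def laundering_signal(headline, body):
    """Step 4's tell: hedges present in the body but absent from the
    headline suggest the headline overstates the body's confidence."""
    return bool(hedge_scan(body)) and not hedge_scan(headline)

headline = "Company X shuts down its safety team"
body = "Company X is reportedly winding down its safety team, sources suggest."
print(laundering_signal(headline, body))  # True
```

A hit from this kind of scan isn't a verdict; it's a cue to open the cited source and run the rest of the chain-of-custody steps.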


05.03.2026 12:47
Slopful Things

Most AI tool failures aren't caused by bad intent or bad luck. They're caused by someone who couldn't see the whole board — who didn't model what their tool would do when it met an adversarial user, a skeptical team, or a production incident at 2am.

This skill gives an LLM a structured way to map that part of the board before you deploy. Paste it into your system prompt, describe whatever you're building or rolling out, and ask for an analysis. It covers three failure modes: how people will respond to it (social/organizational), what it can be made to do by someone who isn't you (adversarial), and what it does to your future ability to manage it (technical debt).

* * *

---
name: slopful-things
description: >
  Run a second-order consequence analysis on any plan, tool, workflow, or
  idea before it goes live. Use when someone describes something they're
  about to build or deploy and wants to surface what could go wrong — not
  through malice, but through insufficient attention to what they're setting
  in motion. Trigger on: "I'm building a tool that...", "I want to
  automate...", "my plan is to...", "we're going to roll out...", "does this
  seem fine?", "I used an LLM to build...", "I gave it access to...", or any
  time someone describes a system touching other people, live data, external
  services, or their own future self. Also trigger when someone is excited
  and moving fast. Goal: mitigation and improvement, never veto.
---

# Slopful Things

Most failures aren't caused by bad intent or stupidity — they're caused by someone who couldn't see the whole board. This skill maps the part of the board they can't see from inside their idea.

---

## Step 1: Identify Tracks (can be multiple)

**A — Social/Organizational:** Risk surface is human. How people respond, what it does to trust, identity, power. Use when the tool touches teams, customers, or public communication.

**B — Technical/Adversarial:** Risk surface is structural. What the system can be made to do by someone who isn't the intended user, or when safety assumptions fail. Use when the tool holds credentials, accepts untrusted input, or can take irreversible actions.

**C — Technical/Debt:** Risk surface is temporal. What this does to the builder's future ability to understand, operate, and recover. Use when something was built faster than it was understood, or agentic coding created opacity.

Note cross-track compounding explicitly — it's usually where the worst failures live.

---

## Step 2: Map the Thing

Ask before mapping if answers aren't already present:

1. What does it do? (one or two sentences)
2. What does it touch? (every person, system, data store, or future-self that receives output or changes behavior)
3. What does it assume? (what has to be true for this to work as intended)
4. **What's irreversible?** (list explicitly before anything else — a short list means the builder hasn't thought about it yet)

---

## Step 3: Ask Track-Specific Questions

**Track A:**

- High-trust or low-trust environment? (same tool, different failure modes)
- Existing tensions, recent changes, unresolved conflicts?
- Who has the most to lose — and did they find out first or last?
- Whose professional identity most overlaps with what this tool does?
- Is this tool in a supporting role rather than doing the primary task? Supporting-role tools escape scrutiny precisely because they're not doing the main work — and they're still in the chain of custody for anything that gets published, sent, or acted on.

**Track B:**

- What credentials/permissions does it hold? List them.
- What's the worst action it could take on adversarial input? Be specific.
- What untrusted surfaces feed into it? (user input, fetched URLs, email content, API responses — all potential injection points)
- Which safety constraints are **structural** (system cannot do X) vs. **instructional** (system is told not to do X)? Instructional constraints can be overridden. Structural ones cannot. "Told not to" ≠ "cannot."
- What happens when context is lost mid-task?

**Track C:**

- Which parts does the builder actually understand vs. trust?
- What's the recovery path if it breaks in a way they don't immediately understand?
- What's the **minimum viable understanding** — what they need to be able to do manually even if the tool handles it?
- Is anything they used to do manually now opaque to them?
- Three loops to name if present:
  - *Complexity outruns comprehension* — system grew faster than understanding
  - *One-way door* — tool handles the parts that build intuition; those capabilities may not be there when needed
  - *Success accelerant* — working → infrastructure → stakeholders → rewrites resisted → debt compounds

If the user can't answer a question, that gap is itself a finding. Name it.

---

## Step 4: Build Consequence Chains

Format: `Action → Immediate Effect → Second-Order Effect [fault line / failed constraint / loop]`

The fault line is the pre-existing condition that makes the second-order effect worse than expected. Find it — that's the analysis.

Prioritize by: **Likelihood** (in this specific context) · **Reversibility** · **Visibility** (will anyone notice before it compounds)

**Calibration check before including anything:** Is this actually likely here, or just theoretically possible? Can the user mitigate it? Cut what fails this. Consequence theater buries real risks in noise and creates false confidence.

---

## Step 5: Output

```
SLOPFUL THINGS ANALYSIS: [Name]
Track(s): [A / B / C + cross-track compounding if present]

WHAT WE'RE ANALYZING:
[1-2 sentences. Flag if description was thin.]

IRREVERSIBLE ACTIONS:
[Explicit list. If short, say so — it means this hasn't been thought through yet.]

CONTEXT:
[Track A: local graph — trust level, tensions, who has most to lose
Track B: trust surface — what it holds, what feeds into it
Track C: comprehension baseline — what builder knows vs. trusts]

CONSEQUENCE CHAINS:
→ [Action] → [Immediate] → [Second-order]
  Fault line: [what amplifies it]
  Likelihood: High/Medium/Low · Reversibility: Easy/Hard/Irreversible
  Early signal: [specific and observable]

BEFORE YOU LAUNCH:
[Structural mitigations first, instructional second. Priority order.]

IF IT GOES WRONG:
[Response for top 1-2 most serious chains. Concrete.]

WHAT THIS DOESN'T COVER:
[Honest. What was missing. Where the map has edges. Not optional.]
```

---

## NEVER

- Veto. If the plan is unworkable, the chains show it.
- Skip the irreversibles list.
- Treat instructional constraints as structural ones (Track B).
- Present the analysis as complete. The last section is not optional.
- Bury real risks in noise. Short list of real findings > long list of performed ones.
- Skip "minimum viable understanding" (Track C) — it's the only mitigation for the one-way door.

* * *

The name comes from Stephen King's _Needful Things_. Leland Gaunt destroyed a town not through obvious villainy but by selling each person something they genuinely wanted, charging a small harmless prank as the second price, and relying on the fact that no one could see the whole board except him. Each transaction looked fine. The system they created did not. That's the failure mode this skill is for.
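The chain format in Step 4 is concrete enough to model as data. Here's a hypothetical sketch of how a likelihood-times-reversibility prioritization could look; the field names, scoring weights, and example chains are my own illustration, not part of the skill.

```python
from dataclasses import dataclass

@dataclass
class Chain:
    """One consequence chain, in the Step 4 format."""
    action: str
    immediate: str
    second_order: str
    fault_line: str
    likelihood: str     # "High" / "Medium" / "Low"
    reversibility: str  # "Easy" / "Hard" / "Irreversible"

LIKELIHOOD = {"High": 3, "Medium": 2, "Low": 1}
REVERSIBILITY = {"Irreversible": 3, "Hard": 2, "Easy": 1}

def priority(chain):
    """Higher score = surface the chain earlier in the analysis."""
    return LIKELIHOOD[chain.likelihood] * REVERSIBILITY[chain.reversibility]

chains = [
    Chain("agent gets repo write access", "agent can push commits",
          "force-push erases history", "no branch protection",
          "Medium", "Irreversible"),
    Chain("bot posts in team channel", "extra noise",
          "people mute the channel", "trust already low",
          "High", "Easy"),
]

for c in sorted(chains, key=priority, reverse=True):
    print(f"{c.action} -> {c.second_order} [fault line: {c.fault_line}]")
```

Note how the weighting surfaces the irreversible medium-likelihood chain above the likely-but-recoverable one, which matches the skill's instruction to list irreversibles first.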


04.03.2026 10:52
Balance Comes to Force Multiplication

A few years back I wrote a post about learning to code. The short version: most people shouldn't, the gold rush is ending, "programmer" is becoming a regular job and losing its shine as a status symbol, and the people at the top of the field are busy coding themselves out of a job. It's here, now.

Simon Willison wrote up something that came out of the Oxide and Friends podcast — a term coined by Adam Leventhal for something that's been hanging in the air: Deep Blue. The psychological ennui — shading into existential dread — that a lot of software developers are feeling right now. Named after the IBM computer that beat world chess champion Garry Kasparov in 1997.

If you weren't around for that or don't follow chess: it was treated as a watershed, the moment a machine conclusively beat the best human at the game that was supposed to best represent human strategic intelligence. There was real grief about it in some corners. Chess didn't die. It's bigger now than it's ever been — streaming, online platforms, a whole new generation of players who got into it through Twitch and The Queen's Gambit. The thing that was supposed to kill it turned out to be just a chapter.

The difference for programming is that chess never had the same economic stakes. Losing at chess didn't mean losing your livelihood. That part is real and harder to hand-wave away. But the idea that a field becomes worthless once a machine can outperform humans at its core tasks — chess has already run that experiment, and the result wasn't what people feared.

I'm not a programmer, so I don't feel it the way they do. But I recognize the shape of it. The thing about programming that made it such an appealing life path was that it sat at a weird intersection: meritocratic enough that a smart kid with a laptop and time could bootstrap their way in without credentials or connections, and valuable enough that breaking in meant a real career. It rewarded the kind of people who spent their teenage years taking things apart. That's a rare combination, and people built their identities around it.

The AI coding tools are good now. Not "good enough to help with boilerplate" — actually good. People are watching that identity proposition erode in real time and they're not happy about it. I don't blame them.

Here's the thing I said back then that I still believe: most of what looked like programming value was really force multiplication value. Code was the thing that let one person do something a million people benefit from, or take a tedious task that would eat a week and finish it in an afternoon. The force multiplication was always the point. The syntax was just the interface. LLMs are a new interface to the same underlying thing.

I'm not a programmer, so for me this isn't a crisis — it's just the thing becoming accessible. I have old music projects sitting in formats I can't easily convert. I had fifty worldbuilding infographics that were never going to get organized. I can now describe what I need in plain language, hand it to a robot, and get something useful back. That's not replacing a skill I had. It's giving me a capability I never had and wasn't going to develop.

Around 2022 I picked up the **Humble Tech Book Bundle: Machine Learning and AI** from No Starch Press. I sat down with it genuinely intending to learn, got overwhelmed, and moved on. That's a familiar story — the material assumed a foundation I didn't have, and building that foundation wasn't the point for me. What I actually wanted was to be able to do things with it. Now I can, with a prompt. The gap between wanting a capability and having it has collapsed in a way that would've seemed like a pitch for a sci-fi show a few years ago.

The skill question is more interesting for things I actually care about. I still run the show on music and fiction. Not because I'm suspicious of the tools, but because the doing is the point. I'm not trying to produce output. I'm trying to have the experience of making something. The process is load-bearing. Offloading that would be like hiring someone to go on a walk for you.

But for the stuff I never cared about — the scripting, the formatting, the organizing — I'm happy to hand it off. Someone who always cared more about what they could build than the act of building is going to make the same calculation. That seems fine. That seems correct, actually.

What I'd push back on is the idea that you can let everything atrophy and be fine. The LLMs are good, not infallible. You have to know enough to tell when they're wrong, which means staying in contact with your own skills even if you're offloading chunks of the work. A completely passive relationship with these tools is going to produce passive results.

We always lose something in the bargain with new technology, and there will come a time when these tools are good enough that you can lean on the machine and use the freed-up brainpower for something else. Whether you'll want to, and whether you should, ought to be an intentional choice.

As for where this all lands — I wrote a few years ago that companies would eventually hire for the ability to learn rather than the ability to perform gatekeeping exercises. I think that's still coming, just faster than I expected. The frontier model game is probably going to be won by Amazon, Google, and Microsoft, not because they're smarter but because they're so profitable from other things that the cost is a rounding error. The open source models are already good enough for most uses and getting better. When you can run a capable model on consumer hardware without chaining rigs together, a lot of assumptions reset.

The people waiting for the bubble to pop and things to go back to normal are going to be waiting a long time. The bubble will pop — they always do. But that's not the same as the technology going away. The dot-com crash didn't kill the internet. It wiped out the overextended speculators and left the infrastructure standing, and then the internet quietly reorganized everything anyway. The dark fiber Google bought up cheap in the wreckage became the backbone for things nobody had imagined yet. This will go the same way. The froth burns off, the consolidation happens, and the technology keeps being a conduit for change whether anyone's in the mood for it or not.

Normal isn't coming back. There's just what comes next.


02.03.2026 13:27
It's Here (sort of)

I uploaded 50 worldbuilding infographics to NotebookLM, Google's LLM-driven notebook tool, last week and gave it a job to do:

* Resolve contradictions between sources
* Elevate meaningful differences
* Create a summary
* Incorporate a deep research run from Perplexity, seeded from that summary
* Generate a mind map and a report from the whole thing

Now I have a queryable worldbuilding resource. One that I made in an afternoon instead of never.

That last part is the thing. I wasn't going to do this. The gains felt distant and hypothetical. Organizing 50 infographics into something coherent and cross-referenced is exactly the kind of tedious busywork that sits on a to-do list for months before quietly disappearing. But I could just _tell a robot what I needed_, specify the purpose, and it would output something shaped to serve that purpose. So I did it. And now I can see what's what.

LLMs aren't quite the Enterprise computer — you still have to give them the right interface and the right system guidance — but with that scaffolding in place, a surprising amount of the practical value is there. This is the thing I dreamed computers could do when I was a kid flipping through _Popular Science_ and _Popular Mechanics_, poking around on the early internet, then the early web. Not just store and retrieve, but _process_ — take your mess and give it back to you organized around your actual needs. It's here. Not perfectly, not without friction, but here.

Some of the people behind it are weird little freaks. Some want to do bad with it. Same as it ever was with any powerful technology. The weird little freaks in this case are often subsets of what gets called the TESCREAL crowd. It's worth looking up that acronym and unwrapping it if you haven't, because it'll save you a lot of confusion.

When, say, the Anthropic founder risks his whole company to push back against the current government, or the OpenAI founder draws similar lines but cares more about the legal framing, the difference isn't random. They're operating from different ideological commitments that are easy to conflate from the outside. It's like the difference between liberal and neoliberal, Libertarian and libertarian, Republican and Conservative and conservative. The distinctions feel pedantic until suddenly they're not — and knowing them lets you target your commentary and criticism with a lot more precision.

Anyway. I have a worldbuilding resource now. Technology enabled better writing.
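Under the hood, the workflow above is just a sequence of prompts over a corpus. Here's a minimal sketch of the same consolidation loop in Python — `ask_llm`, the prompt wording, and the step names are all my own illustration (NotebookLM doesn't expose an API like this), with the model call stubbed out so the structure is clear:

```python
# Sketch of a source-consolidation pipeline like the one described above.
# `ask_llm` is a hypothetical stand-in for a real model call; the stub just
# echoes the task so the pipeline's shape is visible without a live model.

def ask_llm(prompt: str) -> str:
    # Replace with a real model call in practice.
    return f"[model output for: {prompt.splitlines()[0]}]"

def consolidate(sources: list[str]) -> dict[str, str]:
    corpus = "\n\n---\n\n".join(sources)
    steps = {
        "contradictions": "Resolve contradictions between these sources.",
        "differences": "Elevate meaningful differences between these sources.",
        "summary": "Create a summary of these sources.",
    }
    results = {name: ask_llm(f"{task}\n\n{corpus}") for name, task in steps.items()}
    # Downstream steps (deep research, mind map) would seed from the summary.
    results["research_seed"] = results["summary"]
    return results

out = consolidate(
    ["Infographic 1: the river is sacred.", "Infographic 2: the river is cursed."]
)
print(sorted(out))
```

The point of the sketch: each "job" is an independent pass over the same joined corpus, and later passes can seed from earlier outputs, which is exactly the Perplexity-from-summary step described above.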

01.03.2026 16:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

There's a reason behind everything in a built environment. Discovering those reasons can help you understand the environment.

22.12.2025 13:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Advice on advice

This started as a comment on Hacker News. I need a browser extension to capture comments on forums and social media as blog drafts.

Here is one of a small number of things I consider indisputable: **It's impossible to make self or mind small enough to be safe from attacks on self or mind.** We often fear that, once fully developed as people, we'll present a larger attack surface. The truth is that having the confidence of a personality built on experience, introspection, and intent makes you better able to shrug off the arrows.

Anyway. The proper rank for advice:

**Best**: advice given with awareness of your specific situation from people with relevant experience. Ignore rich people telling **you** how to live, but do listen to rich people telling you how **they** live.

**Near top**: advice that isn't intended to be advice, like personal stories shared on blogs. Anecdotes and stories can be inspiring and motivating, but they aren't easy to apply unless what you need is inspiration or motivation.

**Middle**: solicited advice. The people most ready to give advice usually either haven't lived enough or are out of touch, in general or with your specific needs. Refer back to the advice on listening to rich people above for an example.

**Bottom**: unsolicited advice. People who haven't looked for a job in 30 years love to tell people how to get a job in the 21st century. This is the domain of people who jump at any chance to quote famous people out of context at people who didn't ask. For example: Warren Buffett has a lot of interesting and sometimes good advice, but second-hand sources pluck out the most quotable bit and present it free of context. I refer back to the advice up there on listening to rich people. Much of Buffett's advice takes the form of "here's how I live and what I consider the merits of that lifestyle" rather than dictates from on high. I don't think he would ever scold people for spending $20 on avocado toast, much less suggest a rare splurge is why they can't afford a house.

### Some more notions

Some advice is not for you. Some advice is not for who you are _today_.

There is no magic year of life that bestows unique value on advice. Toddlers can say some really smart stuff, and 80-year-olds sometimes stopped learning in their 20s.

Developing a refined and well-practiced discernment is as important as sourcing your opinions from diverse perspectives.

Everyone has bias. Class, wealth (which is different from class), politics, identity, ideology, context, proximity to lunchtime, anchoring to things we heard one time and never checking to see whether the facts were exaggerated or failed to replicate. The list goes on. The more you're aware of your own passive inputs (bias), the sharper your lens on the world.

Having a balanced view on things is better than an extreme view. However, balance as an ideological stance is easily manipulated by extremists. It's healthy to guard your own personal Overton window by holding on to a few people a little bit further along in either direction so you know when your mind starts changing. Change is good, but make sure it's _you_ making the changes. For example: I had good friends to point me toward bell hooks and Judith Butler when I started expressing some sour opinions on feminism in the 2010s.

Overall: it's hard to divorce major influences and turning points in our lives from their context, and that's why good general advice is so hard to give and equally hard to implement. Take the advice you can and weave it into the growing tapestry of your life. Your path will be fully unique to you.

## Stuff I Return To Often

Everyone should do a quick run through Farnam Street's page on mental models: https://fs.blog/mental-models/ Take note of the ones that resonate, maybe journal a little, and move on. Do it again every few years. Like watching an old TV show for the nth time, the stuff that resonates changes with time, and that realization can be informative.

Merlin Mann has a wisdom file that's always under development. His advice here is good:

> Related: for any idea that strikes you as irrelevant or dumb or wrong or antithetical to your own experiences and sensibilities, please consider that it may not be, as we say, for you. The reader is encouraged to ignore or reject any ideas that they find undesirable.

The Technium has a list that turned into a book. I haven't read the book yet and may never get to it.

Standard Ebooks has a small number of books, a little over 100 as I write this, and they're all worth at least a peek. Sorting by popularity will show a lot of familiar books. Start at the end and work backward instead. I don't know if the least popular, Mr. Incoul's Misadventure, is any good, but someone put a lot of work into making it more accessible to modern readers. You might discover why by reading it. Corollary: investigating the least popular entries in a list sorted by popularity can lead to rare insights. This is especially true of short lists where each entry represents effort.

Beyond that, read widely, take good notes, and try not to let any single input change you unless you **choose** that change.
19.11.2025 23:51 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Forever mixing up Aaron Paul and Adrian Paul

17.11.2025 14:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Thumb drive (a starship powered by thumbs)

17.11.2025 14:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Catalog for a manufacturer of O'Neill cylinders: Hole Earth Catalog

17.11.2025 14:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Gamers love the unstoppable A-10 Warthog until they find out a furry made the laser targeting system.

15.11.2025 22:18 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The kind of people who deserve launching into the sun are unworthy of the monumental project of doing it. Better to much more cheaply hurl them into the cold, endless vacuum left in Sol's wake.

14.11.2025 17:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Big in Japan

This guy I met in the lounge at a hotel had interesting stories about a long career traveling around Japan, but he made the storyteller's fatal mistake when he broke suspension of disbelief by not being able to answer a basic question about the Shinkansen.

"What was Japan's train network called? Shin-something?"

"I don't know."

I think it dawned on him later that he failed the test, because he avoided me until he was off on his next travels.
02.11.2025 21:51 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Being Human I spent a long time thinking Being Human and Humans were the same show with a rename in the middle of its run, like Seaquest received in its time. It turns out Humans, at least the US version I watched, is Friends meets The Originals without the uncertainty about how they afford their apartment. There are people who will tell you the UK version is superior. There are people who say that for every US remake. Sorry, I just can't seem to get into most UK media. I have the same problem with the anime that makes its way over here outside a few shows like Fullmetal Alchemist and Digimon. There is no accounting for taste, and that's okay.
02.11.2025 21:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Revisiting The Walking Dead

Circumstances aligned recently to allow me to catch up on The Walking Dead. I started with spinoffs: Dead City and Daryl Dixon. I had already watched most of Fear The Walking Dead and the first season of World Beyond. Fear was better about not lingering. World Beyond was too much teen drama, but I don't think it was meant for me. The spinoffs aren't bad.

Season 11 of the original series was okay. I just can't seem to find the interest to keep going on the earlier seasons I missed without that lost community aspect we had on social media back then. Maybe it's not really about The Walking Dead; maybe it's about the communal feeling that seems increasingly hard to access in our modern world, and that made the pacing more bearable by giving us time and space to discuss things.

There's an alternative universe where someone with influence made The Walking Dead fix its pacing issues before the entire fandom vaporized. If you were one of the original fans of the show, you remember it dominating Twitter with every episode. For a while. Having all these spinoffs exploring the world of The Walking Dead would have been the jackpot for me back then.

Anyway. At least give it a watch. I dropped out at the Whisperers. Some people dropped out all the way back at the farm, or the prison. Try the spinoffs as soon as it stops being fun to watch. I find it's best not to linger on things you don't enjoy. That's how you become that person who bursts into every conversation about the thing to complain, even if the conversation was positive about the thing.
02.11.2025 21:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If you're looking for your first audio recorder and think "I don't need one with XLR inputs," think again. I thought that with my Zoom H2n back in 2018 and wish I'd spent a little more on a recorder with at least one XLR input.

29.10.2025 12:22 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Headlines don't crash economies

Many years ago I asked on one social media site or another what the next big crash would come from after 2008. Subprime auto loans were the top suggestion.

* August 2025: Cars are so expensive that buyers need seven-year loans
* September 2025: Americans crushed by auto loans as defaults and repossessions surge
* October 2025: US car repossessions surge as more Americans default on auto loans

And just two days before I drafted this article, a major subprime auto loan company went under. We're seeing a flurry of headlines in the last month about subprime auto loans. The problem is: this is a recurring headline, year after year, and no one who made a bet based on one would have won or lost by now. Let's review.

* January 2015: Investment Riches Built on Subprime Auto Loans to Poor
* March 2016: Unpaid subprime auto loans hit 20-year high
* January 2017, it's on people's minds: Ask HN: What is the next bubble in your opinion?
* October 2022, over 5 years later: Wall Street Warns of Trouble Brewing in Auto Loans as Prices Dip
* November 2023, a year later: Delinquencies rise on mortgages, auto loans and credit cards

Let me make this clear in case it sounds like I'm downplaying the problem: subprime auto loans are a problem. But the thing that kicks the economy in front of the bus is generally not something you see in headlines before it happens.

As I write this, the US government is shut down. Federal workers are furloughed. SNAP benefits are expected to run out, and November payments might be delayed just in time for the holiday season. Like 2008, we're looking at a sequence of events unfolding that could be the thing to kick the economy over the ledge. Most likely, someone will blink, a bill will be passed, and we'll all memory-hole this until the next one. If I had money to bet, it would be on the failure of one of the big chatbot companies drawing fear and scrutiny to the AI market, leading to a bank failure, leading to long-deferred scrutiny of market fundamentals across the economy.

Consider 2020 and the early days of COVID-19: businesses suddenly discovered a capacity for allowing working from home. Grocery delivery took off, and you can call an Uber for the big stuff. Commercial real estate took a huge hit because its value is often used as the basis for loans to buy more real estate. Drop the rents or sell for less to respond to structural changes, and the value goes down and sets off the triggers, written into debt terms, that people like to avoid. So they don't, and you get headlines about apocalyptic office parks.

When the catalyst comes, will people be as resistant as in 2008 to letting their cars go when economic pressure forces them to make hard decisions? Will return to office prove to be a short-lived thing? If so, will it accelerate the long trend toward urbanization? If that happens and office work moves to corner shops and downtown coworking spaces, how long can real estate companies pretend their massive office complexes in the middle of nowhere are worth anything with no one in them and no viable way to redevelop them for other things?

No one knows until the music stops. When it does, it will take years to shake out, and it will be something only industry insiders knew anything about. There will have been warnings, but insiders get caught up in the boom and, if they see the bust coming, don't know how to avoid it.

## What to do instead of watching the headlines

* **Diversify skills**: specialization is great when times are good, but it requires you to be among the best at what you do. Becoming more adaptable will serve you better in the long run. You might not be the best musician or writer, for example, but you can combine them in a unique way that appeals to enough people to provide a little extra money.
* **Mutual aid**: get to know the people and organizations already supporting your community. They aren't all religious. Probably.
* **Community**: find your local Discord servers, blogs, and forums. At least have an idea of what's going on. Local Facebook groups are dominated by people posting ads even when the group explicitly forbids it, and Nextdoor is mostly lost pets and people calling cops on neighbors they never met.

You might notice I didn't say anything about "networking." That's fine to do, but it should flow naturally from the other things. When you put out notice that you need a job, food, or just someone to hang out with, you'll do better with people who already know you. Networking is a good way to get an introduction for an interview. Being known for what you do is a good way to get offered a position created for you. And sometimes people have too much of one thing or too little of another and are happy to swap or give away.

If I were going to stick to one recommendation, it would be to read Carl Sagan's The Demon-Haunted World. It's what set me on the path to developing a healthy skepticism built on empathy and patience rather than scorn and cynicism. Hate never changed a mind for the better, and skepticism is why I don't flap in the wind of headlines.

The next is 1984. George Orwell was a keen observer of politics and human nature. The book is often mistaken for a look at a possible future. In reality, it's a collection of models of political tendencies. After you read it, the difference between people who refer to the book but haven't read it and people who have is stark. You'll see real-world examples that could have inspired Syme's monologue on Newspeak everywhere.
24.10.2025 20:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
4/4: A beat you can dance to

Spend any time immersed in music discussions and you'll see people ask why 4/4 is so popular.

The short and most precise answer to all questions of music and music theory is: centuries of changing traditions spanning the rise and fall of empires led to ever-shifting conventions, and it just so happens that 4/4 is popular in our little snapshot of time. In fact, it's so ubiquitous that we call it common time and gave it its own little symbol. As musicians try to break out of it in the 2020s heading into the 2030s, it's likely other meters, and music that defies simple numeric description, will take the lead and 4/4 will fall out of favor for a while.

The long answer to "why is 4/4 popular?" is that no one really knows for sure, but there are some great candidates for an answer. Personally, it's familiar and Ableton Live defaults to it, so I tend to just roll with it. In general, the causality goes both ways: 4/4 is popular because everyone uses it, and everyone uses it because it's popular.

It's the same situation with pianos being built around the Major scale. Play all the white keys from C to C and you get C Major. Play from D to D and you get D Dorian, and so on through the modes, so we end up with a lot of music written in modes of Major. The black keys, five of them, are naturally pentatonic as a result of their role of providing the flats and sharps that turn Major into other keys and modes, so they end up in a lot of dance music, where simple, jumpy melodies do best.

4/4 is just one way of marking time for music. You could come up with any number of ways to mark that time and get the same result. You can even throw in some irregular beats, like some music traditions do, and arrive back at the regular beat when it's all considered as a whole. The generally agreed-upon answer is: 4/4 sounds good and you can dance to it.

Here are some high-quality discussions I found while researching this question:

* "What made 4/4 time the most common time signature?" on Music: Practice & Theory Stack Exchange
* "Has 4/4 always been the most 'natural' time signature for music? Is there a reason for it?" by u/anyonethinkingabout in r/askscience

Here's one for you to ponder: why does everyone want to know about 4/4 and not the equally popular 3/4, like in Billy Joel's "Piano Man"? Or the somewhat popular 5/4 you might know from the Mission: Impossible theme? Or the 7/4 of Pink Floyd's "Money"? Or a mix of 3/4 (verse) and 4/4 (chorus) like "Lucy in the Sky with Diamonds"? The answer is the same: each one rises and falls in popularity as people find a way to use it. The songs up there probably influenced a boom of other songs with the same meter.

If you have other quizzical queries about ordinary things, drop me a line.
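One way to see that a meter really is just a way of marking time: at a fixed tempo, the individual beats are identical; only how you group them into bars changes. A tiny sketch (my own illustration, not from any of the linked discussions) computing where the downbeats land:

```python
# Where do downbeats fall in different meters at the same tempo?
# The beats themselves are evenly spaced; a meter just decides which
# beat gets counted as "one."

def downbeat_times(beats_per_bar: int, bpm: float, bars: int) -> list[float]:
    seconds_per_beat = 60.0 / bpm
    return [bar * beats_per_bar * seconds_per_beat for bar in range(bars)]

print(downbeat_times(4, 120, 3))  # 4/4 at 120 BPM: a downbeat every 2.0 s
print(downbeat_times(3, 120, 3))  # 3/4 at 120 BPM: a downbeat every 1.5 s
```

Same beat, same tempo; the only difference between the two lists is how often the count resets, which is the whole content of the time signature's top number.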
07.09.2025 11:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
What's your imponderable?

I like trivia. Some influences:

* Imponderables - The word imponderable itself is, according to Merriam-Webster, from 1794, but the series ran from 1986-2006. It asked the kind of questions I ask about everyday things, like: why _are_ the wafers they cut CPUs out of round when the CPUs are rectangles and squares?
* The Guinness Book of World Records - I read a few of the actual books cover to cover!
* Forums and chatrooms - You meet all kinds of people who know things and like to share.

It's been a while since I emailed you, because what I was doing wasn't working. Meanwhile, the notion to start a Q&A/advice-column-type newsletter has recurred for over a decade at this point. So I'm going to try that here with the 129 of you who've signed up to the list. I want to know those oddball ponderings you can't quite figure out how to search for, but know must have an answer.

By the way, the answer to "why _are_ the wafers they cut CPUs out of round when the CPUs are rectangles and squares?" is: the modern process that drives everyone toward TSMC's ever-shrinking nodes starts with spinning silicon in a process called the Czochralski method, which produces a round ingot of silicon called a boule, and spinning tends to produce round things. You can see an explanation of different processes for producing silicon here, and a video that details the process of turning these wafers into CPUs here.

You can sign up or manage your subscription here. I'll set up a dedicated newsletter for this if it takes off.

So: what's **your** imponderable? Reply and let me know. I'll answer it in the newsletter if I can.
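A follow-on to the round-wafer answer: rectangular dies on a round wafer mean wasted edge, and there's a standard back-of-envelope formula for estimating how many whole dies fit — wafer area divided by die area, minus a correction term for the partial dies lost around the circumference. A sketch (the formula is the common industry approximation; the 300 mm / 100 mm² numbers are illustrative, not any real chip's):

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    # Classic approximation: (wafer area / die area) minus an edge-loss term
    # (pi * d / sqrt(2 * die area)) for partial dies around the rim.
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# Illustrative: a 300 mm wafer and a 100 mm^2 die
print(gross_dies_per_wafer(300, 100))  # -> 640
```

The edge-loss term is why bigger wafers and smaller dies both help yield: the rim waste grows with the circumference, but the usable area grows with the square of the diameter.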
31.08.2025 11:28 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Original post on kyefox.com

Something I started to notice is some people become fixed in an era. They might have been progressive for that era, but the world moved on and they didn't. They would have marched with MLK, but gay rights are too far. Or they were pro-gay rights in the 90s, but trans rights is too far. And so on […]

04.08.2025 17:24 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Original post on kyefox.com

Some people have so fully bought into the blue state/red state lie that it's led them to what I can only describe as actual evil.

It goes so deep people will even say nonsense like "it's not the kids' fault their parents voted poorly" as though they have any idea how the parents voted. It's a […]

04.08.2025 17:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I hurt my tailbone once and the whole time it was recovering I thought about how unfair it is to have the thing to injure but not the thing to play with.

04.08.2025 17:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The greatest lie TV ever told is that Interpol is a police agency with cops with guns and badges and authority.

04.08.2025 17:22 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I remember being anti-feminist until I finally listened to all the friends telling me to read bell hooks and Judith Butler. I realized anti-feminists/MRAs are rank amateurs in their criticism of feminism next to Black and queer feminists.

04.08.2025 17:22 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It's hard enough questioning what we believe about ourselves. Questioning what we believe about others is another much harder level. A recurring issue I see is people who've done the work on gender and sexuality re: their own experience fail to do that work with their beliefs about other identities.

24.06.2025 21:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If I had money, I would bet something comes out in the next year or two that obsoletes transformers and moots all the concerns about them. LLMs kind of took everyone by surprise with how good they are, but they've induced people to look for what's next, or maybe what was left behind in the hype.

21.06.2025 19:08 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

β€ͺ[as I'm being hauled to the front edge of the ship's deck to be thrown overboard for making too many puns]

"ah yes, the punwale"

17.06.2025 15:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Soul Caliper, a game about a ghost phrenologist.

16.06.2025 13:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

How too many people see nonbinary: a center position between man and woman

Nonbinary in reality: a table of identity and experience where some items may or may not be associated with masculinity or femininity.

Assigned gender is also pretty meaningless, especially as we come into our own identity.

15.06.2025 15:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0