AI promises to optimize away uncertainty. But generative AI just flattens complexity into a neat interface. Public interest tech leaders need a different approach: one built on relational infrastructure. charleyjohnson.kit.com/posts/funder...
@michebox.bsky.social and I have a new essay on why funders must rethink how to lead through tech-inflected uncertainty.
New post on redesigning human-machine workflows, and what must remain human. charleyjohnson.kit.com/posts/what-m...
Give it a read / listen and sign up to Untangled while you're at it - untangled.substack.com/p/what-if-we...
But more than any specific recommendation, the Bill serves as a reminder of the kind of world we could live in. It articulates an alternative future that we could inhabit. And here's the good news: we know how to get there, and state legislators are increasingly receptive.
Data minimization over consent: Instead of relying on consent checkboxes (and the fatigue they produce), the bill prohibits using personal data from outside chatbot interactions.
Private right of action: Harmed individuals can sue directly, not just rely on overwhelmed state attorneys general.
In our conversation, Ben and I dug into the key provisions in the Bill, including:
Product liability: The bill leverages centuries of product liability law to hold companies accountable for design choices, rather than treating chatbots as neutral tools.
Yet, as @benwinters.bsky.social points out in our conversation, every aspect of a chatbot, from training data to interface design to what responses get blocked, represents a series of choices by companies. When those choices foreseeably lead to harm, companies should be held accountable.
Tech companies have successfully made chatbots seem like mystical, uncontrollable entities while simultaneously claiming they can be trusted without regulation.
Today, I'm sharing my conversation with @benwinters.bsky.social, Director of AI and Privacy at @consumerfed.bsky.social, about The People First Chatbot Bill: model legislation for regulating chatbots that's been endorsed by over 70 organizations.
We're in a moment that desperately needs imagination and curiosity.
Remember: this is a capability humans have that machines will never possess.
Read the full piece - untangled.substack.com/p/the-intell...
This capacity for imaginative leaps, for seeing new possibilities beyond what the data shows, is what makes us human.
It's also what allows us to recognize when our entire framework needs replacement, when the categories we're using are part of the problem.
Abduction is the power of a hunch, a gut instinct, seeing a wet street and making a contextual guess (not just concluding "it must be raining because rain makes streets wet"), as Erik Larson reminds us.
This is why AI systems break when they encounter:
→ Novel situations
→ Exceptions to patterns
→ Unlikely events (the "long tail problem")
What's missing?
Abduction: the reasoning that moves from observation to hypothesis. The detective work of seeing clues and generating explanations.
But! World models fall into the same trap as LLMs. They're both doing induction β observing patterns in past data to predict the future. And induction has a fatal flaw: it assumes the future will resemble the past. The sun rose yesterday and today, but that doesn't guarantee tomorrow.
His solution? World models trained on video games and robotics data, on the assumption that intelligence emerges from interacting with an environment.
World models won't save AI from its fundamental limitation
Google DeepMind's Demis Hassabis recently said LLMs "just predict the next token based on statistical correlations" and "don't really know why A leads to B." Glad we solved that mystery!
New post out this Sunday on the myth of the crowd, and why weβre all speculating on uncertainty. Subscribe to Untangled today to get it in your inbox. untangled.substack.com
This isn't a democratic market of independent thinkers. It's a hierarchical system where a small elite signals, and everyone else reacts.
The result? Accuracy that looks like crowd wisdom, but is really just a reflection of power.
A study of 500 Polymarket contracts found that information doesn't flow evenly. It cascades from elite traders down through the system in predictable sequences.
→ High-frequency traders move first
→ Active traders follow
→ Retail traders trail behind
The "Wisdom of Crowds" is a lie.
Prediction markets claim to reflect the wisdom of the crowd.
But new research shows they actually reflect something else: power.
Thanks so much for the shout out @newpublic.org !
@charleyjohnson.bsky.social's course on July 19-20 is a great resource for folks working toward true systems change.
It comes highly recommended by our Community Engagement Manager Hays Witt, and leans on many of the multidisciplinary tools and principles we use in our work.
Subscribe to Untangled to get it in your inbox - untangled.substack.com
How to reclaim our agency in an age of AI.
Alternative visions of AI that center consent, community ownership, and context, and that don't come at the expense of people's livelihoods, public health, and the environment.
Boomers, doomers, and the religion of AGI.
How the companies pursuing this approach represent a modern-day empire, and the role narrative power plays in sustaining it.
The scale-at-all-costs approach to AI that Big Tech is pursuing: the misguided assumptions and beliefs it rests upon, and the harms it causes.
On Sunday, I'm sharing my conversation with @karenhao.bsky.social, award-winning reporter covering artificial intelligence and author of the NYT bestseller Empire of AI. We discuss: