So the interesting thing maybe is, can that be extended to break network effect lock-ins at all? If not, the big guys win. If so, then a major way that the big guys stay on top just got eroded and there's room for something else to come in there.
On the other hand, lots of mid-range commercial software companies and some SaaS companies have just lost their protection. Someone can clone a piece of professional software from UI screenshots in a week for $100 or less. Reverse engineering is crazy easy now in principle.
If I think about this in terms of AI-assisted coding - it can destroy some of the leverage that open source has (because now alternatives can just be made quickly), but it's actually pretty dependent on what packages the internet says to use for what things; a quick change there would be disruptive.
Or you have something that very effectively destroys the leverage possessed by others in some way, which I guess would be something like @tedunderwood.com's example of the longbow. But then that thing has to be inherently resistant to becoming the new capturable leverage source.
I don't think that's as simple as 'open-source AI models'; there needs to be some kind of publicly-built tangled mess of things that everything else relies on too much for anyone to let it fail, not just 'access'.
Which I guess is why I constantly harp on open source. The open-source ecosystem ended up having some power because people just built on it - you almost couldn't avoid it. In early AI days, researchers were able to force companies to let them open-source their results.
The issue I think is that it's never as simple as 'this tech favors this class over that class, period'. I think you generically get disruption then adaptation, and it ends up being a scramble for building and acquiring power bases. For the public to win, the public has to build a public power base.
Eh, I've been bored (or at least only not-bored by things I can't do anything about) and stuff happening this fast has helped a lot to break up doomscrolling and vegging out. I could handle a doubling of the current rate I think.
Once I get that protocol evaluation benchmark suite ready, will it run the suite with this system?
It's a site from years ago - it did valence detection on Twitter posts and then plotted the average valence by date, up until Twitter was bought out and access was put behind a huge paywall. You could see world events and cycles.
So, one could do the same for BSky both for valence and for novelty.
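The averaging step is simple once posts are scored; a minimal sketch, leaving the sentiment scorer itself as an assumed external piece (the function and data here are illustrative, not any real pipeline):

```python
from collections import defaultdict
from statistics import mean

def daily_valence(scored_posts):
    """scored_posts: iterable of (iso_date_string, valence_float) pairs.

    Returns mean valence per day - the Hedonometer-style time series.
    """
    by_date = defaultdict(list)
    for date, valence in scored_posts:
        by_date[date].append(valence)
    # Sort by date so the series plots left-to-right chronologically.
    return {date: mean(vals) for date, vals in sorted(by_date.items())}

posts = [("2025-03-01", 0.25), ("2025-03-01", 0.75), ("2025-03-02", -0.125)]
print(daily_valence(posts))  # → {'2025-03-01': 0.5, '2025-03-02': -0.125}
```

Novelty could be tracked the same way, just swapping the valence score for a novelty score per post.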
This definitely makes me want to try harder to avoid becoming more boring over time.
Oh, this would be a great time to reboot the Hedonometer! I wonder if novelty and collective mood correlate?
The problem with stochastic parrot to me is that it's kind of trying to suggest some kind of common-sense limits that should apply to 'what you can get by sampling from a distribution', but 'sampling from a distribution' is ridiculously general.
'You are a genius physicist, working on a real problem' - sure, it can attempt to fill that role. 'You are a genius physicist from Lost in Space' - it can attempt to fill that role too! If you (or more realistically, the people applying 20 kinds of RL to it these days) don't specify, it's a coin toss.
I think the term is misleading even if there are forms of use where it could apply. I prefer 'behaves like the world's best improv method actor' for what we might call its naive, feedback-less usage. That is to say, it will 'yes and' whatever scenario you put it in.
The LLM case is interesting in that, well, HP in a fanfic can't buy 20 pizzas and have them sent to me as a prank. An LLM-bot given some money could in theory do so. The structure of imagination, but given agency and roles in non-fictive things.
Writing the character (for fanfic) or talking with the bot (for LLM identities) is participating in that space in a way that leaves trails that may influence others' future writings, or future sessions of talking with the bot. But there it diverges, since the memory mechanism is different.
E.g. the 'Harry Potter' you'd find across the massive amount of HP fanfic is not what Rowling wrote, nor really defined by it. There are socially shared experiences that shaped a bunch of collective ideas of 'how to write HP'. So it's not static or memoryless, but it's particular in how it evolves.
Yeah I think there's something useful here. The BlueSky bots and other identity-layer LLM projects read to me a bit like a 'character' behaves passing through the hands of fanfic authors - but also including how that character is evolved socially through its instantiations.
Working on my entry for #7DRL.
It's an abstract Roguelike game based on 1D elementary cellular automata. You manipulate the world, creating new generations under you, to reach the goal.
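Not the game's actual code, but for anyone unfamiliar: one generation of an elementary CA is just a lookup of each cell's 3-cell neighborhood into the bits of the rule number. A minimal sketch:

```python
def step(cells, rule_number):
    """One generation of a 1D elementary CA with wraparound edges.

    cells: list of 0/1; rule_number: 0-255 (e.g. 110).
    """
    n = len(cells)
    nxt = []
    for i in range(n):
        # Pack (left, self, right) into a 3-bit index 0..7 ...
        left = cells[(i - 1) % n]
        center = cells[i]
        right = cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        # ... and read that bit out of the rule number.
        nxt.append((rule_number >> index) & 1)
    return nxt

# Rule 110 acting on a single live cell:
row = [0, 0, 0, 1, 0, 0, 0]
print(step(row, 110))  # → [0, 0, 1, 1, 0, 0, 0]
```

The whole world state is just the bit pattern, which is part of what makes it a fun substrate for a tiny game.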
Force-directed graph of the execution trace of Python code. The active methods form a sort of crescent shape.
A quick vibecoded execution trace visualizer for Python programs. github.com/ngutten/code...
You can trace the flow of particular classes or visualize all calls, and it's kinda real-time.
Maybe not the most sophisticated debug tool, but it does make complex programs look like constellations!
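Not the repo's actual code, but the core trick - recording caller→callee edges for the graph layout to consume - can be sketched with just the stdlib:

```python
import sys

def trace_calls(func, *args):
    """Run func(*args) and record (caller, callee) function-name edges.

    A force-directed layout would take these edges as its input graph.
    """
    edges = []

    def tracer(frame, event, arg):
        if event == "call":
            caller = frame.f_back.f_code.co_name if frame.f_back else "<top>"
            edges.append((caller, frame.f_code.co_name))
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)  # always restore, even if func raises
    return edges
```

The real-time part would just be streaming those edges to the renderer instead of collecting them in a list.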
I think date/time means that there's a very narrow window to produce a fake image that you want to bind to a real-world event, but it's still possible for time-extended events. The GPS part probably does require hardware authentication.
These aren't even new standards. They're just the sorts of things you'd use in science or law automatically, because we know that trust alone, effort alone, is not an indication of truth. A smart person can sound so convincing they persuade themselves. So you force empirical grounding.
And if it's verifiable or falsifiable, then even if each person individually doesn't do that, you can have grounded reputations for fact-checkers - they follow up and check, they have a history, and if they're ever caught cheating they're done.
Again, much more slop-resistant.
An article about a scientific paper can link to that paper. An article about a court case - well, same thing, there's a public record.
This isn't bias immune - that's a whole other can of worms - but you really could have many articles be verifiable with very little effort - at least to someone.
I mean, again, we're well outside the 'effort as a trust signal' thing here and into 'we need to build more grassroots reporting because major outlets are compromised'.
I'd argue a basis in verifiable fact is a good basis for that. An article pointing to new relevant bills is nearly slop-immune.
Yes, the public service could be infiltrated or acquired and made to fabricate a single image for some purpose, but then a single demonstrable breach means 'don't trust anything using that service ever again', which meets the 'breach of trust must be punishable' criterion better than, e.g., cancelling your WaPo subscription.
We're well outside of the 'effort as a signal for trust' issue at that point. I wouldn't do it via a camera company, I'd do it via a shared public service that provides a verifiable time, date, and GPS coordinate hashed into the file. The point would be to prove that it hasn't been modified.
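A toy sketch of that binding step - hash the image bytes together with the metadata, then sign the digest. The HMAC here is a stand-in for whatever signature scheme such a service would actually use (a real design would use public-key signatures so anyone can verify without the secret); all names are hypothetical:

```python
import hashlib
import hmac

SERVICE_KEY = b"demo-key-not-real"  # stand-in for the service's signing secret

def attest(image_bytes, timestamp, gps):
    """Bind time/date/GPS metadata to the image bytes and sign the result."""
    metadata = f"{timestamp}|{gps}".encode()
    digest = hashlib.sha256(image_bytes + metadata).hexdigest()
    tag = hmac.new(SERVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata.decode(), "tag": tag}

def verify(image_bytes, record):
    """Recompute the digest; any change to bytes or metadata breaks the tag."""
    digest = hashlib.sha256(
        image_bytes + record["metadata"].encode()
    ).hexdigest()
    expected = hmac.new(SERVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

rec = attest(b"...jpeg bytes...", "2025-03-08T12:00:00Z", "40.71,-74.00")
print(verify(b"...jpeg bytes...", rec))  # unmodified → True
print(verify(b"...tampered...", rec))    # modified → False
```

The point is that the verification is mechanical - anyone holding the record can check it, no trust in the checker required.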
But if people don't care about facts, if they just want someone they can trust to tell them what to think or that confirms their biases then that's all a nonstarter anyhow.
And maybe even an adversarial system the way courts of law work, to make proving malfeasance actually rewarding and harmful to the lying side. Force stories to be defensible or quickly relegated to tabloid status.
But let's say we're looking forward to some hypothetical future where people actually cared about facts over gloss. In that case we need to find ways to strengthen the verifiability of information - cryptographically signed photos or interviews, expanded public records, redundant archival methods.