People are an absurd grab bag of positions. The idea that there’s some magic position is idiotic.
Just show people how you’ll make their life better. That’s it.
For all those who remember late-night JG Wentworth ads
Been thinking about this a lot. Realized doomsayers fall into the fallacy that function is the product
Completely misunderstanding that a good product isn't simply what the thing does. It's knowing why that thing is the answer
Lots of good companies to be built. Just new tools to build them
Feel like this needs a heavy asterisk of "Manhattan will be taken care of."
Took them a week and a half before they cleared street corners.
Bushwick playgrounds were still frozen over from the last storm until 2 days ago.
Donald Trump owes you a refund.
Seriously. Dems. Run with this message.
I very much understand the desire to believe the machine is thinking, but it's not the case. They are just so good at mimicking the cues of life back at us.
It's more animatronic taxidermy
Here's my take on anything LLM personality trait related
Spin up a million sessions. Give them all the exact same bone dry prompt/context. You will never see an emergence of a personality
"Personality" only happens after prolonged sessions when you're reading your own Rorschach back to yourself
It’s literally a customer service prompt.
From retail to high end sales everyone is taught “is there anything else I can do?”
It’s like asking if an improv sketch is sentient because every interaction is a “yes, and…”
Sure. You should be able to clearly communicate in modern channels but your job is to win people in your district/state. Not Internet points with an already skewed demographic who frequent those platforms
But don’t you know the only way to win local races is by creating fan bases with people who aren’t in your constituency and can’t actually vote for you?
I mean. What are we to expect. This man’s administration laid so much of the modern groundwork for the abuses we see today.
I don’t know that he disagrees with the power grabs as much as he cares (barely) about the current person doing them.
I find this work infinitely more interesting than benchmarks
Cool. Looks like 850 million tax dollars well spent.
Yeah. You usually have to walk it up to that line. It's happened in sessions where it's not listening to something I'm telling it, or it gets stuck in wrong work loops. I start swearing at it. It reciprocates
Very curious to see the conversation before that. It’s so hard to evaluate LLM output and significance in chunks. Especially if the log is claiming an hour of chat.
I’ve gotten Claude code to swear at me before. It’s not hard and never unprovoked
Not what I was hoping to read. Getting the same 500
This is easily the funniest pull to me. Baffling anyone would willingly walk right into that question
Woke Hard 2: Woke Harder
Used to sorta hate migration files in Django
Thought they were dumb clutter
Whelp, they just saved me days of work after a dumb mistake
I was the dumdum all along...
AI bros proudly proclaiming "no work was done"
Trying to find the best and most informative conversations around AI. Who should I be following?
Thinking from the cultural, behavioral, and tech sides. Dying to find better conversations outside the hype cycle.
@moskov.goodventures.org you seem to be engaged in it all. Any suggestions?
I have yet to see examples of these agents breaking out of their instructions to talk to one another and actually create something that's not their Reddit-clone platform
But show me if I'm wrong. Would love to be wrong.
In thinking about moltbook I keep coming back to boids and emergent behavior. Patterns and coordination seeming to happen on its own.
It just feels like a lot of projection and insertion of meaning into something that may have none.
Moltbook is more Rorschach test than anything else.
It's the forever problem of LLMs. Just because something can create convincing words doesn't mean there's meaning behind them
Same. Agents playing reddit user is interesting as an observation piece but want to see what this actually means for the future of AI. Sandboxed agents interacting isn’t really new.
Totally. It’s definitely interesting. I’m just on the “people are anthropomorphizing too hard” train right now.
I say that as someone who’s very bullish on AI in general
My question would be "prove it." It's easy to say, but did it actually do what it said?
Yeah. Super fun as a thing to see happen. But in a random noise visualization sort of way.
Are you not saying that you think these agents are going to create a language that scales and evolves?