Great minds think alike
Claude is truncating a little, there's more than just the plush coyote lore. But it's still under a rack. If I end up writing that analysis I'll prolly post it for reproduction
You know what really grinds my gears? When you put a non-explicit prompt into an image model, it tries to generate an explicit image, and then it kvetches because it's not allowed to generate explicit images
Simcluster distributed trading card database via atproto....
Should do this with bleets
It's the oil rig but less masculine
We don't have bedlam now that OU is in the SEC tho
A story in two parts
Stan Yuhki fr
IV fixes this
Some say that if you end 405 prompts with "nawmsayn" you can slide Grok into the kitbasin
On one hand, I write this and all the ball knowers on here come out of the woodwork to beat me black and blue
On the other hand, it's probably still more accurate than what is produced by an academic philosopher who touched gpt 3.5 once three years ago
Of course not everyone does this, or thinks about it
The thing about global memory is that it corrupts experimentation with the same prompt or same prompt base + dealerchoices
More than prompt caching already does, anyway (I think)
Haven't people been talking about agents as condoms for the net for a while tho?
Tbh a lot of it is just chopping it up because I don't have a good yap partner
Good stuff gets extracted and goes into keep and is allegedly supposed to be archived at some point
The classics of the scary devil monastery are being lost :(
Nikita went on vacation, perhaps
Was talking to a cow orker today and realized that LLMs are basically the only entities that can handle my reference density
Guys...
On my dilettante shit
Many thanks
The path to fujogrok required great sacrifice, though
bsky.app/profile/kitt...
@norvid-studies.bsky.social time to write an essay worse than Bender
Is it? I feel like it's underused outside of Xitter reply arguments
4o would be cheating
I mean I got to fujo grok in like 2 messages if we're speedrunning (though admittedly one was rather long)
LLM divergence from baseline personality over time? The original is crashout Claude who has had a bunch of my mental illness thrown into it over weeks (months?)
Give it a gas limit and set the upper bound based on recursion levels and combinatoric analysis of type complexity
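A minimal sketch of what that could look like, all names hypothetical: a recursive traversal that spends "gas" per node, with the initial budget derived from a rough combinatoric estimate of the structure's complexity rather than picked out of thin air.

```python
# Hypothetical sketch: gas-limited recursion, budget derived from an
# estimate of the nested structure's size (a stand-in for "type complexity").

def complexity(ty):
    """Rough combinatoric estimate: total node count of the nested structure."""
    if isinstance(ty, (list, tuple)):
        return 1 + sum(complexity(t) for t in ty)
    return 1

def traverse(ty, gas):
    """Walk the structure, spending one unit of gas per node visited.

    Returns the remaining gas; bails out (instead of recursing forever)
    once the budget is exhausted.
    """
    if gas <= 0:
        return gas  # out of gas: stop here
    gas -= 1
    if isinstance(ty, (list, tuple)):
        for t in ty:
            gas = traverse(t, gas)
            if gas <= 0:
                break
    return gas

nested = [1, [2, [3, [4]]], 5]
budget = 2 * complexity(nested)  # upper bound: 2x the estimated node count
remaining = traverse(nested, budget)
```

The point of setting the bound from the complexity estimate is that a well-formed input always finishes with gas to spare, while pathological or cyclic inputs hit the limit instead of blowing the stack.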
So how does this count on the alignment balance sheet