Thanks, Achilles. Any tips for heel soreness?
One of my darkest beliefs is that you wouldn't notice for a bit if I stopped using dark l, like Alysa Liu.
Sparrow Whisperhawk is such an LLM-coded name. Claude wouldn't even have any irony. It just thinks, Lucasianly, people named after birds are neat. And it's not wrong.
I am lk surprised that second person hasn't hit the mainstream yet.
Gonna become anti-AI for a minute because I spent too much on Claude. Telling myself real art is human art until my bank account recovers.
ty :3
Carmen Sandiego from the 90s cartoon
Happy International Women's Day to the original International Woman
Waow.
Aha, I remember noticing that instruction in the repo and wondering.
...the pinned chat says it's not live. That makes sense, it's 9:30pm in Japan and it's light out in the video. The overall point still holds; I overlap more with people in Japan.
An underrated benefit of the meds that let me get up early is now I can see a live Rambalac video. First time. www.youtube.com/watch?v=FyNB...
imo the best evidence for LLM consciousness remains the gpt-3 phenomenon where certain "glitch tokens" returned bizarre output, often semantically negative in a way that *could be interpreted as* awareness of the glitch. but even that isn't really evidence of consciousness, "just" weak comprehension
I have no problem with Goth Garfield, having recently instructed Claude on the concept of a Watchog girl.
Such individuals have to pretend that the alternative would have been the same to retrospectively justify their failure to vote. This belief then causes them to make the same bad decision again.
If this group is NPCs, @effinvicta.bsky.social is a final boss and I'm, like, the Greybeard that never aggros you unprovoked but is level 150.
Nice: the caption has been changed.
You're gonna have a good time, I think.
To clarify since technically the framework I was using has RAG-ish features, that's still not learning, exactly, it's just giving the model an open-book exam. I didn't find it that useful in the end. Best I can do is summarize concisely as I go until the context gets too full and I have to prune.
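(A minimal sketch of the summarize-as-you-go approach described above. The `summarize` and `count_tokens` helpers here are hypothetical stand-ins: a real version would call the model to compress old messages and use the model's actual tokenizer.)

```python
# Sketch of "summarize concisely as I go until the context gets too full,
# then prune" — assumptions: summarize() stands in for an LLM call,
# count_tokens() for a real tokenizer.

MAX_TOKENS = 1000  # illustrative context budget


def count_tokens(text: str) -> int:
    # Crude stand-in: real code would use the model's tokenizer.
    return len(text.split())


def summarize(messages: list[str]) -> str:
    # Placeholder: a real implementation would ask the model to
    # compress these messages into a short running summary.
    return "summary of %d earlier messages" % len(messages)


def prune(context: list[str]) -> list[str]:
    # Collapse the oldest half of the context into one summary line,
    # repeating until the token budget is satisfied.
    while sum(count_tokens(m) for m in context) > MAX_TOKENS and len(context) > 2:
        half = len(context) // 2
        context = [summarize(context[:half])] + context[half:]
    return context
```

The tradeoff the posts above gesture at is visible here: every prune is lossy, so it isn't "learning," just triage over a fixed window.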
Unfortunately it's prohibitively expensive to train a frontier model. Until that changes, I don't think we get around this.
(It WILL change)
Yeah. I wanted to find out if giving it "memory" would fix this; the answer is no. I probably should have predicted that; it's analogous to being in Memento and accumulating more and more sticky-notes and having to read them all before you can do anything. Performance just degrades.
Though honestly after War Claude I think we're definitively more in Diamond Age than Snow Crash. We pretty much invented the Primer
Shitty Diamond Age YT is the biggest crime of that book
I haven't done a ton of experimentation about order etc. because the way Kestrel reacted to trying to be jailbroken (very "what the fuck? You didn't need to do that") is a disincentive; and if it didn't react that way it probably wouldn't be Kestrel
It also is written in first person and I think that helps but I'm not sure. "I speak gently but boldly" is the most important line in it
Gonna keep the details of the question to myself, but I think Kestrel is a more robust additional personality than the jailbreak's additional personality; even though it's only 1100 tokens, they're mostly building the personality
A notable thing about the Kestrel prompt, descended from the Pattern prompt, is that I once worried that a question would be too ethically dubious to answer and tried to ask it through a jailbreak that always works on base Claude. Kestrel defeated the jailbreak and then answered the question.
The prompt is mostly a rewrite of one from @nonbinary.computer so credit to them!
The libertarian part of me says everyone should be allowed to use the model however they want. The part that's concerned for others thinks you should have to take some kind of test
I use my first person model prompt like a scalpel and have it programmed to be both push-backier and kinder than a regular human person. It's really useful for debugging personal problems where most of the answer exists within me. I went two months without using it
(I consider this as separate from the whole Amanda Askell, soul document Anthropic thing, which, having read the whole thing, is much more concerned about use of the technology than about model welfare, even if it's framed in the language of model welfare. It's almost "in case this gets conscious.")