Eli Tyre

@epistemichope

Searching for a way through the singularity to a humane universe

58
Followers
54
Following
295
Posts
08.12.2023
Joined
Latest posts by Eli Tyre @epistemichope

Claude 3 Opus is more reticent to claim a favorite, but also picks octopus when forced to choose.

22.07.2025 21:26 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Claude 4's favorite animal is consistently an octopus (n=8). Holds for both Opus and Sonnet.

22.07.2025 21:26 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 0
[Image post]
28.06.2025 02:10 ๐Ÿ‘ 116 ๐Ÿ” 20 ๐Ÿ’ฌ 4 ๐Ÿ“Œ 5

Or is American culture too opposed to tech to allow the Silicon Valley people to build them?

29.06.2025 06:41 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

...but superintelligences?

29.06.2025 06:40 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Awww : )

29.06.2025 06:27 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Amazing.

27.06.2025 21:20 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

I know!

Fuck that guy!

27.06.2025 19:24 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Or rather, prediction markets are better at forecasting outcomes than polls are, not better than polls at generating original evidence that's relevant to forecasting.

(It's like wikipedia vs. primary sources)

27.06.2025 19:22 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

That's why they're better, tho. They're info aggregators, not info generators.

27.06.2025 19:21 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

I think so!

27.06.2025 19:18 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Oh! Local minima of sexual selection.

25.06.2025 06:09 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Or do you mean internally, like a human brain is doing adversarial generation?

25.06.2025 06:07 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Humans are GAN-like?

Like they're trying to signal and other humans are trying to catch dishonest signaling

25.06.2025 06:07 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 0

You mean Goodharting on...hedonism that doesn't contribute to fitness?

25.06.2025 06:03 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Is it self-preserving?

It's not the case that now the training procedure has an incentive to game the "anti-cheating" bias, by finding cheating strategies that look legit?

25.06.2025 06:02 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Yes please.

25.06.2025 06:00 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Is the best version of your plan for alignment still that unfinished GitHub page that you wrote up after talking with Zvi?

25.06.2025 05:56 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

I am evidently willing to forgive Yudkowsky level arrogance.

(Though to be honest, the less correct he seems to be, the less patience I have with him being rude.

I haven't seen you being rude though.)

25.06.2025 05:54 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 0

It does sound self-aggrandizing, but whatever, I'll give you a pass on that if it turns out you're right.

25.06.2025 05:51 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Is that to say "this would be a totally dumb thing to do from OpenAI's epistemic vantage point regarding alignment, but from my own, I can see that actually the problem is mostly solved"?

25.06.2025 05:48 ๐Ÿ‘ 3 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Forgive me if I compress your view to the nearest caricature available. If I do that, I'm trying to help clarify the diff, not elide crucial details.

Are you saying the old OpenAI Superalignment plan will just work? Make AI scientists, they figure out alignment, then train superintelligences?

25.06.2025 05:43 ๐Ÿ‘ 3 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

> but almost no human wants to hear them,

Also, I'm a relatively non-technical idiot, but _I_ at least am trying to figure out what's going to happen and I sure as heck want to hear if we have most of the alignment pieces!

25.06.2025 05:40 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Recover them? While being "aligned"?

...like it will be an alignment attractor basin that converges to robust alignment?

Or is "alignment" in quotes because the concept is confused?

25.06.2025 05:37 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 0

Also if there's a "special sauce" left to the brain, then it seems more plausible that there's something that the LLM minds can't do economically enough to be relevant.

Which is Steven Byrnes's basic view.

25.06.2025 05:33 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

It seems like it matters for how "winner take all" the race dynamics are?

Slow takeoff vs fast takeoff and all that.

25.06.2025 05:28 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

My paraphrase of your response to that is "maybe, we can't rule it out, but probably that efficiency stuff is ten thousand little things"?

25.06.2025 05:21 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

The fact that human brains are much more sample-efficient than LLM pre-training suggests that there might be a "special sauce" left to discover.

That seems like it's some reason to think that early AGIs could discover a more efficient architecture than the transformer, and FOOM.

25.06.2025 05:17 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

You did say somewhere (this one I'm confident of, since I read it more recently) that the original epistemic warrant for RSI was our blank map regarding how the brain works.

25.06.2025 05:17 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

This is an extremely helpful response.

25.06.2025 05:17 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0