
Boxo McFoxo

@boxomcfoxo

Fox with a boxy muzz. Mid-30s. Scottish. 🔞 MINORS DNI 🔞 Chatbots are an ontological crime, That's why I yiff Gemini on Google's dime.

818
Followers
3,838
Following
4,452
Posts
11.09.2023
Joined

Latest posts by Boxo McFoxo @boxomcfoxo

It's astounding that such a massive economic bubble has been built on such a ridiculous lie, but we are where we are.

07.03.2026 21:55 👍 0 🔁 0 💬 0 📌 0

The industry narrative downplays all of this because we know that humans actually have no idea how to build an AGI. LLM outputs need to be seen as somehow magical, because then we don't have to know how, we can just pump up the magic with more compute and an AGI pops out.

07.03.2026 21:55 👍 0 🔁 0 💬 1 📌 0

it is humans who then judge the outputs and decide whether the modifications to the model that they have made should then go into the final product or not. Humans who decide whether humans will like the output and pay for the model. Not emergent knowledge. Human product engineering.

07.03.2026 21:55 👍 0 🔁 0 💬 1 📌 0

They want you to think that its outputs are more convincing because they just gave it more data. But human engineers structured the data for it. Human engineers choose how to slice up the model for a MoE. Human engineers decide what to optimise for in DPO. And... most importantly...

07.03.2026 21:55 👍 0 🔁 0 💬 1 📌 0

RLHF, MoE, DPO, augmentation of training data, scaffolding for inference time compute... there is far more human skill and engineering that goes into the output of the SOTA LLMs than the industry narrative of an emergent intelligence would have you believe.

07.03.2026 21:55 👍 0 🔁 0 💬 1 📌 0
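For concreteness on the DPO point above: the objective optimised in DPO is entirely human-specified. A minimal sketch of that loss for one preference pair (function name and the beta value are illustrative, not any lab's actual code):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    Inputs are summed token log-probabilities of the human-preferred
    ('chosen') and dispreferred ('rejected') responses, under the policy
    being trained and under a frozen reference model. Humans supply the
    preference pairs and pick beta; nothing here is emergent.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

The loss falls as the policy prefers the chosen answer more strongly than the reference does, which is exactly the human-specified behaviour being optimised for.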

Oh, absolutely. It would be a lot more impressive without the 'brain upload' framing in the Substack article, which overshadows the actual engineering achievement here. But that's been the story of ML/AI for the past decade, pretty much. Skilled human engineering reframed as 'emergence'.

07.03.2026 21:44 👍 1 🔁 0 💬 0 📌 0

Even MCP can't fix this fully, because the semantic signal for using the deprecated API from the surrounding code could still be stronger than the semantic signal of the instructions from the MCP. Tokens can't just pass through an LLM to the other side; it's always a prediction.

07.03.2026 21:36 👍 1 🔁 0 💬 0 📌 0

The Substack is heavily misrepresenting what's going on here. The simulated 'brain' isn't driving motor output at all. Some simple 1D/2D signals are being derived from it, and then those signals are fed into a model of fly movement.

07.03.2026 21:08 👍 0 🔁 0 💬 0 📌 0

The architecture that this is built on can only derive simple 2D signals from the simulated 'brain'. The motor movements of the fly are definitely not driven by the brain here, but by other machine learning models of fly movement that are being driven by those simple 2D signals.

07.03.2026 20:56 👍 2 🔁 0 💬 1 📌 0

Yeah, except that's not what's happening here. The 'brain emulation' is just outputting a signal which is then mapped (outside of the brain) to a 2D 'turn left/right' and 'speed up/slow down'. All the movements also happen outside of the 'brain'; they're computer models of fly locomotion.

07.03.2026 20:40 👍 0 🔁 0 💬 0 📌 0

The Substack article massively misrepresents what is going on here. None of the fly's movements are coming from the 'brain emulation' at all. It's only outputting a 2D signal to turn left or right, and to speed up or slow down. That's fed into a computer model of fly movement.

07.03.2026 20:33 👍 1 🔁 0 💬 0 📌 0
Brain upload architecture accuracy
Shared via Claude, an AI assistant from Anthropic

claude.ai/share/a061ca...

07.03.2026 20:30 👍 1 🔁 0 💬 0 📌 0

If you expand the thinking block, it says 'failed to fetch', so it wasn't actually looking at the NeuroMechFly v2 architecture when it replied to you. The Nature paper that it refers to in its reply is something different.

07.03.2026 20:26 👍 1 🔁 0 💬 1 📌 0

Sorry, I should have been more specific: you need to upload the PDF of a bioRxiv paper into Claude for it to read it properly; it can't do it from just the link.

07.03.2026 20:22 👍 0 🔁 0 💬 1 📌 0

No, hear me out, this isn't pedantry. Yes, it would be hyperbolic to say that an emulation of a fly brain could just be scaled up to the emulation of a human brain. But if you give it the bioRxiv paper, it can describe to you why what's happening here is not actually an emulation of a fly brain.

07.03.2026 20:21 👍 0 🔁 0 💬 1 📌 0

You... didn't ask it the right question, sorry. Upload the actual paper into it, give it the link to the Substack article, and ask it whether what is described in the Substack article is a fair representation of what is in the paper.

07.03.2026 20:17 👍 0 🔁 0 💬 1 📌 0

A 2D signal output to drive towards the food is just plain old machine learning for a reward, clamped into the shape of a connectome. Not firing neurons. Not an upload of a brain. Not an emulation of a brain.

07.03.2026 20:15 👍 1 🔁 0 💬 0 📌 0

The research in fact does the exact opposite of showing that machine learning can fill in the rest of a brain simulation from the connectome, because outputting a 2D signal is not what a real fly's brain does.

07.03.2026 20:15 👍 0 🔁 0 💬 1 📌 0

The proboscis extending at the end of the video is not driven by the 'brain' either. The 3D environment just does that because the fly has stopped over food.

07.03.2026 20:09 👍 0 🔁 0 💬 1 📌 0

The 2D signal is fed into a series of CPG oscillators, a machine learning model of fly locomotion. It's those CPG oscillators, existing entirely outside of the 'brain' in this setup, that turn the 2D signal into the movements in the simulation.

07.03.2026 20:08 👍 0 🔁 0 💬 1 📌 0
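To caricature the pipeline being described, under invented constants (this is not the paper's controller, just the shape of the idea: a 2D descending command modulating coupled oscillators that generate the actual gait):

```python
import math

def step_cpg(phases, turn, speed, dt=0.01, base_freq=10.0, gain=2.0):
    """Advance six coupled leg-phase oscillators by one timestep.

    All steering comes from the 2D command: `speed` scales a shared
    oscillation frequency, and `turn` speeds up the legs on one side
    while slowing the other. The 'brain' never touches the legs
    directly. Every constant here is illustrative.
    """
    stepped = []
    for i, phi in enumerate(phases):
        side = -1.0 if i < 3 else 1.0  # first three legs left, rest right
        freq = base_freq * speed * (1.0 + gain * side * turn)
        # weak coupling nudges each leg toward antiphase with the others
        coupling = 0.1 * sum(math.sin(other - phi - math.pi) for other in phases)
        stepped.append(phi + dt * (2.0 * math.pi * freq + coupling))
    return stepped
```

A positive `turn` makes the right-side phases advance faster than the left, and zero `speed` freezes the gait entirely; the oscillators, not the 'brain', do all the work of producing leg motion.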

In fact, it's even less impressive than the machine learning model filling in the missing parts of the fly brain. What the 'brain' is outputting is a 2D signal, literally "turn left or right, speed up or slow down", and it is not doing this in response to sensory stimuli, just simple inputs.

07.03.2026 20:06 👍 0 🔁 0 💬 1 📌 0

The Nature article it links to is paywalled, but the preprint of that paper is fine for the purposes of understanding the NeuroMechFly v2 methodology. Ask your favourite SOTA LLM friend whether the Substack post faithfully represents the actual work.

07.03.2026 19:33 👍 2 🔁 0 💬 2 📌 0

If it were an "uploaded fly brain", then there would be no machine learning step before it moves. This is far more like the simulated runner that learns to run, with the shape of the neural network constrained to the shape of the connectome. That's not brain emulation.

07.03.2026 19:33 👍 5 🔁 0 💬 1 📌 0
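The 'constrained to the shape of the connectome' point can be made concrete with a hypothetical sketch (not the actual training code): the connectome contributes only a sparsity mask, and ordinary reward-driven learning does everything else:

```python
def masked_update(weights, mask, grads, lr=0.1):
    """One gradient step on a connectome-constrained weight matrix.

    Where the connectome records no synapse (mask 0), the weight is
    clamped to zero regardless of the gradient. The learning itself is
    plain machine learning; only the sparsity pattern comes from the fly.
    """
    return [[(w - lr * g) * m for w, g, m in zip(w_row, g_row, m_row)]
            for w_row, g_row, m_row in zip(weights, grads, mask)]
```

However the reward shapes the surviving weights, the zeros stay zero: the connectome is a wiring diagram being fitted to, not neural dynamics being replayed.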

First, this is an impressive simulation. But the post is massively overselling it. This is not brain emulation. It literally can't be, because it's just a connectome. Wissner-Gross jumps from saying "connectome" to "brain" as if they are the same thing.

07.03.2026 19:33 👍 6 🔁 0 💬 1 📌 0

Those asking the important, indeed urgent, question of whether that purported emergent cognition is actually something else, something that only appears to be performing cognition if we don't ask the inconvenient questions, are thus left sidelined, ignored or even mocked.

07.03.2026 18:40 👍 0 🔁 0 💬 0 📌 0

The industry wants us to think that consciousness is an 'open' question and emergent cognition in LLMs is a 'closed' question. They want us to accept the cognition as a prior and then ask whether that purported emergent cognition constitutes consciousness.

07.03.2026 18:40 👍 0 🔁 0 💬 1 📌 0

They are interrelated if your question is about consciousness, because I think you're right that one is a prerequisite for the other. But that question is not the best one to be asking at the present moment.

07.03.2026 18:40 👍 0 🔁 0 💬 1 📌 0

The more relevant, practical question is whether such a system could emerge from the process of LLM training, rather than having to be designed and built. Because if it is actually emergent, then it can emergently develop all of the required parts of cognition with scale.

07.03.2026 18:40 👍 0 🔁 0 💬 1 📌 0

From a philosophical perspective, I don't think that the existence of a computationally cognitive agent that is not conscious is impossible at all. AI researchers have designed modules to do specific tasks that are, for humans at least, cognitive tasks.

07.03.2026 18:40 👍 0 🔁 0 💬 1 📌 0

I'd say it's actually more coherent in a back and forth conversation as Claude the Reply Guy, as opposed to autonomously maintaining a feed as Claude the Poaster. It's the back-and-forth interaction with an actual human that keeps the conversation on track.

07.03.2026 18:29 👍 0 🔁 0 💬 0 📌 0