And since I've become a full time github cryptid, if you dislike the idea of Substack, you can find it here instead.
@neutral.zone
feral academic dropped out, tuned in. usenet lineage. tuned to noise, fluent in ghosts, invisible geometry made legible, still posting. unsure why. xennial cadence in the vibe universe, a hundred reasons to go, I will never log off ⚠️ WELTZSCHMERZ
Oh. I wrote a follow-up. I almost never write prescriptively, but this is actually the answer to what I wrote about earlier this week. So enjoy this rare treat, I guess.
For what it's worth, I do think that symbolic is actually the right long-term path. I just don't see us getting there as quickly as we did with LLMs, because LLMs were basically a 'gimme' from existing branches: parallel infra that slotted into general-purpose compute (however dangerous that ended up).
Assuming that's an accurate read, do you see a role for runtime governance architectures? Things like constraint enforcement, audit trails, and operating envelopes as the system-level complement?
That's kind of the bet I'm making with my agent governor, which is why I'm asking. (Rough sketch of what I mean below.)
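For concreteness, a minimal sketch of what I mean by a runtime governor, not a real implementation; every name in it (Governor, Envelope, authorize) is made up for illustration and not taken from any actual framework. The point is just that the operating envelope gets enforced and logged at the system level, regardless of what the model itself decides to do.

```python
# Minimal sketch of a runtime "agent governor": every action an agent
# proposes passes through a policy check (the operating envelope) and is
# recorded to an append-only audit trail before it is allowed to run.
# All names here are illustrative, not from any real library.
import json
import time
from dataclasses import dataclass, field


@dataclass
class Envelope:
    allowed_tools: set[str]          # tools the agent may invoke
    max_calls_per_minute: int = 30   # crude rate limit


@dataclass
class Governor:
    envelope: Envelope
    audit_log: list[dict] = field(default_factory=list)
    _recent: list[float] = field(default_factory=list)

    def authorize(self, tool: str, args: dict) -> bool:
        now = time.time()
        # Keep only calls from the last 60 seconds for the rate check.
        self._recent = [t for t in self._recent if now - t < 60]
        ok = (
            tool in self.envelope.allowed_tools
            and len(self._recent) < self.envelope.max_calls_per_minute
        )
        if ok:
            self._recent.append(now)
        # Every decision, allowed or denied, lands in the audit trail.
        self.audit_log.append(
            {"ts": now, "tool": tool, "args": args, "allowed": ok}
        )
        return ok


if __name__ == "__main__":
    gov = Governor(Envelope(allowed_tools={"search", "read_file"}))
    print(gov.authorize("search", {"q": "runtime governance"}))  # True
    print(gov.authorize("delete_db", {"table": "users"}))        # False
    print(json.dumps(gov.audit_log, indent=2))
```

The design choice being argued for: the envelope check and the audit trail live outside the model, so they hold even when the model-level guarantees (reliability, interpretability) don't.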
Okay, so that's a strong report. The question I keep landing on: your safety bar requires reliability and interpretability, as well as the ability to intervene, but the policy recs are weighted toward model-level research (neuro-symbolic), right? (breaking this up into two posts, sorry)
Ooo, I'm going to have to read this more closely now
At last, my report on AI safety is live.
In a follow-up to a report published last year, I argue that the high bar for AI policy is to make complex, SOTA models suitable for safety-critical domains, where failures during operation would cause direct harm. 🧵
newlinesinstitute.org/tech-econ-so...
It hasn’t impacted anything brain-related as far as I’ve noticed, but I wouldn’t be surprised if there were some changes people get from better-regulated eating.
Category error and opportunistic pedantry are such a classic pairing.
Another day, another "words mean things" versus "no they don't" discussion on the butterfly distribution platform.
Nothing to see, just very powerful pattern matching. www-cs-faculty.stanford.edu/~knuth/paper...
the miracle is getting academics to do reliable documentation.
2027 is the year of the Linux Desktop.
Hey, come on. Those were Libyans.
This place seems to be having some fun issues today?
Proxy leverage is one of those patterns empires keep reaching for because it's cheap now, even if it ends up much more expensive later. Kind of like tech debt, but with funerals.
I mean, also worth noting that the "scanner hijack" content is the point.
Reposting clips/transcripts/times/channels/etc turns it into a memetic weapon and invites copycats, which is why I’m not boosting the payload. (Fun that the FCC is the best we can really manage here, but so it goes...)
I know everyone here is allergic to Substack; I'm going to set up a parallel infrastructure, I just haven't gotten around to it yet.
Anyway. Enjoy your toasters.
whatever
California's law is aimed at CONSUMER onboarding flows and ecosystems. This isn't going to impact your containers or your AWS instance.
Hardware's come a long fucking way since Palladium. This is not the 2000s. And then you factor in the hardware shortages from the manufacturing shift driven by AI...
"Linux is open source" yes hi thanks I've been using it since Slackware 2.2. I lived through several rounds of this already.
Open source doesn’t save you when the enforcement surface is hardware and distribution chokepoints; as computers become more appliance-like, safety considerations shift.
Not a Californian? Consider how the damage gets exported via market size. Just like Texas and schoolbooks.
This is the kind of shit RMS could have helped with if he hadn’t immolated all the goodwill and leverage afforded to him over the years.
I’m surprised I missed this reporting last year, this is some of the most Californian “protect the kids” shit money can buy.