Sixty years ago, Jennie Lee's vision created The Open University. Welcoming all backgrounds, millions have started life-changing journeys. #OUfamily #TheOpenUniversity #OU60
@morungos
Cognitive/social scientist and occasional coder. Umquhile Mancunian. Purveyor of Jurassic Park memes. Writes on modernization and technology. Consciously uncoupling from corporate shenanigans. Halifax, Nova Scotia https://morungos.com/
I know I wrote it, but I truly think this should be a massive story
The Secretary of Defense told an AI company to remove its bot's moral guardrails so it can operate surveillance and lethal weapons autonomously
When warned that could endanger our own troops, he didn't care
Totally agree. This is why I don't think of AI as a technology but as part of the unfolding of reflexive modernization. Other technologies (surveillance, IoT, crypto...) are all part of the same unfolding. It is the re-modernization of all of industry. bsky.app/profile/moru...
In fact, precisely the same arguments that used to be used about latent semantic analysis in the 1990s. Although the "latent" there might have been a little more honest.
Exactly. A lot of it is just straight knowledge. What Google was completely missing when their AI told me an osprey was a wading bird with a long thin beak.
There's usually a little group of them around the lake where I hike during summer. Beautiful things.
This isn't a diss on that article by the way. Not entirely. It's more a lament. What will we lose by automating science? By focusing on the surface, on measures and metrics. Is what we lose valuable? To some, probably not. But to me, it was the craft of building science that made it all worthwhile.
In short, for me at least, it's not research at all. It's a simulacrum of research. It's Wile E. Coyote research: trebling the effort while the point becomes increasingly distant.
Not only is it not my thing, I genuinely have no idea how I'd have mentored researchers to handle it.
I read this and I am glad I'm not an academic any more.
It feels like the entire nature of academia -- community, curiosity, inquiry, all of that -- has been replaced by a drive for productivity, strength, impact. Quality is no longer usefulness; it is a constantly shifting set of metrics.
I did look at this a little, because it crossed wires with my work on the ascription of mentality. In the end, I came to believe that perceived similarity is one factor that promotes that ascription. As so often, frameworks like these are fascinating insights into human psychology.
I found a great article a while ago which did a deep dive into the actual transcripts from three cases -- not quite as catastrophic, but still ending very badly. There were consistent patterns. What this tech needs is some old-fashioned qualitative research, but that's mostly been cut, globally.
I'm not using that as an argument for legal blame. As a society, if something like that is a factor in causing harms, we have a duty to address and mitigate it. In effect, we need to be able to regulate it, as we do other media.
I don't think reporting is the only issue. There is accumulating evidence that chatbots can, under some circumstances, reinforce harmful thought patterns. That has definitely happened in other cases. So reporting aside, it's not unlikely the tech is a contributing factor.
This.
How can we have a sense of achievement, of fulfillment, without working for it? We cannot self-actualize for free.
I am not sure I can make it there myself, and there are plenty of others who would benefit more and give more than I can, but damn, this is so tempting.
This looks like an *AMAZING* event!
First, COGS at Sussex does outstanding cog sci work.
Second, Andy Clark's ideas have blown my mind, positively, on many occasions. I rate his work extremely highly.
Third, workshops are awesome to develop good research communities. (And I hate conferences).
I'd add absence of leadership and strategy, just reacting to events. From my experiences of being the first rat off sinking ships of employment, that was a surprisingly big factor in toxicity.
If they genuinely think people are going to let AIs have access to their credit cards, they're more delusional than I thought. A few rich bros, sure, but everyday folks on a budget? No chance.
That's a possibility, but the only way growth in retail can happen in aggregate is that smaller stores are driven out -- effectively all retail run by a cartel of global megacorporations. It can happen, arguably is happening, I don't see how AI enables it beyond what the internet has already done.
Maybe I am naive, but "AI-fueled growth" puzzles me.
Generally automation doesn't do growth -- especially when you are already hyperscale. It may cut costs, usually by transferring them, e.g., to consumers. But... growth? Someone will have to explain that to me. It sounds all hopey-wishy.
Supporters

Our work is supported by a variety of foundations, charities, and individuals who share our commitment to high-quality journalism about AI (grouped by lifetime giving):

$1M+
- Coefficient Giving (formerly Open Philanthropy) (2023, 2024, 2025)
- Survival and Flourishing Fund (2024, 2025)

$100k - $1M
- The Casey & Family Foundation (2025)
- EA Infrastructure Fund (2023)
- Future of Life Institute (2024)

$10k - $100k
- ACX Grants (2024)
- AI Safety Tactical Opportunities Fund (2024)
- Cullen O'Keefe (2025)
- Hazel Browne (2024)
- Newman Family Charitable Fund (2025)
- Robert and Virginia Shiller Foundation (2023, 2024)

We have also received donations of less than $10,000 from a variety of generous individual donors.

Our donors have no editorial control over the work of Tarbell, our fellows, or our grantees. Tarbell does not accept anonymous donations greater than $10,000. For details, see the Donor relations section of our ethics and standards policies.
Dude who wrote about how "the left is missing out on AI" is on here. Do you see who they are funded by? The biggest EA funders, the longtermist institutes we've been writing about and documenting, including FLI where Muskrat is still an advisor.
Our ability to make inferences about the behaviour of a system is more a property of *us* than of the system. So it is at ourselves and our reactions we need to look.
I don't think that's it. "What we think of as computing" was itself framed on an abstracted version of human behaviour. There are many computing-like things we've had which were very different: cellular automata, GAs, etc. Computing isn't some magical rational god-phenomenon.
[Dennis Nedry "see, nobody cares" meme from Jurassic Park, captioned: "Hey everybody, this guy still posts on X!"] See? Everyone is horrified and disappointed. They feel it speaks directly to your values.
The basic setup is that it is driven by ranked constraints that evaluate candidates. Each person can have different constraints and rankings, but they all have to "work" well enough to be useful.
So, so many. That's the problem.
But if I were you I'd take a look at optimality theory, which intriguingly arose from attempts to bridge connectionism and universal grammars. Much of it is on phonology, sadly, which is quite technical, but it's also valid for grammar, and it's quite intriguing.
First, thank you for this service.
Second, this thread is gold. It's a real insight into how HRM's management is so dysfunctional. But I'm not sure how the city can save itself from it.
I mean, that's slightly flippant, but what we do know is that we do not communicate by predicting logical words to put into sentences. That's essentially a behaviourist retrospective reconstruction. We now know better.
That's like saying "how does human language work?"
Language is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to language. (With apologies to Douglas Adams.)
PUT IT BACK. PEBBLE IS NOT FRIEND.