@taraxaruncation
He/him mastodon.social/@TaraxaRuncation Moved from twitter.com/TaraxaRuncation - no longer posting there. Used to perform physical Taraxacum runcation in my garden, but now I'm more enlightened. #BloomScrolling #SlavaUkraini #Tinzo #BookClubRadio
We've gone from "The US needs to arm Ukraine" to "Ukraine needs to arm the US." www.newsweek.com/trump-drones...
Sorry I'm not more open-minded about LLMs, it's just some fucking maniacs shoveled out a bunch of useless bloatware featuring that technology, did not give me any chance to opt out, reorganized the entire economy around it, zeroed out gains made by green energy, and made it impossible to buy RAM
An American tradition.
I spent 6 years of my life writing a PhD about UK policy in the Persian Gulf in the 1960s, including the role of the Anglo-American alliance.
Key to the Persian Gulf policy of both countries: AVOID a power vacuum under any circumstances & keep Iran & Saudi Arabia from attacking the smaller Gulf States.
This on its own is interesting. However, I'm also curious about how this potentially interacts with how Oracle are financing their intended substantial AI expenditure.
Win98 would be unimpressed if it hadn't crashed
1959 Aston Martin DB4 MK1 rear view minus bumpers in dark blue. So clean you could eat your dinner off it. Inspires primal urges.
1959 Aston Martin DB4 Series 1
assets.carandclassic.com/uploads/cars...
It's now time for something a bit earlier. It's just so good to watch. m.youtube.com/watch?v=648U...
m.youtube.com/watch?v=EgMI...
There aren't many times when I wish for time travel, but this recording is sufficient justification
Slinging parts for Harry?
Yes, officer, I was definitely self-regulating my car's speed as I drove past you.
Two-panel cartoon of a gentleman with a neatly drawn moustache and groomed hair, sipping his cup of tea in the first panel and then exclaiming "bloody hell" in the second
Can't be much worse than the turtle. He said what?
Don't even need to listen to the audio. Crikey
The tech oligarchs pushing AI do not care if you can afford a computer, because they do not truly love the computer. aftermath.site/ram-prices-hdd...
Reads: Most importantly, there is no AI without massive financial and ideological backing. It is therefore pointless to discuss its techniques or capabilities without asking who controls it, who benefits from it, who builds and deploys it, and what it is doing in the world. As Stafford Beer (2002) argued, the purpose of a system is what it does.
Reads: Though less explicit than Thiel's call to replace politics with technology, major tech firms have effectively privatised core digital public goods. Platforms like Facebook, Google Search, and OpenAI's ChatGPT operate at infrastructural scale in Ireland, shaping information, communication, and access to knowledge. Yet their algorithms remain opaque, their governance remains private, with minimal democratic accountability to the public who depend on them; effectively ceding aspects of democratic process to commercial interests. The monopolization of digital spaces has turned democracy into something the highest bidder can buy and is degrading the digital public goods themselves. As the AI industry, social media and search platforms grow more extractive and less trustworthy, they erode the foundations of democratic life: trust, dialogue, and accountability, blurring the line between truth and falsehood. An example is the deepfake video falsely showing President Catherine Connolly withdrawing from the presidential race last October, which amassed over 160,000 Facebook views before being removed. GenAI's non-deterministic, stochastic architecture produces plausible output without regard for accuracy or truth. This makes generative AI a societal disaster and a major threat to truth, democratic processes, information ecosystems, knowledge production, and the social fabric
Reads: For truth, democracy, and the rule of law to endure in the AI era, we need to cultivate an ecosystem of transparency and accountability. Yet governance by algorithms inherently places our digital public squares and democratic processes in the hands of those building these systems in line with their political and profit-seeking agendas. Without real mechanisms in place, talk of transparency and accountability is an empty gesture. An internal Meta memo outlining plans to launch facial recognition in smart glasses "during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns" illustrates how those advocating for accountability are under-resourced, retaliated against, and targeted. Large tech and AI companies, despite selling promises of innovation and societal benefit, monetize and undermine the very society they claim to serve. What is needed is not just regulation, but active enforcement. Given the track record of tech giants, stricter regulation and enforcement is not "anti-freedom of speech" or anti-competitiveness. It is one of the clearest ways governments can show they serve the public interest. After all, innovation that disregards truth and democratic processes risks undermining democracy itself.
I appeared as an expert witness before the Joint Committee on AI at the Houses of the Oireachtas (the parliament of Ireland) to discuss "AI: truth and democracy" this morning. You can read my opening statement here: www.oireachtas.ie/en/publicati...
Please sir, can I have some less (of 2026)
One for @kevlin.bsky.social
Did you all know that Margaret Boden founded the first School of Cognitive and Computing Sciences in 1987?
No?
Well, now you do
Btw, listen to this great talk by her. I'll be adding this to the resources for our 1st-year students in Intro to AI.
www.youtube.com/watch?v=wPRA...
It's ok, we could prompt again to produce a new, improved one /s PS: would either pass an MOT/DMV/TÜV inspection?
Reset
A-series for life
Just when I thought 2026 was calming down /s
every single person writing credulously about "elon will put people on the moon" should have to pin this post to their monitor
There is no way to fix the core problem, which is that the statistical production of symbols by definition is not based on their meaning
Would be interesting to compare the results on more recent models - but this problem wonβt go away. LLMs are always going to be extrapolating from what has already, and often, been thought, which is why they arenβt windows to the future but anchors to the past.
"Off the rails" means preventing them from missing a single drop of profit.
This was a really nicely laid out and well written manual. I still have fond memories of it.
XP was pretty solid by the time it got to SP3. If you leave it, maybe it'll outlive Windows 11?