Ber Mtuni

@gaijen

211
Followers
885
Following
4
Posts
05.12.2024
Joined

Latest posts by Ber Mtuni @gaijen

Get they asses Reese.

07.03.2026 17:01 👍 371 🔁 83 💬 4 📌 5

AI research:

Researcher: Claude, please eat ten hamburgers.

Claude: Done! I have eaten ten hamburgers. The first two were delicious, but after that I began to experience bloating and the meat sweats.

Headline: Anthropic Says Claude has "A Fully Developed Digestive System"

07.03.2026 00:20 👍 1014 🔁 306 💬 10 📌 9

Trump and Hegseth know exactly what happened with the bombing of the girls school. This “investigation” is a smoke screen because they know the attention span of our media cycles, so they will drag it out until they stop getting asked about it. Which is usually 48 hours or so.

07.03.2026 00:11 👍 2710 🔁 745 💬 108 📌 24

The leopards came for Dan.

07.03.2026 00:31 👍 6335 🔁 950 💬 473 📌 84

Creepy thing is that GPT didn’t invent this model—this is essentially the apotheosis of neoliberal ed reformists’ vision for schooling

06.03.2026 23:59 👍 30 🔁 11 💬 4 📌 0
Tracking Efforts To Restrict Or Ban Teens from Social Media Across the Globe
Several countries have passed—and many others are considering—restrictions or bans on teens accessing social media platforms.

On March 6, the Indonesian government announced plans to “delay access” to social media accounts for children under 16. Learn more about these plans in our Global Social Media Age Restriction Tracker, which now tracks efforts to restrict or ban teens from social media in 42 countries.

06.03.2026 23:54 👍 9 🔁 6 💬 0 📌 0

Well, imagine that …

06.03.2026 15:28 👍 6718 🔁 2213 💬 269 📌 131

1) lol dying
2) how was Molly posting about this a month ago and everybody in crypto just figured it out in the last 24 hours

06.03.2026 17:26 👍 473 🔁 101 💬 8 📌 1
US grants waiver to allow India to buy Russian oil amid Iran war
‘Stopgap measure’ designed to keep oil flowing into global market as Middle East crisis disrupts crude shipments

When you Pottery Barn the world, without full control of the global supply of super glue, you have to improvise… www.theguardian.com/us-news/2026...

06.03.2026 03:06 👍 113 🔁 39 💬 8 📌 8

He fired Kristi Noem while she is speaking live. Lmao

05.03.2026 19:15 👍 3429 🔁 734 💬 243 📌 133

TRUMP SAYS HE NEEDS TO BE PERSONALLY INVOLVED IN SELECTING IRAN'S NEXT LEADER -AXIOS

05.03.2026 16:18 👍 110 🔁 18 💬 5 📌 6
Another member of the chat, William Bejerano — who tried to start a pro-life group at Miami Dade College — was the primary user of the n-word in the group. At one point, he posted a block of text calling for dozens of acts of extreme violence against Black people, who he referred to using the n-word, including crucifying, beheading and dissecting people. Bejerano hung up the phone when reached by the Herald.

Dariel Gonzalez, the College Republicans’ recruitment chairman at the time, responded in the chat: “How edgy.” “Ew you had colored professors?!” Gonzalez wrote at another point. “I reguse [sic] to be indoctrinated by the coloreds.” He told the group he used the term “colored” because, “I was told we cant say black anymore.” A couple days later, he added: “Avoid the coloreds like the plague.” He did not respond to a request for comment. The group chat members — which included some women — also frequently discussed sex, sometimes describing women as “whores” and at one point using the k-word, a slur for Jewish people, to describe women they avoid.

Fun fact, fox news once tried to sic a mob on me because I wrote that being hispanic didn’t mean you couldn’t be a white supremacist, because race is subjective and many hispanics identify as white. Completely unrelated here’s how some Miami Republicans talk in private

05.03.2026 13:03 👍 1830 🔁 351 💬 50 📌 13

cannot stress enough how absurd it is that our government officials are mocking Iranian leadership for being delusional theocrats when they themselves genuinely believe this war is a mission from God to prepare the holy land for the Book of Revelation

03.03.2026 19:09 👍 1840 🔁 456 💬 71 📌 24
US troops were told war on Iran was ‘all part of God’s divine plan’, watchdog alleges
Religious freedom group says 200 troops sent complaints of superiors using extremist Christian rhetoric to justify war

www.theguardian.com/world/2026/m...

03.03.2026 22:22 👍 45 🔁 16 💬 4 📌 3

It is unclear what a potential terrorist could do that is worse than what the country is already voluntarily doing

04.03.2026 10:24 👍 44 🔁 11 💬 2 📌 0

I have seen a lot of cursed stuff in my time in academia but this is among the *most* cursed.
Grammarly is generating miniature LLMs based on academic work so that users can have their writing ‘reviewed’ by experts like David Abulafia, who died less than two months ago.

03.03.2026 11:58 👍 3525 🔁 1542 💬 97 📌 284

As @kirahopkins.bsky.social & I will gesture towards later, no form of licensing seems to really offer much in the way of legal protection from LLMs. I am broadly critical of this, but I am more curious about what that says about the nature of IP and scholarly knowledge in digital contexts. #OxFOS26

04.03.2026 10:33 👍 2 🔁 1 💬 1 📌 0

just thinking about how USAID was dismantled under the rationale of cost savings but there's an unlimited budget for war

04.03.2026 06:43 👍 1931 🔁 484 💬 36 📌 24
On February 22, 2026, Naval Ravikant tweeted: "If you're unwilling to defend your country in time of war, then you're a tourist, not a citizen."

As of March 3, 2026, the tweet has been deleted.

deleted his tweet as soon as an actual war popped off

04.03.2026 02:54 👍 10906 🔁 1088 💬 149 📌 40

They have religious exemptions. That means the parents are liars, tbc, because no major religions object to vaccines…but they do usually have moral objections to lying!

People lying ab their religion are putting other people’s kids & the broader community at risk in schools w/21% MMR uptake rates.

04.03.2026 07:07 👍 178 🔁 54 💬 14 📌 0

Ha! The original Lancet article on the dangers of reading in bed is here: doi.org/10.1016/S014...

03.03.2026 18:18 👍 128 🔁 67 💬 6 📌 11

Slight tweak: SCOTUS should be expanded to 17 justices, with 17 year term limits, one appointment per year—that way no single president can appoint a majority to the court.

(Also: if a justice departs before end of term, the remaining justices appoint a sitting federal judge to complete the term.)

04.03.2026 08:29 👍 17 🔁 3 💬 0 📌 1

New: Internal tension at the Associated Press over use of AI. One of the newsroom leaders driving the company's AI initiatives told staff that many editors preferred an AI-written article to a human one, and told them that when it comes to using AI in the newsroom, "resistance is futile."

04.03.2026 03:03 👍 581 🔁 171 💬 95 📌 307
Bloomberg.com article about Anthropic proposing to produce technology for voice controlled autonomous drone swarming for a pentagon contest

Lest anyone be too quick to give Anthropic the moral high ground

04.03.2026 03:46 👍 22 🔁 16 💬 0 📌 0
Last July, Anthropic agreed to ink a $200 million contract with the Pentagon, allowing the department broad-based use of its Claude model as the two prospective partners gradually worked out the final terms of engagement. Those were supposed to get etched last week—only for Anthropic to undergo a decisive test for its oft-professed ethical boundaries.

Defense Secretary Pete Hegseth demanded that his team be allowed to deploy Claude’s software in whatever manner they deemed pertinent, including applications for domestic surveillance and fully autonomous weaponry, which were “red lines” for Anthropic CEO Dario Amodei. On Friday afternoon, President Donald Trump ordered a six-month phaseout of all uses of Claude at the federal level. Hegseth then designated Anthropic a supply-chain risk to national security, all but forbidding any military contractors from doing future business with the company. A federal contract was subsequently bestowed upon Anthropic rival OpenAI, which unconvincingly claimed that it would try to safeguard tools like ChatGPT from use in population surveillance and autonomous weapons.

The fallout for Anthropic has been remarkable. It’s the first-ever American company to be deemed a supply-chain risk, which means it’s already lost several users across the federal government. But something even stranger emerged in the aftermath: a lotta liberal goodwill. Social media campaigners encouraged their followers, even the A.I. skeptics, to download Claude en masse. Extremely online observers came up with bizarre metaphors to characterize Anthropic’s heroism, and pushed Claude to the top of the app-store charts over the weekend. By Monday morning, there was a Claude service outage that Anthropic attributed to “unprecedented demand” for its products. Even Sen. Brian Schatz and Katy Perry got in on the whole thing. (The fact that American commandos had still used Claude to plan the Saturday strikes on Iran did not appear to faze many of these folks.) Meanwhil…

Anthropic Is Fully Supportive Of The US Military Using Claude In The War In Iran, Wants To Help Governments Go To War And Kill People, And Wants You To Believe Otherwise

Two top pieces about Anthropic's really shocking PR win in falsely presenting themselves as a #Resistance hero

@nitishpahwa.com -> slate.com/technology/2...

@edzitron.com -> www.wheresyoured.at/the-ai-bubbl...

04.03.2026 09:25 👍 95 🔁 48 💬 2 📌 1
04.03.2026 00:59 👍 7502 🔁 1529 💬 90 📌 35
A screenshot of the Polymarket website showing a prediction market titled "Nuclear weapon detonation by...?" under the Geopolitics and Ukraine categories. The market displays three betting options with their current probabilities and "Buy Yes" or "Buy No" prices: "March 31" at 5%, "June 30" at 12%, and "Before 2027" at 22%. The total trading volume shown is over $843,000.

Polymarket has created a betting market on the use of nuclear weapons. Everyone involved in this should be put in prison for life.

04.03.2026 01:05 👍 8121 🔁 2407 💬 260 📌 525

I said this a while ago but it bears repeating:

The U.S. right is fully captured by a) christian fascist accelerationists seeking to start the Biblical Apocalypse, & b) those actively manipulating (a) for profit.

Everything they claim to care about will last only so long as it serves (a) or (b).

03.03.2026 22:22 👍 363 🔁 126 💬 6 📌 4

Absolutely too busy to lead something for this, but this call for papers looks great...

> CfP: Topical Collection on AI Resistance, Refusal, Reclamation and Reimagining: Ethical Imperatives and Emerging Practices

03.03.2026 15:46 👍 36 🔁 14 💬 1 📌 0
In 2024 the San Francisco-based Anthropic deployed its model across the US Department of War and other national security agencies to speed up war planning. Claude became part of a system developed by the war-tech company Palantir with the Pentagon to “dramatically improve intelligence analysis and enable officials in their decision-making processes”.

“The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought,” said Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains. “So you’ve got scale and you’ve got speed, you’re [carrying out the] assassination-style strikes at the same time as you’re decapitating the regime’s ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you’re doing everything at once.”

The latest AI systems can rapidly analyse mountains of information on potential targets from drone footage to telecommunications interceptions as well as human intelligence. Palantir’s system uses machine learning to identify and prioritise targets and recommend weaponry, accounting for stockpiles and previous performance against similar targets. It also uses automated reasoning to evaluate legal grounds for a strike.

“This is the next era of military strategy and military technology,” said David Leslie, professor of ethics, technology and society at Queen Mary University of London, who has observed demonstrations of AI military systems. He also warned that reliance on AI can result in “cognitive off-loading”. Humans tasked with making a strike decision can feel detached from its consequences because the effort to think it through has been made by a machine.

On Saturday 165 people, many children, were killed in a missile strike that hit a school in southern Iran, according to state media. It appeared to be close to a military barracks and the UN called it “a grave violation of humanitarian law”. The US military has said it is looking into the reports.

In the days before the Iran strikes, the US administration had said it would banish Anthropic from its systems after it refused to allow its AI to be used for fully autonomous weapons or surveillance of US citizens. But it remains in use until it is phased out. Anthropic’s rival, OpenAI, quickly signed its own deal with the Pentagon for military use of its models.

“The advantage is in the speed of decision-making, the collapsing of planning from what might have taken days or weeks before to minutes or seconds,” said Leslie. “These systems produce a set of options for human decision makers but [they’ve] got a much narrower time band … to evaluate the recommendation.”

“The deployment of AI is expanding,” said Prerana Joshi, research fellow at the Royal United Services Institute, a defence thinktank. “It is being done across countries’ defence estates … across logistics, training, decision management, maintenance.”

She added: “AI is a technology that will allow decision makers, and anyone in that chain, to improve the productivity and efficiency of what they do. It’s a way of synthesising data at a much faster pace that is helpful to decision makers.”

This article and the academics quoted are a stunning illustration of how both media and academia have fundamentally failed to recognise how a random number generator is being used to widen the already-fucking-wide permission space for mass murder

Both now helping that project

archive.ph/wip/RlMO5

03.03.2026 10:30 👍 113 🔁 46 💬 3 📌 3