Get they asses Reese.
AI research:
Researcher: Claude, please eat ten hamburgers.
Claude: Done! I have eaten ten hamburgers. The first two were delicious, but after that I began to experience bloating and the meat sweats.
Headline: Anthropic Says Claude has "A Fully Developed Digestive System"
Trump and Hegseth know exactly what happened with the bombing of the girls school. This “investigation” is a smoke screen because they know the attention span of our media cycles, so they will drag it out until they stop getting asked about it. Which is usually 48 hours or so.
The leopards came for Dan.
Creepy thing is that GPT didn’t invent this model—this is essentially the apotheosis of neoliberal ed reformists’ vision for schooling
On March 6, the Indonesian government announced plans to “delay access” to social media accounts for children under 16. Learn more about these plans in our Global Social Media Age Restriction Tracker, which now tracks efforts to restrict or ban teens from social media in 42 countries.
Well, imagine that …
1) lol dying
2) how was Molly posting about this a month ago and everybody in crypto just figured it out in the last 24 hours
When you Pottery Barn the world, without full control of the global supply of super glue, you have to improvise… www.theguardian.com/us-news/2026...
He fired Kristi Noem while she is speaking live. Lmao
TRUMP SAYS HE NEEDS TO BE PERSONALLY INVOLVED IN SELECTING IRAN'S NEXT LEADER -AXIOS
Another member of the chat, William Bejerano — who tried to start a pro-life group at Miami Dade College — was the primary user of the n-word in the group. At one point, he posted a block of text calling for dozens of acts of extreme violence against Black people, who he referred to using the n-word, including crucifying, beheading and dissecting people. Bejerano hung up the phone when reached by the Herald.
Dariel Gonzalez, the College Republicans’ recruitment chairman at the time, responded in the chat: “How edgy.” “Ew you had colored professors?!” Gonzalez wrote at another point. “I reguse [sic] to be indoctrinated by the coloreds.” He told the group he used the term “colored” because, “I was told we cant say black anymore.” A couple days later, he added: “Avoid the coloreds like the plague.” He did not respond to a request for comment. The group chat members — which included some women — also frequently discussed sex, sometimes describing women as “whores” and at one point using the k-word, a slur for Jewish people, to describe women they avoid.
Fun fact, fox news once tried to sic a mob on me because I wrote that being hispanic didn’t mean you couldn’t be a white supremacist, because race is subjective and many hispanics identify as white. Completely unrelated here’s how some Miami Republicans talk in private
cannot stress enough how absurd it is that our government officials are mocking Iranian leadership for being delusional theocrats when they themselves genuinely believe this war is a mission from God to prepare the holy land for the Book of Revelations
It is unclear what a potential terrorist could do that is worse than what the country is already voluntarily doing
I have seen a lot of cursed stuff in my time in academia but this is among the *most* cursed.
Grammarly is generating miniature LLMs based on academic work so that users can have their writing ‘reviewed’ by experts like David Abulafia, who died less than two months ago.
As @kirahopkins.bsky.social & I will gesture towards later, no form of licensing seems to really offer much in the way of legal protection from LLMs. I am broadly critical of this, but I am more curious about what that says about the nature of IP knowledge for scholarly work in digital contexts. #OxFOS26
just thinking about how USAID was dismantled under the rationale of cost savings but there's an unlimited budget for war
On February 22, 2026, Naval Ravikant tweeted: "If you're unwilling to defend your country in time of war, then you're a tourist, not a citizen." As of March 3, 2026, the tweet has been deleted.
deleted his tweet as soon as an actual war popped off
They have religious exemptions. That means the parents are liars, tbc, because no major religions object to vaccines…but they do usually have moral objections to lying!
People lying ab their religion are putting other people’s kids & the broader community at risk in schools w/21% MMR uptake rates.
Ha! The original Lancet article on the dangers of reading in bed is here: doi.org/10.1016/S014...
Slight tweak: SCOTUS should be expanded to 17 justices, with 17 year term limits, one appointment per year—that way no single president can appoint a majority to the court.
(Also: if a justice departs before end of term, the remaining justices appoint a sitting federal judge to complete the term.)
New: Internal tension at the Associated Press over use of AI. One of the AP newsroom leaders leading the company's AI initiatives told staff that many editors preferred an AI-written article to a human one, and told them when it comes to using AI in the newsroom "resistance is futile."
Bloomberg.com article about Anthropic proposing to produce technology for voice controlled autonomous drone swarming for a pentagon contest
Lest anyone be too quick to give Anthropic the moral high ground
Last July, Anthropic agreed to ink a $200 million contract with the Pentagon, allowing the department broad-based use of its Claude model as the two prospective partners gradually worked out the final terms of engagement. Those were supposed to get etched last week—only for Anthropic to undergo a decisive test for its oft-professed ethical boundaries. Defense Secretary Pete Hegseth demanded that his team be allowed to deploy Claude’s software in whatever manner they deemed pertinent, including applications for domestic surveillance and fully autonomous weaponry, which were “red lines” for Anthropic CEO Dario Amodei. On Friday afternoon, President Donald Trump ordered a six-month phaseout of all uses of Claude at the federal level. Hegseth then designated Anthropic a supply-chain risk to national security, all but forbidding any military contractors from doing future business with the company. A federal contract was subsequently bestowed upon Anthropic rival OpenAI, which unconvincingly claimed that it would try to safeguard tools like ChatGPT from use in population surveillance and autonomous weapons. The fallout for Anthropic has been remarkable. It’s the first-ever American company to be deemed a supply-chain risk, which means it’s already lost several users across the federal government. But something even stranger emerged in the aftermath: a lotta liberal goodwill. Social media campaigners encouraged their followers, even the A.I. skeptics, to download Claude en masse. Extremely online observers came up with bizarre metaphors to characterize Anthropic’s heroism, and pushed Claude to the top of the app-store charts over the weekend. By Monday morning, there was a Claude service outage that Anthropic attributed to “unprecedented demand” for its products. Even Sen. Brian Schatz and Katy Perry got in on the whole thing. (The fact that American commandos had still used Claude to plan the Saturday strikes on Iran did not appear to faze many of these folks.) Meanwhil…
Anthropic Is Fully Supportive Of The US Military Using Claude In The War In Iran, Wants To Help Governments Go To War And Kill People, And Wants You To Believe Otherwise
Two top pieces about Anthropic's really shocking PR win in falsely presenting themselves as a #Resistance hero
@nitishpahwa.com -> slate.com/technology/2...
@edzitron.com -> www.wheresyoured.at/the-ai-bubbl...
A screenshot of the Polymarket website showing a prediction market titled "Nuclear weapon detonation by...?" under the Geopolitics and Ukraine categories. The market displays three betting options with their current probabilities and "Buy Yes" or "Buy No" prices: "March 31" at 5%, "June 30" at 12%, and "Before 2027" at 22%. The total trading volume shown is over $843,000.
Polymarket has created a betting market on the use of nuclear weapons. Everyone involved in this should be put in prison for life.
I said this a while ago but it bears repeating:
The U.S. right is fully captured by a) christian fascist accelerationists seeking to start the Biblical Apocalypse, & b) those actively manipulating (a) for profit.
Everything they claim to care about will last only so long as it serves (a) or (b).
Absolutely too busy to lead something for this, but this call for papers looks great...
> CfP: Topical Collection on AI Resistance, Refusal, Reclamation and Reimagining: Ethical Imperatives and Emerging Practices
In 2024 the San Francisco-based Anthropic deployed its model across the US Department of War and other national security agencies to speed up war planning. Claude became part of a system developed by the war-tech company Palantir with the Pentagon to “dramatically improve intelligence analysis and enable officials in their decision-making processes”. “The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought,” said Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains. “So you’ve got scale and you’ve got speed, you’re [carrying out the] assassination-style strikes at the same time as you’re decapitating the regime’s ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you’re doing everything at once.” The latest AI systems can rapidly analyse mountains of information on potential targets from drone footage to telecommunications interceptions as well as human intelligence. Palantir’s system uses machine learning to identify and prioritise targets and recommend weaponry, accounting for stockpiles and previous performance against similar targets. It also uses automated reasoning to evaluate legal grounds for a strike.
“This is the next era of military strategy and military technology,” said David Leslie, professor of ethics, technology and society at Queen Mary University of London, who has observed demonstrations of AI military systems. He also warned that reliance on AI can result in “cognitive off-loading”. Humans tasked with making a strike decision can feel detached from its consequences because the effort to think it through has been made by a machine. On Saturday 165 people, many of them children, were killed in a missile strike that hit a school in southern Iran, according to state media. It appeared to be close to a military barracks and the UN called it “a grave violation of humanitarian law”. The US military has said it is looking into the reports.
In the days before the Iran strikes, the US administration had said it would banish Anthropic from its systems after it refused to allow its AI to be used for fully autonomous weapons or surveillance of US citizens. But it remains in use until it is phased out. Anthropic’s rival, OpenAI, quickly signed its own deal with the Pentagon for military use of its models. “The advantage is in the speed of decision-making, the collapsing of planning from what might have taken days or weeks before to minutes or seconds,” said Leslie. “These systems produce a set of options for human decision makers but [they’ve] got a much narrower time band … to evaluate the recommendation.” “The deployment of AI is expanding,” said Prerana Joshi, research fellow at the Royal United Services Institute, a defence thinktank. “It is being done across countries’ defence estates … across logistics, training, decision management, maintenance.” She added: “AI is a technology that will allow decision makers, and anyone in that chain, to improve the productivity and efficiency of what they do. It’s a way of synthesising data at a much faster pace that is helpful to decision makers.”
This article and the academics quoted are a stunning illustration of how both media and academia have fundamentally failed to recognise how a random number generator is being used to widen the already-fucking-wide permission space for mass murder
Both now helping that project
archive.ph/wip/RlMO5