happy to see a proper thin slice and seasoned tomato, but I say it doesn't matter what your wife says back in bed about your breath; add a bunch of finely diced white onion to that mayo
Those school girls and musicians from the Iranian navy were people who thought they’d be able to live.
so mentioning someone who isn't right-wing being OK with listening to Candace Owens on Israel was just about her personally. OK, cool
some people are saying "this sounds like AI" but I have to stop you there. this is AI edited by an illiterate.
puff up, stand your ground and throw your feces at them is your best bet
appreciate you correcting me. I should have taken time to watch before responding. probably a good idea to have put that particular country in the title considering current events
Jeffrey Epstein was in bed with the Russians - but not with Israel of course, nothing to see there
sounds like a reflection of how obvious it is that there is a bipartisan capture of our ruling class by Israel, the Saudis, and the Gulf monarchs. It's getting pretty annoying to have to wade through endless accusations of bigotry and antisemitism just to be able to address something so in our face
*cough cough*
bsky.app/profile/ahjo...
OpenAI and…the United States Military…are teaming up to notify YOU that your son has been KILLED IN ACTION. Click here to speak with your son via AI one last time
Séamus Malekafzali (@Seamus_Malek), Mar 5, 2026: Israeli opposition leader Yair Lapid says that Israel should depopulate and destroy every village in southern Lebanon, with the Yellow Line in Gaza as the model, and that "it may not be pleasant to scrape off two or three Lebanese villages, but they brought this on themselves."
Well, the good news is that the Israeli opposition isn't actually all that crazy, right.
Picture unrelated btw
SCOOP: Proton Mail provided Swiss authorities with payment data that the FBI then used to determine who was allegedly behind an anonymous account affiliated with the Stop Cop City movement in Atlanta, according to a court record reviewed by 404 Media.
the European mind cannot comprehend the concept of the Good Rx prescription drug coupon
even when there are reporters doing their job and stories being broken, they have to launder their propaganda into it and try to shape public perception in service of ghouls. longest tradition of the paper
“The media will credit her fall to some shady no-bid contract she was behind, her use of a private jet, or administration rivals like Stephen Miller and whatever boring DC drama. But the real reason is obvious: public activism.”
the people of Minneapolis are a shining light to the rest of the country
In Gaza, Israel and Biden normalized war crimes beyond anything we have seen before.
And now, Israel and Trump are using that model against Iranian civilians.
At least 13 hospitals and health facilities hit during attacks on Iran, WHO says
www.theguardian.com/global-devel...
ICE has arrested and detained a Nashville journalist who reported stories critical of ICE. She’s married to a U.S. citizen and has been seeking asylum here after fleeing death threats in Colombia because of her journalism there.
They’ve already sent her to Louisiana.
Rubio: “Let me tell you, Iran is run by lunatics, religious fanatic lunatics.”
Guaranteed this shit is currently plagiarizing Delta Green discussions stolen off forums and reddit and absolutely fucking cooking someone's brain into planning on murdering cultists
lol, lmao even bsky.app/profile/gall...
one of the worst color combos possible for text
They’re still lying to us.
www.startribune.com/close-to-650...
thumbs down on that
congrats bsky.app/profile/atru...
that's all I am seeing, not sure about you
The funniest outcome of this chaos is that many people are very, very angry at Sam Altman and OpenAI, assuming that ChatGPT was somehow used in the conflict in Iran, and that Amodei and Anthropic somehow took a stand against a war they used as a means of generating revenue. In reality, we should loathe both Altman and Amodei for their natural jingoism and continual deception.

Amodei and Anthropic timed their defiance of the Department of Defense to make it seem like their "red lines" were related to the war. I think it's good they have those red lines, but remember: those red lines do not involve stopping a war that threatens the lives of millions of people. Amodei supports that. Anthropic both supports and enables that. Altman, on the other hand, is a slimy little creep who wants you to believe he signed the same deal Anthropic wanted, when he actually signed one that allows "any lawful use."

In both cases, these men are enthusiastic to work with a part of the government calling itself the Department of War. Both are willing and able to provide technology that will surveil or kill people, and while Amodei may have blushed at something to do with autonomous weapons or domestic surveillance, neither appears to have an issue with the actual harms their models perpetuate.

Remember: Anthropic just pitched its technology as part of an ongoing Department of Defense drone swarm contest. It loves war! Its only issue was that there wasn't a human in the loop somewhere.
Both OpenAI's and Anthropic's specious hype cycles and lies will lead to death and destruction at the hands of unreliable, hallucination-prone LLMs.
Untrustworthy "AI" is being used to distance warmongers from their decisions.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/
Stinky, nasty, duplicitous conman Sam Altman smelled blood amidst these negotiations and went in for the kill, striking a deal on Friday with the Pentagon for ChatGPT and OpenAI's other models to be used in the military's classified systems, with initial reports saying that it had "similar guardrails to those requested by Anthropic." In a post about the contract, Clammy Sammy said that the DoD displayed "a deep respect for safety and a desire to partner to achieve the best possible outcome," adding:

"AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

Undersecretary Jeremy Levin almost immediately countered this notion, saying that the contract "...flows from the touchstone of 'all lawful use.'" This quickly created a diplomatic incident, where OpenAI decided that the best time to discuss the contract was an entire Saturday, and that the way to discuss it was posting. It shared some details on the contract, which included the fatal phrase that the Department of Defense "...may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols."
Let's be explicit: Anthropic fully, enthusiastically supports war, and Claude was used in some capacity in the war in Iran, even after it was banned. Now OpenAI has signed with the Pentagon, supporting "all lawful uses" of ChatGPT in war.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/
Let's be explicit: Anthropic's Claude (and its various models) is fully approved for use in the military, and, to quote its own blog post, "has supported American warfighters since June 2024 and has every intention of continuing to do so." To be explicit about what "support" means, I'll quote the Wall Street Journal:

"Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropic's Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran. The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations."

In reality, Claude is likely being used to go through a bunch of images and to answer questions about particular scenarios. There is very little specialized military training data, and I imagine many of the demands for "full access to powerful AI" have come as a result of Amodei and Altman's bloviating about the "incredible power of AI." More than likely, Centcom and the rest of the military pepper it with questions that allow it to justify acts that blow up schools, kill US servicemembers, and threaten to continue the forever war that has killed millions of people and thrown the Middle East into near-permanent disarray.

Nevertheless, Dario Amodei gets fawning press about being a patriot who deeply cares about safety, less than a week after Anthropic dropped its safety pledge not to train an AI system unless it could guarantee in advance that its safety measures were accurate.
The reality of the negotiations was a little simpler, per the Atlantic. The Department of Defense had agreed to terms around not using Claude for mass domestic surveillance or fully autonomous killing machines (the former of which it's not particularly good at, and the latter of which it flat-out cannot do), but, well, it actually very much intended to use Claude for domestic surveillance anyway:

"On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company's AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic's leadership told Hegseth's team that was a bridge too far, and the deal fell apart."

Now, I'm about to give you another quote about autonomous weapons, and I really want you to pay attention to where I emphasize certain things for a subtle clue about Anthropic's ethics:

"Anthropic had not argued that such weapons should not exist. To the contrary, the company had offered to work directly with the Pentagon to improve their reliability. Just as self-driving cars are now in some cases safer than those driven by humans, killer drones may some day be more accurate than a human operator, and less likely to kill bystanders during an attack. But for now, Anthropic's leaders believe that their AI hasn't yet reached that threshold. They worry that the models could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger civilians or even American troops themselves."

So, let's be clear: Anthropic wants to help the military make more accurate kill drones, and in fact loves them. One might take this to be somewhat altruistic — Dario Amodei doesn't want the US military to hit civilians — but remember: Anthropic is totally fine with the US military using Claude for anything …
The AI industry wants us to believe it’s more successful than it is, that LLMs are more powerful than they are, and that AI labs are "safety focused" when both Anthropic and OpenAI enthusiastically support using their tech to kill people.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/