
IHTI

@1ht1

Mathematics & CS Researcher

125 Followers · 148 Following · 35 Posts · Joined 13.03.2024

Latest posts by IHTI @1ht1

"AI is excellent at tasks outside of my sphere of expertise that I can't evaluate in-depth, but terrible at things I do and can identify the flaws with." is the universal LLM experience. It's good at generating code, but the code it produces isn't inherently optimal or often even good. Same w/prose.

04.03.2026 14:46 👍 53 🔁 6 💬 1 📌 3

I loved this recent comment by ExtremelyBitter (@extremelybitter.bsky.social):

08.03.2026 01:32 👍 6 🔁 2 💬 0 📌 0

someone at the pentagon frantically typing “Claude, open the strait of Hormuz for me, quickest possible strategy, make no mistakes.”

09.03.2026 04:33 👍 7803 🔁 1197 💬 158 📌 63

Reported this two weeks ago

06.03.2026 20:24 👍 686 🔁 93 💬 16 📌 5

[guttural groan] Stove! We’re going to touch it - can we touch it? Are we allowed to touch it? - there’s only one way to tell if the stove has gone on and that’s to use your hand

09.03.2026 01:57 👍 562 🔁 43 💬 11 📌 0
Polymarket Pulls Bet on Nuclear Detonation in 2026 ‘How ghoulish.’ The depravity economy moves into the nuclear war business.

Polymarket's CEO Shayne Coplan has repeatedly called the site "the future of news." So why, if the platform has allowed other nuclear war markets and bets on the Iran war, did it pull this wager?
404media.co/polymarket-p...

07.03.2026 19:35 👍 126 🔁 26 💬 14 📌 11

The gig economy -> the surveillance economy -> the depravity economy....

It's all just the same economy. This is how our economy works. It's all just capitalism and the only fix is going to come from acknowledging the system doesn't work.

07.03.2026 20:00 👍 62 🔁 15 💬 0 📌 1

The DOJ released evidence, which the FBI deemed credible, that the President of the United States sexually and physically assaulted a 13 year old girl AND she was afraid to talk about it because Trump would kill her—and it’s somehow not the biggest scandal in American history?

09.03.2026 00:46 👍 8394 🔁 3091 💬 141 📌 115
Current and former Block workers say AI can’t do their jobs after Jack Dorsey’s mass layoffs: ‘You can’t really AI that’ The CEO said he cut the company’s workforce by 4,000 people – almost in half – because of gains in AI productivity

www.theguardian.com/technology/2...

08.03.2026 13:50 👍 4 🔁 5 💬 0 📌 0

All these folks getting hooked on subsidized LLM tokens are gonna be in for a rude awakening when that bill comes due

08.03.2026 18:29 👍 2 🔁 1 💬 0 📌 0
People Have the Right to Refuse AI Britt Paris is the author of Radical Infrastructure: Imagining the Internet from the Ground Up, a new book published by the University of California Press.

You don’t have to participate in AI’s massive hype inflation, writes critical informatics scholar Britt S. Paris. You have a right to refuse the ‘inevitable’.

08.03.2026 19:33 👍 89 🔁 22 💬 1 📌 1
"Whenever someone starts talking about the 'free market,' it's a good idea to look around for the man with the gun. He's never far away."

From David Graeber's Utopia Of Rules.

08.03.2026 20:08 👍 146 🔁 56 💬 1 📌 3

Chris Murphy: "I think it's likely the United States that carried out this attack on this school. I think it's unforgivable under any circumstances, but the fact this was one of our first targeting decisions speaks to the incompetence of our leadership at the Dept of Defense."

08.03.2026 13:38 👍 24051 🔁 7399 💬 880 📌 394
Exclusive: Researchers trick a bot that prescribes meds The state of Utah is running a pilot with Doctronic's AI system to refill some prescriptions.

The jailbreak was done on the company’s public bot, not the one inside the state system, but researchers “were able to make the bot spread vaccine conspiracy theories, triple a patient's prescribed pain medication dosage, and recommend methamphetamine as treatment.”

04.03.2026 02:09 👍 131 🔁 55 💬 3 📌 13

kamala harris endorsement of jasmine crocket flopped, she really needs to look in the mirror and give up on politics lol

04.03.2026 03:45 👍 3741 🔁 360 💬 67 📌 20
Kash Patel’s latest firings ousted agents with expertise in Iran The FBI director gutted a specialized, global espionage unit of counterintelligence agents, just days before Operation Epic Fury.

When FBI Director Kash Patel fired a dozen FBI agents and staff last week for their role in the classified documents investigation of Donald Trump, he targeted an elite counterespionage unit that investigates threats from foreign adversaries and specializes in Iran. www.ms.now/news/kash-pa...

03.03.2026 15:03 👍 2309 🔁 1014 💬 115 📌 130
The AI Bubble Is An Information War

Free newsletter: The AI bubble's info war demands that we believe in things that aren’t true, like the economic fundamentals of the AI industry are perfectly sound, or that Sam Altman and Dario Amodei are anything other than warmongers.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/

03.03.2026 18:26 👍 925 🔁 218 💬 16 📌 7

can't wait for more of these things to happen as oracle begins to collapse

03.03.2026 22:52 👍 507 🔁 59 💬 8 📌 1

this shit is driving me insane. OpenAI sucks don't get me wrong but Anthropic quite literally supports and provides software for the war in Iran. They are monetizing death and destruction. OpenAI's dumbass strategy of "rushing in to take Anthropic's business" allowed Dario Amodei to lie!

04.03.2026 00:04 👍 333 🔁 57 💬 7 📌 2

Generative AI isn’t intelligent, but it allows people to pretend that it is, especially when the people selling the software — Altman and Amodei — so regularly overstate what it can do. 

By giving warmongers and jingoists the cover to “trust” this “authoritative” service — whether or not that trust is warranted, they can simply point to the specious press — the question of whether an attack was ethical is now, whenever any western democracy needs it to be, something that can be handed off to Claude, and justified with the cold, logical framing of “intelligence” and “data.” 

None of this would be possible without the consistent repetition of the falsehoods peddled by OpenAI and Anthropic. Without this endless puffery and overstatements about the “power of AI,” we wouldn’t have armed conflicts dictated by what a chatbot can burp up from the files it’s fed. The deaths that follow will be a direct result of those who choose to continue to lie about what an LLM does. 

Make no mistake, LLMs are still incapable of unique ideas and are still, outside of coding (which requires massive subsidies to even be kind of useful), questionable in their efficacy and untrustworthy in their outputs. Nothing about the military’s use of Claude makes it more useful or powerful than it was before — they’re probably just loading files into it and asking it long questions about things and going “huh” at the end. 

The vulgar dishonesty of Altman and Amodei puts blood on both of their hands, and it’s the duty of every single member of the media to remind people of this whenever you discuss their software. 

I get that you probably think I’m being dramatic, but tell me — do you think that the US military would’ve trusted LLMs had they not been marketed as capable of basically anything? Do you think any of this would’ve happened had there been an honest, realistic discussion of what AI can do today, and what it might do tomorrow? 

I guess we’ll never know, and the people blown …


The vulgar dishonesty of Altman and Amodei puts blood on both of their hands, and it’s the duty of every single member of the media to remind people of this whenever you discuss their software. They both love - and monetize! - war.

www.wheresyoured.at/the-ai-bubble-is-an-information-war/

03.03.2026 18:26 👍 128 🔁 27 💬 4 📌 0
The funniest outcome of this chaos is that many people are very, very angry at Sam Altman and OpenAI, assuming that ChatGPT was somehow used in the conflict in Iran, and that Amodei and Anthropic somehow took a stand against a war it used as a means of generating revenue. 

In reality, we should loathe both Altman and Amodei for their natural jingoism and continual deception. Amodei and Anthropic timed their defiance of the Department of Defense to make it seem like its “red lines” were related to the war. I think it’s good they have those red lines, but remember, those red lines do not involve stopping a war that threatens the lives of millions of people. Amodei supports that. Anthropic both supports and enables that. 

Altman, on the other hand, is a slimy little creep that wants you to believe that he signed the same deal as Anthropic wanted, but actually signed one that allows “any lawful use.” 

And in both cases, these men are both enthusiastic to work with a part of the government calling itself the Department of War. Both of them are willing and able to provide technology that will surveil or kill people, and while Amodei may have blushed at something to do with autonomous weapons or domestic surveillance, neither appear to have an issue with the actual harms that their models perpetuate. Remember: Anthropic just pitched its technology as part of an ongoing Department of Defense drone swarm contest. It loves war! Its only issue was that there wasn’t a human in the loop somewhere.


Both OpenAI and Anthropic's specious hype cycles and lies about LLMs will lead to death and destruction at the hands of unreliable, hallucination-prone LLMs.

Untrustworthy "AI" is being used to distance warmongers from their decisions.

www.wheresyoured.at/the-ai-bubble-is-an-information-war/

03.03.2026 18:26 👍 132 🔁 36 💬 5 📌 1
Stinky, nasty, duplicitous conman Sam Altman smelled blood amidst these negotiations and went in for the kill, striking a deal on Friday with the Pentagon for ChatGPT and OpenAI’s other models to be used in the military’s classified systems, with initial reports saying that it had “similar guardrails to those requested by Anthropic.” 

In a post about the contract, Clammy Sammy said that the DoD displayed “a deep respect for safety and a desire to partner to achieve the best possible outcome,” adding:

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.  The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
Undersecretary Jeremy Levin almost immediately countered this notion, saying that the contract “...flows from the touchstone of ‘all lawful use.’” This quickly created a diplomatic incident where OpenAI decided that the best time to discuss the contract was an entire Saturday and that the way to discuss it was posting. It shared some details on the contract, which included the fatal phrase that the Department of Defense “...may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”


Let's be explicit: Anthropic fully, enthusiastically supports war, and Claude was used in some capacity in the War In Iran, even after it was banned. Now OpenAI has signed with the Pentagon, supporting "all lawful uses" in war on ChatGPT.
www.wheresyoured.at/the-ai-bubble-is-an-information-war/

03.03.2026 18:26 👍 95 🔁 23 💬 3 📌 4

Make no mistake: Dario Amodei is a full-throated supporter of war, and Claude is being used in Iran as we speak. Dario Amodei believes part of Anthropic's purpose is "defeating autocratic adversaries," and otherwise "doesn't have views."
www.wheresyoured.at/the-ai-bubbl...

04.03.2026 00:09 👍 283 🔁 62 💬 7 📌 2

This is the era to take detailed notes on who the Business Idiots are. Every single person who bought into this at this scale has outed themselves as an untrustworthy individual with questionable taste

www.wheresyoured.at/the-era-of-t...

04.03.2026 04:04 👍 586 🔁 174 💬 8 📌 5

There are three Horsemen of the AIpocalypse: First, a data center debt deal collapses - it fails to come together and the project is canceled. Second, a data center in construction falls apart, running out of money before it’s complete. Third, a data center fully built runs out of money and dies.

04.03.2026 04:17 👍 536 🔁 66 💬 11 📌 3

it’s honestly unreal how today, trump once again said “americans will die, it is what it is” he really thought that line was a banger from yesterday

02.03.2026 16:33 👍 2014 🔁 176 💬 39 📌 4
Trump Wins $60 On Kalshi Betting He’ll Bomb Iran https://theonion.com/trump-wins-60-on-kalshi-betting-hell-bomb-iran/

02.03.2026 22:28 👍 1817 🔁 205 💬 37 📌 17
Amazon Data Centers on Fire After Iranian Missile Strikes on Dubai Some AWS services are down in the Middle East. Recovery is unclear as it requires 'careful assessment to ensure the safety of our operators,' according to Amazon.

NEW: Around 60 services tied to Amazon Web Services are down in the region, affecting web traffic in the UAE and Bahrain. The outage comes following Iranian attacks on the UAE as retaliation for US and Israeli strikes on Iran.

02.03.2026 16:02 👍 240 🔁 106 💬 8 📌 17

prohibits 'deliberate' tracking and surveillance, and "intentionally" using them for domestic surveillance. very easy to get around!

03.03.2026 01:37 👍 469 🔁 79 💬 12 📌 4