
Max Read

@maxread.info

http://maxread.substack.com

13,539 Followers · 243 Following · 680 Posts · Joined 11.05.2023

Latest posts by Max Read @maxread.info

Subscriber Survey Greetings from Read Max HQ! Thank you for your support of Read Max, whether as a paid subscriber, or as a free subscriber, or as a guy who clicked a link and ended up here. I started this newsletter ...

I've accidentally backed myself into a career as a content creator and need your help figuring out what to do next. Please take this short survey, even if you don't subscribe! bit.ly/4s4drzz

05.03.2026 14:59 👍 7 🔁 0 💬 1 📌 0
04.03.2026 20:45 👍 2 🔁 1 💬 1 📌 0

i want one that says PRIMCETON

04.03.2026 20:37 👍 0 🔁 0 💬 1 📌 0

watch here! open.substack.com/live-stream/...

04.03.2026 19:02 👍 7 🔁 1 💬 0 📌 0

going live on substack w/ @lioneltrolling.bsky.social at 2

04.03.2026 18:52 👍 7 🔁 1 💬 0 📌 1

yeah exactly. one reason he keeps talking about grok's basedness is that it's the only area in which grok is plausibly on the cutting edge. unfortunately probably not an entirely terrible recruiting strategy, lots of ai guys would love to work on the cutting edge of racism, but not good enough.

04.03.2026 16:07 👍 6 🔁 0 💬 2 📌 0
Burnout and Elon Musk’s politics spark exodus from senior xAI, Tesla staff Disillusionment with Musk's activism, strategic pivots, and mass layoffs cause churn.

this report cites "burnout" and "musk's erratic behavior" but that's always been true of his management style—the big difference between an xai worker in 2026 and a spacex worker in 2016 is that there's somewhere else the xai worker can go that will pay them as much to do basically the same thing

04.03.2026 16:02 👍 34 🔁 4 💬 1 📌 0

the conditions that made elon successful at tesla and spacex and starlink (zirp, nonexistent sectoral competition, not addicted to ketamine) no longer hold true. its hard to be a failure at $700b net worth but i dont see xai keeping pace

04.03.2026 15:57 👍 142 🔁 22 💬 4 📌 1

watching a bunch of oai and anthro employees on twitter confront the reality of power/law/bureaucracy/politics in the wake of the DoD stuff last week is interesting. i dont think frontier lab workers have a great sense of 'the real world' and their predictions w/r/t it should be treated w/ skepticism

04.03.2026 14:30 👍 51 🔁 3 💬 1 📌 3

important to read max on the silicon valley political divide between awful nerds and evil nerds

01.03.2026 15:31 👍 9 🔁 2 💬 0 📌 0

funny memory about this story: Balaji reached out privately to say how much the boys at a16z loved it, after which they all spent the next few years letting Twitter drive them completely insane www.nytimes.com/2018/08/15/m...

03.03.2026 02:20 👍 71 🔁 10 💬 2 📌 0

Cry "Havoc!" and let slop the dogs of war

28.02.2026 22:32 👍 88 🔁 12 💬 1 📌 1

write this piece!!

02.03.2026 13:23 👍 2 🔁 0 💬 1 📌 0

not for nothing but the insistence that graham platner is "the new fetterman" feels like a misunderstanding of both platner and fetterman

02.03.2026 00:00 👍 49 🔁 1 💬 3 📌 0

Yes they really sincerely believe they're raising gifted, precocious children. This colors everything from how they think about "personas" (which may or may not exist) to who they hire to what contracts they seek

01.03.2026 15:27 👍 15 🔁 2 💬 0 📌 0

The best thing I’ve read on the Anthropic dispute by far 👇

01.03.2026 14:29 👍 48 🔁 9 💬 0 📌 0

The saint of bright doors!

01.03.2026 13:45 👍 7 🔁 0 💬 2 📌 0
What this does to military A.I. capabilities is beyond the brief of this newsletter, except to say that I think it’s “bad” for Grok, the pedophile mechahitler A.I., to be involved with weapons in really any way. What I am interested in, here, is what this reveals about the state of politics in Silicon Valley.

In a sentence, I think what’s happening is (1) basic (i.e. normal) cutthroat competition between rival firms for government contracts, which is both driving and being driven by (2) an open and ongoing political-ideological dispute between two factions of Silicon Valley capital, which is in turn informing and being informed by (3) an almost religious disagreement about the nature of the god being built on the computer.

To start, it seems quite obvious that the Tech Right--a bloc of right-wing, Trump-aligned executives, investors, podcasters, Twitter personalities, firms, and companies, among them Palantir’s Joe Lonsdale and Alex Karp, Anduril’s Palmer Luckey, and, of course, xAI’s Elon Musk--with its extensive links to the administration, has been exerting behind-the-scenes pressure on Hegseth and the Pentagon to sever ties with or otherwise punish Anthropic. It was a Palantir executive, after all, who snitched on Anthropic to the D.o.D., and Hegseth’s speech in January about “objectively truthful AI capabilities” was a close echo of Musk’s ramblings about his “maximally truth-seeking” model Grok.

The Tech Right’s contempt for Anthropic is first and foremost financial in nature. Musk, obviously, would like xAI to be first in line for any government contracts. (Indeed, Hegseth announced a deal with xAI this week to use Grok under the Pentagon’s preferred “all lawful use” terms.) And I suspect Palantir, Anthropic client though it may be, has the same existential fear of Claude as McKinsey or Salesforce or any other consultancy or software-as-a-service provider. If Anthropic is aggressively courting the D.o.D. to contract directly, and if Claude is as good as everyone thinks, what does Palantir’s future as a data-analytics-in-camo platform actually look like?

This doesn’t necessarily separate him from any other Silicon Valley liberal. But I think it’s good to attend to the valence of his liberalism. Amodei, like most of the Anthropic executives and many people in A.I. in general, has long been associated with the worlds of Bay Area Rationalism and Effective Altruism--wonkily utilitarian philosophical and philanthropic practices focused on self-described rationalist inquiry and self-improvement.

Bay Area Rationalism is a loose and diverse movement, containing a host of political perspectives, but it’s always had a particular concern with moral philosophy as it relates to the expected development of artificial superintelligence. To be a Rationalist liberal democrat (small-L small-D), e.g., might mean orienting your liberal democrat-ness toward its practical applications around the eschatological scenario of hard-takeoff A.G.I.

I don’t mean to suggest that Amodei’s commitments to liberal democracy are inauthentic. More that, as far as he is concerned, the stakes of this commitment go well beyond his own moral or ethical culpability. The decisions he makes now, and his consistent adherence to his espoused beliefs, could mean the difference between a benevolent computer god and a wrathful one.

Helen Toner
@hlntnr
One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation. Because of how Claude is trained, what principles/values/priorities the company demonstrate here could shape its "character" for a long time.
		
Andrew Curran @AndrewCurran_
Update on the meeting; according to Axios Defense Secretary Pete Hegseth gave Dario Amodei until Friday night to give the military unfettered access to Claude or face the consequences, which may even include invoking the Defense Production Act to force the training of a WarClaude
4:26 PM · Feb 25, 2026 · 227K Views
41 Replies · 122 Reposts · 1.96K Likes
And this has placed him, and Anthropic, on a collision course with the Tech Right. Musk, too, believes he is bringing superintelligence into existence at xAI. But for him the urgent imp

one way of seeing anthropic vs. the pentagon is as a fissure between the two silicon valley tribes most enthusiastic about ai: "rationalists" and "accelerationists"

maxread.substack.com/p/what-anthr...

28.02.2026 00:44 👍 392 🔁 76 💬 14 📌 12

probably the best racist writer of racist horror since hp lovecraft

28.02.2026 19:49 👍 40 🔁 2 💬 2 📌 0

interested to read when it’s out!

28.02.2026 15:49 👍 2 🔁 0 💬 0 📌 0

this is all very on point and it reminds me how many protests were going on and bubbling up around Cambridge MIT Google etc and the Tech Workers Coalition circa 2018/19, project maven. not mentioned here but the misogyny, the Epstein stuff, mit media lab, was also mixed into this around then

28.02.2026 09:24 👍 17 🔁 4 💬 1 📌 1

Dying at the idea that the Pentagon gave Dario Amodei a hypo that was essentially the dril woke sniper tweet

28.02.2026 01:47 👍 279 🔁 49 💬 3 📌 0
This hadn’t gone exactly how they’d have liked. In December, Bloomberg recently reported, “a senior US defense official posed a hypothetical scenario” to Amodei:

What if a nuclear-armed intercontinental ballistic missile were hurtling towards the US with only 90 seconds to spare, and Anthropic’s AI were the only way to trigger a missile response to save the country, but the company’s safeguards wouldn’t allow it, the senior official mused in a December phone call.

“Call me,” was how Pentagon officials interpreted Amodei’s answer, according to another senior defense official briefed on the discussion, who described being astounded by the billionaire’s response.

LOL. Our beautiful generals have many medals and some even have battlefield experience, but I can say with some confidence that they have never engaged with a Rationalist online and are deeply unprepared for what it means to pick a fight with one.

[Dario Amodei Bane voice] “Oh, you think stupid elaborate hypotheticals are your ally. But you merely adopted elaborate and weirdly specific hypothetical scenarios; I was born in them, molded by them. I didn’t see a normal argument until I was already a man, by then it was nothing to me but BLINDING!”

one of the funniest things to keep coming out of the anthropic reporting is that the pentagon was trying to convince amodei by proposing elaborate hypothetical scenarios. buddy do you think an EFFECTIVE ALTRUIST has never considered a bizarrely specific and elaborate hypothetical scenario??

28.02.2026 00:50 👍 1067 🔁 168 💬 13 📌 21
Victor Wembanyama Sighting Confirmed In Brooklyn | Defector A couple months ago, some friends and I made plans to see the San Antonio Spurs when they were in town to play the Brooklyn Nets. Obviously, we weren’t going for the opportunity to see the Nets’ youth...

Here’s a dumb little blog about the experience of witnessing Wemby in person: defector.com/victor-wemba...

27.02.2026 21:26 👍 28 🔁 4 💬 2 📌 0

nets game is packed and energy is completely dead. only solution: bring back the Brooklyn knight. The people want him back

27.02.2026 01:08 👍 31 🔁 3 💬 2 📌 0
Parents Music Resource Center - Wikipedia

typo--PMRC en.wikipedia.org/wiki/Parents...

26.02.2026 18:11 👍 1 🔁 0 💬 0 📌 0

RIP tren-addled crypto kick streamer gold chain zuck 2024-2026

26.02.2026 16:11 👍 78 🔁 6 💬 12 📌 10

still hits

26.02.2026 15:49 👍 21 🔁 1 💬 5 📌 0

king !!

26.02.2026 15:48 👍 2 🔁 0 💬 0 📌 0