
Damon Beres

@damonberes.com

Senior editor at The Atlantic, focused on tech // Writing EASY MODE, a book exploring technology, humanity, and the importance of friction, for Scribner SIGNAL: Damon.63 📨 dberes at the atlantic dot com ✴️ damonberes.com

30,207 Followers · 614 Following · 1,336 Posts · Joined 04.05.2023

Latest posts by Damon Beres @damonberes.com

Did Netflix Ruin Movies? (YouTube video by The Atlantic)

Really compelling new episode of Galaxy Brain about what Netflix did to da movies, with @cwarzel.bsky.social and @davidlsims.bsky.social

06.03.2026 16:36 👍 4 🔁 0 💬 0 📌 0

oh my

06.03.2026 15:01 👍 8 🔁 2 💬 0 📌 0

Definitely one of Spielberg’s best

06.03.2026 12:35 👍 0 🔁 0 💬 0 📌 0

This is just exceptional.

06.03.2026 02:31 👍 20 🔁 3 💬 2 📌 0
Trump Says 'I Guess' Americans Should Worry About Iran Retaliating on U.S. Soil: 'Like I Said, Some People Will Die' In an interview with 'Time,' President Donald Trump acknowledged the possibility that Iran retaliates with attacks on U.S. soil, saying, 'We think about it all the time. We plan for it'

🫡

06.03.2026 00:20 👍 9 🔁 4 💬 4 📌 0
The Central Lie of Prediction Markets Polymarket and Kalshi promise the wisdom of the crowds. They deliver something very different.

Great new story by @cwarzel.bsky.social detailing the very big problems with prediction markets—"the perfect technology for a low-trust society, simultaneously exploiting and reifying an environment in which believing the motives behind any person or action becomes harder."

05.03.2026 20:08 👍 8 🔁 5 💬 0 📌 0
The Atlantic Announces Sarah A. Topol and Jenisha Watts as Staff Writers

We’re excited to announce two journalists who are becoming staff writers at The Atlantic: @satopol.bsky.social and @jenishawo.bsky.social. More here: www.theatlantic.com/press-releas...

05.03.2026 15:32 👍 8 🔁 2 💬 0 📌 0

Source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected about Americans (Ross Andersen/The Atlantic)


01.03.2026 16:35 👍 625 🔁 249 💬 15 📌 50
‘I Have Agreed to Talk’ Trump tells The Atlantic that Iranian leaders want to resume negotiations.

“They should have done it sooner, Michael. They could have made a deal. They should’ve done it sooner. They played too cute.”

01.03.2026 17:03 👍 6 🔁 2 💬 1 📌 0

NEW: The reason that Anthropic wasn’t okay with simply confining its models to the cloud

01.03.2026 15:25 👍 21 🔁 7 💬 1 📌 1
Inside Anthropic’s Killer-Robot Dispute With the Pentagon New details on precisely where the lines were drawn

New details on the dispute between the Pentagon and Anthropic: how the negotiations broke down, and a particular sticking point on AI in the cloud vs. inside edge systems. By @rossandersen.bsky.social / tip @techmeme.com

01.03.2026 15:22 👍 12 🔁 5 💬 1 📌 3
Does Congress Even Exist Anymore? The fast fade of a co-equal branch of government

Great question

08.01.2026 01:53 👍 41 🔁 12 💬 0 📌 1
Donald Trump Declares War on Anthropic Their fight will shake the entire tech industry.

My expert opinion on this is: Weird situation

28.02.2026 02:34 👍 13 🔁 2 💬 0 📌 0

You really need to hear the audio in this article, it's unbelievable

27.02.2026 19:56 👍 59 🔁 29 💬 5 📌 2

In happier news, get a load of these seals www.theatlantic.com/photography/...

27.02.2026 17:54 👍 9 🔁 1 💬 0 📌 0
Meta Says It Cares About Kids. New Documents Tell a Different Story. For years, employees acknowledged a problem with potential child groomers, but prioritized growth over fixes.

tip @techmeme.com

27.02.2026 17:51 👍 4 🔁 1 💬 0 📌 0
As far back as 2019, employees had specifically run a test on Instagram to understand how experimental accounts engaging in what the company termed “groomer-esque behavior”—following “teen hashtags” or “sexy teen accounts”—would come across minors. At issue was the app’s recommendation algorithm, which connects soccer fans to pictures of Lionel Messi, for example, or aspiring travel influencers to videos about little-known cafés in Greece. It also seemed to funnel children to potentially dangerous adults with whom they wouldn’t otherwise be connected. “We are recommending nearly 4X as many minors to groomers (nearly 2 million minors in the last 3 months),” the report read, according to an internal document we viewed. According to this test, 27 percent of the recommendations shown to these “groomer-esque” accounts belonged to minors, compared with 7 percent of the accounts recommended to everyday adults. The report continues: “22% of those recommendations resulted in a follow request”—meaning that potential groomers attempted to interact with these minors nearly a quarter of the time. Even so, Instagram waited years before locking down accounts belonging to its youngest users.

Meta knew what the risk was. In 2019, an internal analysis showed that 27% of the account recommendations made to people engaging in "groomer-esque" behavior belonged to minors—"We are recommending nearly 4X as many minors to groomers (nearly 2 million minors in the last 3 months)," an employee wrote

27.02.2026 17:51 👍 3 🔁 1 💬 1 📌 1

(Yes: one point nine percent.)

27.02.2026 17:46 👍 6 🔁 0 💬 1 📌 0
In August 2020—a year after the research on “groomer-esque” accounts but a year before Zuckerberg’s post about being a concerned father—Meta’s Growth Graph team created a slideshow to explore the question of whether teen Instagram accounts should be set to private by default. This would shield teens from unwanted attention by limiting the ability of people who do not know them to see their content or their profiles, or to contact them. As the Growth Graph team explained in the document, the move would “help prevent high severity actions such as child grooming and inappropriate contact with minors.” (Though the presentation referred to minors generally, Stone told us that at the time, Meta was particularly focused on users under the age of 16.)

The company’s legal, public-affairs, policy, and well-being teams all supported the change, as did teen users and their parents, the document asserted. “Parents are worried about the security and privacy of information and who can contact them/their teens,” the document stated. “Most teens prefer private accounts and wish to see privacy controls during onboarding.”

But internal tests showed that setting these accounts to private by default would lead to “serious growth and engagement decreases,” the document continued. Taking dramatic action to protect teens would mean fewer new teens signing up, existing teens using the platform less, and an overall drop in activity that the employees who created the presentation expected would compound over time. They presented an analysis that showed that overall time spent by teenagers would drop by 1.9 percent by the end of a five-year window. The growth team opposed the change, according to the presentation, which describes its position as “Don’t Launch (Now).”

What might this growth hit have been in material terms? One company analysis showed that teens might spend 1.9% less time on the platform by the end of a five-year period if their accounts were made private—and thereby shielded from predators—by default. The growth team advised against the change.

27.02.2026 17:45 👍 4 🔁 1 💬 1 📌 2
Company spokespeople were clearly aware of the broader teen-safety problem. Just four weeks after Zuckerberg had posted about being “good for kids,” two public-affairs specialists discussed an Instagram update that had just rolled out. That update made new accounts belonging to 13-, 14-, and 15-year-olds “private” by default, yet even this modest move had been flagged by insiders as a business risk for nearly two years before the change was made. Liza Crenshaw messaged her colleague Sophie Vogel that the move had been “contentious”—Instagram CEO Adam Mosseri, a deputy to Zuckerberg, was concerned that it would cause a “huge growth hit,” Crenshaw wrote, according to documents we reviewed.

“We will never get out of this mess if he/we’re not just prepared to ERR ON THE SIDE OF SAFETY,” Vogel wrote. “Would he want any tom dick or harry being able to see all his kids’ content, follow them etc? Is he fucking nuts?”

In a different situation, internal chats between company spokespeople showed disbelief at how Instagram CEO Adam Mosseri worried about a potential "growth hit" that would come from making accounts belonging to minors private by default. "Is he fucking nuts?"

27.02.2026 17:42 👍 5 🔁 3 💬 1 📌 1
Not so long ago, Mark Zuckerberg was working in overdrive to convince the world that his company was doing everything it could to protect children. In 2021, he posted a note to his personal Facebook page, writing that he had “spent a lot of time reflecting” on the types of experiences he would want his daughters, then 4 and 5 years old, to have online. “It’s very important to me that everything we build is safe and good for kids,” he wrote, emphasizing that the company absolutely does not “prioritize profit over safety and well-being.”

But documents recently viewed by The Atlantic show that behind the scenes, the company now known as Meta was divided on whether protecting kids should take precedence over user growth and engagement. For years, the company only incrementally rolled out restrictive safety features, even as its own staff detailed the risks its platforms posed to children. Take, for example, a technical problem that affected the company’s systems in November 2020. This issue limited Meta’s ability to track bad actors, at a time when there were, according to an internal chat, “thousands of minors” reporting what the company refers to as tier-one “Inappropriate Interactions with Children,” or “IIC T1”—the “most severe” outcomes possible, such as meeting for sex in real life, suicide, extortion, sadism, and sex trafficking.

“Even though we know that there is IIC T1 going on (more than 50% of which is sextortion which can lead to suicide) we haven’t done anything. we had a broken escalation path and no measurements,” one employee wrote in the internal chat about the problem. “God knows what happened to those kids.” The company fixed the technical failure within weeks, another document shows, but it would take several more years to adopt other suggested measures to tackle broader issues that allowed predators to find underage targets on Instagram, which Meta owns.

In November 2020, certain safeguards that were in place at the time temporarily failed, leading to "thousands of minors" reporting severe interactions, which could include "extortion, sadism, and sex trafficking"

"We haven’t done anything" one employee wrote. "God knows what happened to those kids"

27.02.2026 17:38 👍 3 🔁 3 💬 1 📌 1
Meta Says It Cares About Kids. New Documents Tell a Different Story. For years, employees acknowledged a problem with potential child groomers, but prioritized growth over fixes.

NEW: Documents viewed by @michaelscherer.bsky.social and @kait.bsky.social give a candid look at how Meta approaches the issue of child safety. For years, it dragged its feet on features that would help prevent groomers from targeting kids, explicitly prioritizing growth and engagement instead.

27.02.2026 17:30 👍 63 🔁 38 💬 2 📌 4

yes

25.02.2026 16:58 👍 2 🔁 0 💬 0 📌 0
Pew Research Center survey data: One-in-five teens living in households making less than $30,000 a year say they do all or most of their schoolwork with AI chatbots’ help.

A similar share of those in households making $30,000 to just under $75,000 annually say this. Fewer teens living in higher-earning households (7%) say the same.

The continuance of a pattern we've seen many times before www.pewresearch.org/internet/202...

25.02.2026 15:45 👍 10 🔁 2 💬 1 📌 2

eyy!

24.02.2026 17:14 👍 0 🔁 0 💬 1 📌 0
Sam Altman Is Losing His Grip on Humanity You don’t “train a human”

Did you "train a human" today?

www.theatlantic.com/technology/2...

23.02.2026 23:57 👍 22 🔁 4 💬 2 📌 4
Sam Altman Unfiltered: ChatGPT, AI Risks & What’s Coming Next, 40 Questions in 60 Minutes YouTube video by The Indian Express
23.02.2026 23:58 👍 1 🔁 0 💬 0 📌 0
Last Friday, on stage at a major AI summit in India, Sam Altman wanted to address what he called an “unfair” criticism. The OpenAI CEO was asked by a reporter from The Indian Express about the natural resources required to train and run generative-AI models. Altman immediately pushed back. Chatbots do require a lot of power, yes, but have you thought about all of the resources demanded by human beings across our evolutionary history?

“It also takes a lot of energy to train a human,” Altman told a packed pavilion. “It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever, to produce you, and then you took whatever, you know, you took.”

<huddled in my bed, eyes clenched shut, whispering to myself> "It takes like 20 years of life and all of the food you eat during that time before you get smart"

23.02.2026 23:58 👍 3 🔁 0 💬 2 📌 0
Sam Altman Is Losing His Grip on Humanity You don’t “train a human”

I sincerely thought someone was making a joke when I saw a post about Sam Altman referring to the energy required to "train a human," but no, he really is that guy

23.02.2026 23:56 👍 33 🔁 9 💬 6 📌 1

thank you

23.02.2026 03:36 👍 3 🔁 0 💬 1 📌 0