@damonberes.com
Senior editor at The Atlantic, focused on tech // Writing EASY MODE, a book exploring technology, humanity, and the importance of friction, for Scribner. SIGNAL: Damon.63 📨 dberes at the atlantic dot com ✴️ damonberes.com
Really compelling new episode of Galaxy Brain about what Netflix did to the movies, with @cwarzel.bsky.social and @davidlsims.bsky.social
oh my
Definitely one of Spielberg’s best
This is just exceptional.
Great new story by @cwarzel.bsky.social detailing the very big problems with prediction markets—"the perfect technology for a low-trust society, simultaneously exploiting and reifying an environment in which believing the motives behind any person or action becomes harder."
We’re excited to announce two journalists who are becoming staff writers at The Atlantic: @satopol.bsky.social and @jenishawo.bsky.social. More here: www.theatlantic.com/press-releas...
Source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected about Americans (Ross Andersen/The Atlantic)
“They should have done it sooner, Michael. They could have made a deal. They should’ve done it sooner. They played too cute.”
NEW: The reason that Anthropic wasn’t okay with simply confining its models to the cloud
New details on the dispute between the Pentagon and Anthropic: how the negotiations broke down, and a particular sticking point over AI in the cloud vs. inside edge systems. By @rossandersen.bsky.social / tip @techmeme.com
You really need to hear the audio in this article, it's unbelievable
In happier news, get a load of these seals www.theatlantic.com/photography/...
As far back as 2019, employees had specifically run a test on Instagram to understand how experimental accounts engaging in what the company termed “groomer-esque behavior”—following “teen hashtags” or “sexy teen accounts”—would come across minors. At issue was the app’s recommendation algorithm, which connects soccer fans to pictures of Lionel Messi, for example, or aspiring travel influencers to videos about little-known cafés in Greece. It also seemed to funnel children to potentially dangerous adults with whom they wouldn’t otherwise be connected. “We are recommending nearly 4X as many minors to groomers (nearly 2 million minors in the last 3 months),” the report read, according to an internal document we viewed. According to this test, 27 percent of the recommendations shown to these “groomer-esque” accounts belonged to minors, compared with 7 percent of the accounts recommended to everyday adults. The report continues: “22% of those recommendations resulted in a follow request”—meaning that potential groomers attempted to interact with these minors nearly a quarter of the time. Even so, Instagram waited years before locking down accounts belonging to its youngest users.
Meta knew what the risk was. In 2019, an internal analysis showed that 27% of the account recommendations made to people engaging in "groomer-esque" behavior belonged to minors—"We are recommending nearly 4X as many minors to groomers (nearly 2 million minors in the last 3 months)," an employee wrote.
(Yes: one point nine percent.)
In August 2020—a year after the research on “groomer-esque” accounts but a year before Zuckerberg’s post about being a concerned father—Meta’s Growth Graph team created a slideshow to explore the question of whether teen Instagram accounts should be set to private by default. This would shield teens from unwanted attention by limiting the ability of people who do not know them to see their content or their profiles, or to contact them. As the Growth Graph team explained in the document, the move would “help prevent high severity actions such as child grooming and inappropriate contact with minors.” (Though the presentation referred to minors generally, Stone told us that at the time, Meta was particularly focused on users under the age of 16.)
The company’s legal, public-affairs, policy, and well-being teams all supported the change, as did teen users and their parents, the document asserted. “Parents are worried about the security and privacy of information and who can contact them/their teens,” the document stated. “Most teens prefer private accounts and wish to see privacy controls during onboarding.”
But internal tests showed that setting these accounts to private by default would lead to “serious growth and engagement decreases,” the document continued. Taking dramatic action to protect teens would mean fewer new teens signing up, existing teens using the platform less, and an overall drop in activity that the employees who created the presentation expected would compound over time. They presented an analysis that showed that overall time spent by teenagers would drop by 1.9 percent by the end of a five-year window. The growth team opposed the change, according to the presentation, which describes its position as “Don’t Launch (Now).”
What might this growth hit have been in material terms? One company analysis showed that teens might spend 1.9% less time on the platform by the end of a five-year period if their accounts were made private—and thereby shielded from predators—by default. The growth team advised against the change.
Company spokespeople were clearly aware of the broader teen-safety problem. Just four weeks after Zuckerberg had posted about being “good for kids,” two public-affairs specialists discussed an Instagram update that had just rolled out. That update made new accounts belonging to 13-, 14-, and 15-year-olds “private” by default, yet even this modest move had been flagged by insiders as a business risk for nearly two years before the change was made. Liza Crenshaw messaged her colleague Sophie Vogel that the move had been “contentious”—Instagram CEO Adam Mosseri, a deputy to Zuckerberg, was concerned that it would cause a “huge growth hit,” Crenshaw wrote, according to documents we reviewed. “We will never get out of this mess if he/we’re not just prepared to ERR ON THE SIDE OF SAFETY,” Vogel wrote. “Would he want any tom dick or harry being able to see all his kids’ content, follow them etc? Is he fucking nuts?”
In a separate exchange, internal chats between company spokespeople showed disbelief that Instagram CEO Adam Mosseri was worried about a potential "growth hit" from making accounts belonging to minors private by default. "Is he fucking nuts?"
Not so long ago, Mark Zuckerberg was working in overdrive to convince the world that his company was doing everything it could to protect children. In 2021, he posted a note to his personal Facebook page, writing that he had “spent a lot of time reflecting” on the types of experiences he would want his daughters, then 4 and 5 years old, to have online. “It’s very important to me that everything we build is safe and good for kids,” he wrote, emphasizing that the company absolutely does not “prioritize profit over safety and well-being.”
But documents recently viewed by The Atlantic show that behind the scenes, the company now known as Meta was divided on whether protecting kids should take precedence over user growth and engagement. For years, the company only incrementally rolled out restrictive safety features, even as its own staff detailed the risks its platforms posed to children. Take, for example, a technical problem that affected the company’s systems in November 2020. This issue limited Meta’s ability to track bad actors, at a time when there were, according to an internal chat, “thousands of minors” reporting what the company refers to as tier-one “Inappropriate Interactions with Children,” or “IIC T1”—the “most severe” outcomes possible, such as meeting for sex in real life, suicide, extortion, sadism, and sex trafficking.
“Even though we know that there is IIC T1 going on (more than 50% of which is sextortion which can lead to suicide) we haven’t done anything. we had a broken escalation path and no measurements,” one employee wrote in the internal chat about the problem. “God knows what happened to those kids.” The company fixed the technical failure within weeks, another document shows, but it would take several more years to adopt other suggested measures to tackle broader issues that allowed predators to find underage targets on Instagram, which Meta owns.
In November 2020, certain safeguards temporarily failed, leading to "thousands of minors" reporting severe interactions, which could include "extortion, sadism, and sex trafficking."
"We haven’t done anything" one employee wrote. "God knows what happened to those kids"
NEW: Documents viewed by @michaelscherer.bsky.social and @kait.bsky.social give a candid look at how Meta approaches the issue of child safety. For years, it dragged its feet on features that would help prevent groomers from targeting kids, explicitly prioritizing growth and engagement instead.
yes
Pew Research Center survey data: One-in-five teens living in households making less than $30,000 a year say they do all or most of their schoolwork with AI chatbots’ help. A similar share of those in households making $30,000 to just under $75,000 annually say this. Fewer teens living in higher-earning households (7%) say the same.
The continuation of a pattern we’ve seen many times before www.pewresearch.org/internet/202...
eyy!
Did you "train a human" today?
www.theatlantic.com/technology/2...
Last Friday, on stage at a major AI summit in India, Sam Altman wanted to address what he called an “unfair” criticism. The OpenAI CEO was asked by a reporter from The Indian Express about the natural resources required to train and run generative-AI models. Altman immediately pushed back. Chatbots do require a lot of power, yes, but have you thought about all of the resources demanded by human beings across our evolutionary history? “It also takes a lot of energy to train a human,” Altman told a packed pavilion. “It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever, to produce you, and then you took whatever, you know, you took.”
<huddled in my bed, eyes clenched shut, whispering to myself> "It takes like 20 years of life and all of the food you eat during that time before you get smart"
I sincerely thought someone was making a joke when I saw a post about Sam Altman referring to the energy required to "train a human," but no, he really is that guy
thank you