The scale of online grooming is accelerating. Reports of online enticement to NCMEC nearly tripled from 2023 to 2024. If your platform supports chat, DMs, or community messaging, learn more about Safer Predict's AI-driven CSAM and CSE detection.
https://teamthorn.co/3SgXLJL
05.03.2026 16:55
Child Safety Necessitates New Approaches to AI Safety
15 open research problems in AI child safety, spanning model development, deployment, and maintenance.
Child Safety in AI: Open Problems
AI-generated CSAM is rapidly increasing (>400% since 2024 [IWF]). In collaboration with Thorn, we have identified 15 open research problems across AI development, deployment & maintenance to help address child safety risks.
aichildsafety.github.io
03.03.2026 19:46
We partnered with the UK AI Security Institute to publish a safety protocol grounded in the Safety by Design approach. Safety cannot be bolted on after launch. It must be embedded into architecture, policy, and workflows from the start. Download the protocol today.
https://teamthorn.co/3OGbmvf
03.03.2026 18:06
Did you catch the latest edition of Safe Space Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.
Here's the latest from Cassie Coccaro, Communications Lead at Thorn.
https://teamthorn.co/4l3w6cw
27.02.2026 18:10
There's a lag between when novel CSAM is identified and when its hash is added to broader, widely used databases. SaferList helps close that window.
Each Safer customer can choose to share their SaferList across the entire Safer community to strengthen short-term protection for users.
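The list-sharing idea above can be sketched as a simple set-union lookup. This is an illustrative sketch only: the function and variable names are hypothetical, not Safer's actual API, and real deployments typically combine cryptographic and perceptual hashing rather than exact string matching alone.

```python
# Hypothetical sketch of hash-list matching with a shared community list.
# All names and hash strings here are illustrative, not Safer's real API.

def build_index(hash_lists):
    """Union several hash lists into one fast lookup set."""
    index = set()
    for hashes in hash_lists:
        index.update(hashes)
    return index

def is_known_match(content_hash, index):
    """Exact-match check; production systems also use perceptual hashes."""
    return content_hash in index

# A platform's own list plus lists shared by other community members.
own_list = {"hash-a", "hash-b"}
community_lists = [{"hash-c"}, {"hash-a", "hash-d"}]

index = build_index([own_list, *community_lists])
print(is_known_match("hash-c", index))  # True: flagged via a shared list
```

Sharing lists widens coverage immediately: content one platform has verified can be caught by every other platform that opts in, before the hash reaches the broader databases.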
19.02.2026 17:47
Detection is only half the battle.
Routing is where trust & safety teams win back time.
Classifiers identify potentially harmful content, but the real value comes from what happens next. When prediction scores are paired with intelligent queueing, teams can move faster from identification to intervention.
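Pairing prediction scores with queueing might look like the following sketch. All names and thresholds are hypothetical, not Safer's implementation: items above a high-risk threshold go to an escalation queue ordered by score, mid-range items go to standard review, and the rest are monitored.

```python
# Illustrative sketch (not Safer's implementation): route items to review
# queues by classifier score, so reviewers see the riskiest content first.
import heapq

def route(items, high=0.9, low=0.4):
    """Split (item_id, score) pairs into escalate / review / monitor queues."""
    escalate, review, monitor = [], [], []
    for item_id, score in items:
        if score >= high:
            # Max-heap via negated score: highest risk is popped first.
            heapq.heappush(escalate, (-score, item_id))
        elif score >= low:
            heapq.heappush(review, (-score, item_id))
        else:
            monitor.append((item_id, score))
    return escalate, review, monitor

escalate, review, monitor = route([("a", 0.97), ("b", 0.55), ("c", 0.1)])
print(heapq.heappop(escalate)[1])  # "a": the highest-risk item surfaces first
```

The design choice is that ordering, not just filtering, saves reviewer time: the queue guarantees the most urgent item is always next.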
17.02.2026 17:48
Work in trust & safety and child protection is high-stakes by nature. When the risks are real, it's easy to stay heads-down and push through.
Don't forget to show yourself some love, because protecting children over the long term requires protecting the people doing the work.
12.02.2026 17:11
On Safer Internet Day, let's improve online experiences for everyone by ensuring child safety on your platform.
Here are three places to start:
1. Build in safety by design
2. Proactively detect
3. Collaborate with accountability
A safer internet is built collectively, through shared responsibility.
10.02.2026 14:00
TrustCon is the only global conference dedicated to trust and safety professionals. We're looking at data science, research and engineering, product design, and more. You name it, there's going to be an expert at TrustCon ready to talk about it.
What topics do you want to learn about this year?
09.02.2026 20:59
Proactive safety technology relies on two complementary approaches. Hashing and matching power the first layer. Modern ML classifiers are the second layer. Together, these tools create a dual safety system that helps platforms move from reactive enforcement to proactive protection.
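A minimal sketch of that dual-layer flow, under stated assumptions (all names, the stand-in classifier, and the threshold are hypothetical, not Thorn's implementation): known-hash matching runs first, and a classifier scores whatever the hash layer doesn't catch.

```python
# Hedged sketch of the two-layer idea: exact hash matching catches known
# content; a classifier scores the remainder. All names are illustrative.

KNOWN_HASHES = {"deadbeef"}          # layer 1: verified hash list

def classify(content):
    """Stand-in for an ML classifier returning a risk score in [0, 1]."""
    return 0.95 if "risky" in content else 0.05

def triage(content_hash, content, threshold=0.8):
    if content_hash in KNOWN_HASHES:
        return "match_known"          # layer 1 hit: previously verified content
    if classify(content) >= threshold:
        return "flag_for_review"      # layer 2 hit: novel, likely harmful
    return "allow"

print(triage("deadbeef", "anything"))   # match_known
print(triage("cafef00d", "risky text")) # flag_for_review
```

The layers are complementary: hashing is precise but only covers known content, while the classifier generalizes to novel material at the cost of needing human review of its flags.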
05.02.2026 16:53
Child sexual abuse and exploitation increasingly happen through everyday platform functionality: image uploads, DMs, file sharing, comments, and chat. When safety isn't designed into those systems from the start, platforms are forced to respond after harm has already occurred.
03.02.2026 17:51
Did you catch the latest edition of Safe Space Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.
Here's the latest from Cassie Coccaro, Communications Lead at Thorn.
https://teamthorn.co/4c4Ob7c
29.01.2026 16:51
One of the most consequential risk vectors in AI development is the training data itself. A recent investigation reported by @404media.co highlights this risk:
⚠️ The widely used NudeNet dataset included over 120 images of known or suspected CSAM. ⚠️
Read the full story:
https://teamthorn.co/4sDlt3l
15.01.2026 17:15
We asked 2025 Safe Space podcast guest David Polgar a couple of questions to reflect on the new year. His answers are a testament to the great work trust & safety professionals are doing in the space, and the path they are forging for future professionals.
Listen here:
https://teamthorn.co/3Lj2BWt
13.01.2026 18:09
Business Case Template:
https://docs.google.com/document/d/1qLOf6MODCpLxoD3Ayp9yLLUaOoO6byy39q2L4lEgke4/copy
Tooling Scorecard Template:
https://docs.google.com/spreadsheets/d/1SiWE0saDLyyps71RDWgqJnJ4uxhLWygqqZyv4raCMyY/copy
06.01.2026 18:15
We're sharing two free templates to help trust & safety teams accelerate their 2026 planning:
- A Trust & Safety Business Case Template: communicate risk, resourcing needs, and ROI to leadership
- A Tooling Scorecard Template: evaluate detection, triage, and reporting solutions with clarity
06.01.2026 18:14
Did you catch the latest edition of Digital Defenders Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.
Here's the latest from Cassie Coccaro, Communications Lead at Thorn.
https://teamthorn.co/4ji75JI
30.12.2025 22:54
Seema embodies the spirit of the Safer team, leading with kindness, compassion, and a deep desire to do good. She's helping build technology that creates digital spaces where safety comes first. The online world kids are growing up in needs people like Seema to help power trust and safety.
29.12.2025 19:58
Emily and her team are proving what's possible when technology is used for good. From identifying previously unreported CSAM to detecting online grooming early, every advancement helps platforms deliver safer experiences and protect the most vulnerable users.
22.12.2025 19:56
At its core, T&S work is grounded in care, compassion, and a global commitment to user safety.
John Buckley, Director and Head of Child Rights and Safety at The LEGO Group, joined Safe Space to discuss what it takes to advocate for children inside some of the world's largest tech companies.
18.12.2025 17:13
In trust and safety, the job can feel *very* personal, and that makes switching off tricky.
In this clip from Safe Space, @aaron.bsky.team (Head of Trust & Safety at @bsky.app) shares how he protects his well-being while working in one of the internet's most emotionally demanding roles.
15.12.2025 18:17
11.12.2025 17:33
As Dr. Rebecca Portnoff reminds us, a model's effectiveness depends on everything around it: the humans, the processes, and the safeguards that hold it accountable.
🎧 Hear Dr. Portnoff expand on this topic in our Humans in the Loop webinar: https://teamthorn.co/3WXqFQZ
08.12.2025 19:59
You can design for an ideal community. But you have to moderate the real one. That means moderators need to learn from the community that exists, not necessarily the one they planned for.
🎧 Learn more in our webinar Humans in the Loop: Building Ethical AI for Content Moderation teamthorn.co/3WXqFQZ
03.12.2025 22:12
Not every moderation decision needs a human, but every system still needs human judgment.
As Dave Willner puts it, the real question is: at what level can people add the most value?
🎧 Hear more from Dave in our Humans in the Loop webinar: https://teamthorn.co/3WXqFQZ
26.11.2025 18:10
Safe Space, a Trust & Safety Podcast: Lauren Jonas | EP 9
When it comes to designing technology for teens, there's no one-size-fits-all solution.
In our latest Safe Space episode, Lauren Jonas, Head of Youth Wellbeing at OpenAI, shares how her team approached new parental controls for ChatGPT.
https://youtu.be/s9QMo5LJS1Y
18.11.2025 16:49
"Humans vs. AI" is the wrong question.
It's about designing balance in your system, where humans bring context and AI brings scale. How are you building checks and balances into your content moderation process today?
https://teamthorn.co/3WXqFQZ
17.11.2025 14:51