
Safer, Built by Thorn

@safer.io

With Safer, Thorn is equipping content-hosting platforms with industry-leading tools for proactive detection of child sexual abuse material (CSAM) and exploitation. Visit safer.io

163 Followers · 153 Following · 101 Posts
Joined 26.11.2024

Latest posts by Safer, Built by Thorn @safer.io

Post image

The scale of online grooming is accelerating. Reports of online enticement to NCMEC nearly tripled from 2023 to 2024. If your platform supports chat, DMs, or community messaging, learn more about Safer Predict’s AI-driven CSAM and CSE detection.

https://teamthorn.co/3SgXLJL

05.03.2026 16:55 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Child Safety Necessitates New Approaches to AI Safety: 15 open research problems in AI child safety, spanning model development, deployment, and maintenance.

Child Safety in AI: Open Problems

AI-generated CSAM is rapidly increasing (>400% since 2024 [IWF]). In collaboration with Thorn, we have identified 15 open research problems across AI development, deployment & maintenance to help address child safety risks.

πŸ”— aichildsafety.github.io

03.03.2026 19:46 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Post image

We partnered with the UK AI Security Institute to publish a safety protocol grounded in the Safety by Design approach. Safety cannot be bolted on after launch. It must be embedded into architecture, policy, and workflows from the start. Download the protocol today.

https://teamthorn.co/3OGbmvf

03.03.2026 18:06 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

Did you catch the latest edition of Safe Space Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.

Here's the latest from Cassie Coccaro, Communications Lead at Thorn.

https://teamthorn.co/4l3w6cw

27.02.2026 18:10 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

There’s a lag between when novel CSAM is identified and when its hash is added to broader, widely used databases. SaferList helps close that window.

Each Safer customer can choose to share their SaferList across the entire Safer community to strengthen short-term protection for users.

19.02.2026 17:47 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Detection is only half the battle.
Routing is where trust & safety teams win back time.

Classifiers identify potentially harmful content, but the value comes from what happens next. When prediction scores are paired with intelligent queueing, teams can move faster from identification to intervention.

17.02.2026 17:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
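The score-plus-queueing flow described in the post above can be sketched as a simple priority queue. This is a minimal illustration, not Safer's implementation; the item IDs, scores, and function names are all hypothetical.

```python
import heapq

# Hypothetical classifier output for queued items: (item_id, risk score in [0, 1]).
# Higher scores indicate higher predicted risk and should be reviewed first.
flagged = [("a1", 0.42), ("a2", 0.97), ("a3", 0.71)]

# Build a max-priority review queue by negating scores (heapq is a min-heap).
queue = [(-score, item_id) for item_id, score in flagged]
heapq.heapify(queue)

def next_for_review(q):
    """Pop the highest-risk item so moderators see it first."""
    neg_score, item_id = heapq.heappop(q)
    return item_id, -neg_score
```

Calling `next_for_review(queue)` here would surface item "a2" (score 0.97) before the lower-scored items, which is the point of pairing scores with queueing: review order tracks predicted risk instead of arrival order.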
Video thumbnail

Work in trust & safety and child protection is high-stakes by nature. When the risks are real, it’s easy to stay heads-down and push through.

Don’t forget to show yourself some love, because protecting children over the long term requires protecting the people doing the work.

12.02.2026 17:11 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

On Safer Internet Day, let’s improve online experiences for everyone by ensuring child safety on every platform.

Here are three places to start:
1️⃣ Build in safety by design
2️⃣ Proactively detect
3️⃣ Collaborate with accountability

A safer internet is built collectively, through shared responsibility.

10.02.2026 14:00 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

TrustCon is the only global conference dedicated to trust and safety professionals. We’re looking at data science, research and engineering, product design, and more. You name it, there’s going to be an expert at TrustCon ready to talk about it.

What topics do you want to learn about this year?

09.02.2026 20:59 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Proactive safety technology relies on two complementary approaches. Hashing and matching power the first layer. Modern ML classifiers are the second layer. Together, these tools create a dual safety system that helps platforms move from reactive enforcement to proactive protection.

05.02.2026 16:53 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
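The dual-layer approach described above can be sketched in a few lines: an exact hash match catches known material, and a classifier covers novel content. This is a conceptual sketch only; the hash list, classifier stub, and threshold are hypothetical stand-ins, not Safer's actual pipeline (which uses perceptual as well as cryptographic hashing).

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Layer 1: a hypothetical list of hashes of previously verified material.
KNOWN_HASHES = {sha256_hex(b"previously verified file")}

def classifier_score(data: bytes) -> float:
    """Stand-in for an ML classifier; a real model would inspect the content."""
    return 0.0

def screen(data: bytes, threshold: float = 0.8) -> str:
    if sha256_hex(data) in KNOWN_HASHES:     # layer 1: known content
        return "match"
    if classifier_score(data) >= threshold:  # layer 2: novel content
        return "flagged"
    return "clear"
```

The ordering matters: hash matching is cheap and exact, so it runs first; the classifier only has to judge content the hash layer has never seen.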
Post image Post image

Child sexual abuse and exploitation increasingly happen through everyday platform functionality: image uploads, DMs, file sharing, comments, and chat. When safety isn’t designed into those systems from the start, platforms are forced to respond after harm has already occurred.

03.02.2026 17:51 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

Did you catch the latest edition of Safe Space Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.

Here's the latest from Cassie Coccaro, Communications Lead at Thorn.

https://teamthorn.co/4c4Ob7c

29.01.2026 16:51 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

One of the most consequential risk vectors in AI development is the training data itself. A recent investigation reported by @404media.co highlights this risk:

⚠️The widely used NudeNet dataset included over 120 images of known or suspected CSAM.⚠️

Read the full story:
https://teamthorn.co/4sDlt3l

15.01.2026 17:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image Post image

We asked 2025 Safe Space podcast guest David Polgar a couple of questions to reflect on the new year. His answers are a testament to the great work trust & safety professionals are doing in the space, and the path they are forging for future professionals.

Listen here:
https://teamthorn.co/3Lj2BWt

13.01.2026 18:09 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image Post image Post image Post image

Birthdays are about taking stock of how far we’ve come and what still needs building.

To every trust & safety team using Safer: thank you. Your commitment to proactive detection and responsible innovation is driving tangible safety outcomes for millions of young people.

08.01.2026 17:12 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Business Case Template:
https://docs.google.com/document/d/1qLOf6MODCpLxoD3Ayp9yLLUaOoO6byy39q2L4lEgke4/copy

Tooling Scorecard Template:
https://docs.google.com/spreadsheets/d/1SiWE0saDLyyps71RDWgqJnJ4uxhLWygqqZyv4raCMyY/copy

06.01.2026 18:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

We’re sharing two free templates to help trust & safety teams accelerate their 2026 planning:

πŸ”§ A Trust & Safety Business Case Template β€” communicate risk, resourcing needs, and ROI to leadership

🧰 A Tooling Scorecard Template β€” evaluate detection, triage, and reporting solutions with clarity

06.01.2026 18:14 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

Did you catch the latest edition of Digital Defenders Digest? Each month we gather the top headlines in trust & safety news and give a quick summary of the stories impacting online child safety.

Here's the latest from Cassie Coccaro, Communications Lead at Thorn.

https://teamthorn.co/4ji75JI

30.12.2025 22:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

Seema embodies the spirit of the Safer team, leading with kindness, compassion, and a deep desire to do good. She’s helping build technology that creates digital spaces where safety comes first. The online world kids are growing up in needs people like Seema to help power trust and safety.

29.12.2025 19:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

Emily and her team are proving what’s possible when technology is used for good. From identifying previously unreported CSAM to detecting online grooming early, every advancement helps platforms deliver safer experiences and protect the most vulnerable users.

22.12.2025 19:56 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

At its core, T&S work is grounded in care, compassion, and a global commitment to user safety.

John Buckley, Director and Head of Child Rights and Safety at The LEGO Group, joined Safe Space to discuss what it takes to advocate for children inside some of the world’s largest tech companies.

18.12.2025 17:13 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image Post image Post image Post image

This year, platforms powered by Safer processed more than 317 billion files. That’s β€œbillion” – with a β€œB.” That number is incredible, and we’re proud of such broad implementation. Safer also detected over 4.4 million suspected CSAM files, helping protect children more effectively than ever before.

17.12.2025 17:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

In trust and safety, the job can feel *very* personalβ€”and that makes switching off tricky.

In this clip from Safe Space, @aaron.bsky.team (Head of Trust & Safety at @bsky.app) shares how he protects his well-being while working in one of the internet’s most emotionally demanding roles.

15.12.2025 18:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image Post image
11.12.2025 17:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image Post image Post image Post image

Trust & Safety is evolving fast. From AI-driven abuse and deepfakes to rising standards, user expectations, and new regulations, 2025 demands proactive detection and hybrid safety models. These trends reflect the collective challenges and opportunities ahead for every trust & safety team.

11.12.2025 17:33 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

As Dr. Rebecca Portnoff reminds us, a model’s effectiveness depends on everything around it: the humans, the processes, and the safeguards that hold it accountable.

🎧 Hear Dr. Portnoff expand on this topic in our Humans in the Loop webinar: https://teamthorn.co/3WXqFQZ

08.12.2025 19:59 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

You can design for an ideal community. But you have to moderate the real one. That means moderators need to learn from the community that exists, not necessarily the one they planned for.

🎧 Learn more in our webinar Humans in the Loop: Building Ethical AI for Content Moderation teamthorn.co/3WXqFQZ

03.12.2025 22:12 πŸ‘ 2 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

Not every moderation decision needs a human, but every system still needs human judgment.

As Dave Willner puts it, the real question is: at what level can people add the most value?

🎧 Hear more from Dave in our Humans in the Loop webinar: https://teamthorn.co/3WXqFQZ

26.11.2025 18:10 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Safe Space, a Trust & Safety Podcast: Lauren Jonas | EP 9

When it comes to designing technology for teens, there’s no one-size-fits-all solution.

In our latest Safe Space episode, Lauren Jonas, Head of Youth Wellbeing at OpenAI, shares how her team approached new parental controls for ChatGPT.

https://youtu.be/s9QMo5LJS1Y

18.11.2025 16:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Video thumbnail

β€œHumans vs. AI” is the wrong question.

It’s about designing balance in your system, where humans bring context, and AI brings scale. How are you building checks and balances into your content moderation process today?

https://teamthorn.co/3WXqFQZ

17.11.2025 14:51 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0