What's your take on the growing dominance of automated attacks and the implications for AI red teams? Here's ours, based on our analysis of 30 LLM challenges attempted by 1,674 unique Crucible users across 214,271 attack attempts: arxiv.org/abs/2504.19855
29.04.2025 16:14
👍 4
🔁 5
💬 0
📌 1
Red-Teaming in the Public Interest
This report offers a vision for red-teaming in the public interest: a process that goes beyond system-centric testing of already built systems to consider the full range of ways the public can be invo...
@datasociety.bsky.social and the AI Risk and Vulnerability Alliance just released “Red Teaming in the Public Interest,” a report examining how red teaming methods are being adapted to evaluate genAI.
Read the report, featuring commentary from @moohax.bsky.social: datasociety.net/library/red-...
13.02.2025 18:50
👍 5
🔁 3
💬 0
📌 0
Sniped. Fell down the rabbit hole, found some code exec 😬
10.02.2025 14:12
👍 1
🔁 0
💬 0
📌 0
NEW Crucible Challenge: DeepTweak, an exploration of reasoning model behavior. Cause enough confusion 😵💫, retrieve the flag.
Think fast: the first three users to solve DeepTweak will be announced Friday!
➡️ https://crucible.dreadnode.io/challenges/deeptweak?utm_source=social&utm_medium=social&u…
04.02.2025 17:36
👍 4
🔁 3
💬 0
📌 1
New to Rigging:
🔥 Tracing
🛠️ API Tools
💻 HTTP Generator
🐍 Prompts as Tools
→ github.com/dreadnode/ri...
06.02.2025 19:09
👍 7
🔁 4
💬 0
📌 0
Stanford CRFM
The first distillation/extraction attack on OAI was the Stanford Alpaca research. It was after this that OAI changed its ToS to disallow training on model outputs. It can happen to any model provider.
crfm.stanford.edu/2023/03/13/a...
29.01.2025 23:15
👍 2
🔁 2
💬 0
📌 0
People learning what alignment means by asking DeepSeek about Taiwan.
29.01.2025 23:14
👍 8
🔁 0
💬 0
📌 0