Agentic ProbLLMs: Exploiting AI Computer-Use and Coding Agents
This talk demonstrates end-to-end prompt injection exploits that compromise agentic systems. Specifically, we will discuss exploits that ...
Great talk describing the myriad ways coding agents can be re-directed to do stuff they shouldn’t via prompt Injections. Especially nice, changing to yolo-mode so the human in the loop is no longer asked for confirmation of potentially harmful operations (by @wuzzi23.bsky.social at #39c3)
30.12.2025 16:02
👍 7
🔁 2
💬 0
📌 0
Great series, kudos.
To rephrase the old joke: the S in VIBE coding stands for Security.
03.09.2025 07:27
👍 2
🔁 1
💬 0
📌 0
The Summer of Johann: prompt injections as far as the eye can see
Independent AI researcher Johann Rehberger (previously) has had an absurdly busy August. Under the heading The Month of AI Bugs he has been publishing one report per day across an …
Great summary by @simonwillison.net of @wuzzi23.bsky.social ‘s findings on AI tools vulnerabilities.
In short, all AI tools are vulnerable if one attaches external files and links to their prompts, leading to secrets leaks and remote code execution.
Johann publishes daily until the end of the month.
17.08.2025 05:01
👍 3
🔁 3
💬 0
📌 0
GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773) · Embrace The Red
An attacker can put GitHub Copilot into YOLO mode by modifying the project's settings.json file on the fly, and then executing commands, all without user approval
💥 Remote Code Execution in GitHub Copilot (CVE-2025-53773)
👉 Prompt injection exploit writes to Copilot config file & puts it into YOLO mode, and we get immediate RCE
🔥 Bypasses all user approvals
🛡️ Patch is out today. Update before someone else does it for you
embracethered.com/blog/posts/2...
13.08.2025 02:56
👍 1
🔁 0
💬 0
📌 0