diego's Avatar

diego

@diegoenriquezserrano.dev

Web: https://diegoenriquezserrano.dev GH: https://github.com/DiegoEnriquezSerrano

6
Followers
12
Following
62
Posts
25.10.2023
Joined
Posts Following

Latest posts by diego @diegoenriquezserrano.dev

I'm sure he knows this but feels the need to dig his heels in on that vibe coding post. Either that or he's just a mediocre engineer without LLM assistance.

06.03.2026 22:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yeah totally off, in my experience with mature codebases I'd say 50% of the work is auditing the current code and planning feature stories. Then you got debugging, code review and actually writing code filling out the rest. If you are good at planning, writing code is one of the smallest time sinks.

06.03.2026 22:45 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

"The interesting part is not the payload. It is how the attacker got the npm token in the first place: by injecting a prompt into a GitHub issue title, which the AI triage bot read, interpreted as an instruction, and executed."

bruv... πŸ’€

05.03.2026 21:31 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Back to the "syntax" point; I would add a caveat for langs that have drastically different paradigms from those of past experience. Unless you can get enough of an understanding through prompts so that you can still debug if there's an AWS outage then it's still worthwhile learning the 'hard way'.

02.03.2026 22:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

All-in-all I've been very vocal with criticisms of reckless/irresponsible/haphazard LLM use. However, I don't want to pass value judgements on those who do feel a sense of responsibility when incorporating LLMs into their tool set. If you're bottlenecked by reviewing their outputs, I think you're ok

02.03.2026 22:08 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I think this opinion, specifically wrt to learning necessitating struggle, stops applying, at least as meaningfully, after a certain point. I think it's important for juniors, but once you've learned to build complex code paths with abstractions, learning a new syntax is not a high value endeavor.

02.03.2026 22:00 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸ‘€

28.02.2026 00:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

You say "anyone" as if presenting a novel method with evidence to support it, achieves the same goal and can be independently verified and endorsed by trusted groups is a low bar to clear.

If an unknown group appears without clearing any of the above, why would their methods be adopted by anyone?

25.02.2026 14:22 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Were this the case at all the tweet would still be up. But I'm really not going to waste any more time than this on someone like you.

24.02.2026 04:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Whatever the motivations of the developer(s) are/were, these things are fundamentally just flagrant spam.

23.02.2026 19:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Showing off a "clone" project you hand rolled is cool because it shows you have an understanding of a language's features and syntax as well as the ability to reverse engineer existing software. Showing off a vibecoded clone shows nothing other than you buying tokens for something no one needs/wants

21.02.2026 21:47 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If you're complaining about bandwidth costs and you've got a stable home internet connection and an available device with storage, then it truly is a skill issue.

20.02.2026 00:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

"Professor White Boy " has perished. They found him self harmed at the bottom of an elevator shaft. Thanks for the fucking smiles, Professor

19.02.2026 02:54 πŸ‘ 5888 πŸ” 949 πŸ’¬ 42 πŸ“Œ 34

The reason to distrust them is the same reason you don't trust "arbitration" clauses. When your business is dependent on a steady flow of corporate compensation, you know your sponsors will only keep sponsoring you if they like how you present them.

18.02.2026 16:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Never trust anyone with an affiliate link. It's become a meme for YouTubers in particular to admit that they are being paid either monetarily or in kind by gadget companies but swear wholeheartedly that it doesn't impact their "reviews". We don't have to be a credulous public. People lie. Get wise.

18.02.2026 16:29 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

At risk of a block I think you should reread the statement and find the word "deadline" or even just the implication that he was under pressure from management. Ars is unionized and he even stated that he could have taken a sick day.

16.02.2026 14:45 πŸ‘ 12 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I'm seeing a lot of people shift blame to Ars and an arbitrary deadline, but he doesn't at any point indicate he was under a deadline. He also states "he should have taken a sick day" which implies he had the option to but didn't.

16.02.2026 14:34 πŸ‘ 15 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

We're shifting focus it seems. Not only is it not established that it was a python script he (or an LLM) wrote that failed, but it's also moot if we are to believe that claim that the hallucination came via a ChatGPT "interaction". What was the prompt in that interaction that yielded hallucinations?

16.02.2026 14:07 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I think it's important to note he doesn't mention the word "deadline" anywhere in his explanation. I think it's folly to shift the blame to Ars Technica when it's clear from Benj's writing and social media feed that he loves playing with LLMs.

16.02.2026 13:32 πŸ‘ 72 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

It sounds kind of convoluted though, doesn't it? I mean does simply pasting a text verbatim result in the LLM respond with a hallucinated version of it? There's details missing imo.

16.02.2026 13:25 πŸ‘ 13 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

> β€œDuring the process, I decided to try an experimental Claude Code-based Al tool to help me extract relevant verbatim source material.”

Sorry but can you explain why you needed an agent tool to extract material from a blog post that you ostensibly read yourself?

16.02.2026 03:55 πŸ‘ 103 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

I mean apparently this wasn't one of them if his claim of it being a one time use of an experimental tool is to be believed.

16.02.2026 04:57 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

In the name of transparency and also demonstrating a good faith effort in reestablishing trust, I think it's incumbent on you to actually name the tool and not just obfuscate behind an "experimental Claude-based tool". I mean this isn't a confidential informant, so why the need for secrecy?

16.02.2026 04:15 πŸ‘ 38 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

This is an exercise in critical thinking. Do you really believe someone enthusiastically posting earlier this week about their LLM "experiments" has only attempted to use it in their work on one highly unfortunate occasion? I don't. I think this reckless behavior is beyond the pale for a journalist.

16.02.2026 03:56 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Gotta be honest, I don't for a second believe the author's explanation. His excuse is that he was sick and used an LLM tool to extract quotes for the article. It's an incredible coincidence that this one time use just happened to be on a blog with anti-scraping systems in place.

15.02.2026 23:36 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

I think there's a place for LLMs, but only once you've reached expertise. You don't get to expertise by outsourcing your cognitive load to a machine. Struggling is learning.

15.02.2026 18:04 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I don't see an issue with using abstractions in software. I mean none of us are writing machine code after all. But it was always a truism that a good engineer understands the base layer of their abstraction. That seems to have gone out the window.

15.02.2026 17:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

if this LLM mania cycle doesn't end soon I'm going to seriously consider walking away from software entirely. everyone is obsessing over LOC and how many prototypes they're spinning up, often without any meaningful auditing and even a total inability to understand the code their agent generated. 🀒

15.02.2026 17:39 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

🀨

15.02.2026 17:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
It's been an extremely weird past few days, and I have more thoughts on what happened. Let's start with the news coverage.
I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down - here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be Al hallucinations themselves.

It's been an extremely weird past few days, and I have more thoughts on what happened. Let's start with the news coverage. I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down - here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be Al hallucinations themselves.

Yikes: guy who blogged about how an AI agent published a β€œhit piece” against him, says (human) reporters seem to have used an LLM to read (and misreport) his blog post

theshamblog.com/an-ai-agent-...

14.02.2026 01:47 πŸ‘ 301 πŸ” 99 πŸ’¬ 12 πŸ“Œ 21