Steve Yegge already wrote this blogpost 🫣
On a serious note, I finally gave Gas Town a try, and I can't tell if it's good or not, but I know I can't afford it, that's for sure. I ran out of budget in just 30 mins
Reminder that Anthropic's "radical leftist demands" are to ask the US to please not use their models to make autonomous killer drones or do domestic surveillance on American citizens.
How far has America fallen into fascism that these two requests are now considered "radical"?
That's specific to building a product where the API is the feature. If you are building SaaS or a consumer application, then you can force migrate people in place. As long as the UI is the same, most people won't care that the backend or API is cleaner now
I agree. But as a product person, this is why I say it takes willpower. I often have moments where I'm like "Ahhh we aren't moving fast enough".
But then I also need to remember that moving fast in a disastrous direction is much, much worse than moving at a reasonable pace in the right direction
It takes willpower to say "we aren't in a rush".
If last year, before AI adoption, the task took two weeks, and today, with AI help, it only takes one week, then that means there is an extra week for polish.
It doesn't mean we have to rush half baked code and tech debt out the door in a week.
But why must the scale of work be magnified by so much?
My point is that if I can do things so much faster with an AI agent, don't I have extra time to polish it and make sure it is made right?
Let's say AI makes me 3x faster. Why not plan to be only 2x faster, so I have lots of time to polish?
Scale it down to individual tasks now. If a dev can build something with an agent in an hour, what stops the dev from spending two hours instead of one?
Unless you operate in constant crunch time, it's often a matter of willpower to spend an extra hour of boring work polishing before delivery
My theory is that anything you can build with agents in a week should be rebuildable in another week: in a clean manner, perhaps using another agent, wielded by a more experienced dev, with migrations of old / prototype data to the new implementation
I honestly can't take think pieces about AI data center water usage seriously as long as there are >2000 bright green 18-hole golf courses in arid areas of the US.
Even by conservative estimates, those things consume 10x the water of every data center in the US.
I'm much less scared of technical debt these days now that I've personally seen Opus 4.6 executing some very impressive tech debt reductions: Angular to React migration, major refactors, framework upgrades, etc.
Tech debt has always been a willpower problem, but now it is no longer a time problem
There is tremendous power in sitting with a customer, asking them what improvements they want to see in the app, then trying it out with rapid iteration that does not require SWE time. Most of these experiments get thrown away, but some stick.
But I do agree that PMs shouldn't be vibe coding prod
To be clear I do not try to build the final implementation with vibe coding, and I do not expect engineering to base their final solution off my code.
I do expect that engineering will use my vibe coded solution as a live reference to clarify the written spec, as well as a visual reference for UI
My "vibe code" playground is a place to experiment with what-ifs in a mocked up environment where I can iterate much faster than the engineering team.
I can work with customers to get it feeling right prior to engineering implementing the real version. This saves engineering churn and wasted time
Respectfully I disagree.
I make a lot of usage of vibe coding as part of my product manager work.
Context: I work at a small startup so we wear multiple hats, and I use the vibe coded feature as a hands on demo with customers to validate UI / UX prior to engineer implementation
The infamous question mark emails at Amazon are probably one of the biggest examples of this.
Nothing like a single "?" to set off a flurry of work and investigation lol
In the same category: the white supremacists who protest immigration, then afterward go for a self-congratulatory post-protest meal at an immigrant-owned establishment serving cuisine from the immigrant's home country
Thinking about whoever said that, by accepting a meal from people and then arresting them, ICE agents broke a cultural taboo that was invented by, like, the first humans to set up a tent
I still remember the scary moment a while back when I realized that Claude Code was capable of detecting that I run 1Password, requesting access to the vault and a password from it (which shows a pop-up confirmation prompt that a human must click to approve), then using the password on my behalf
I get tired of saying it, but I'll say it again: the nature of oppression in America is that they workshop it first on Black, Latino, Asian & Native people.
But it is always, in the end, coming for everyone.
ChatGPT has gotten frighteningly good at being the interface for a search engine. The workflow on the left was easy, accurate, and nearly instantaneous.
It took a couple of search queries and the Tools filter (which most people overlook) to replicate this and get Google to find what I was looking for
Yeah I voted for Suozzi. He was the best bad option on a terrible ballot.
I've been keeping my eye on him since. He's consistently catered to MAGAs over the immigrant families and average New York City residents he claims to represent.
This isn't a one-time failure, nor an accidental mistake.
I don't completely buy it. If they only wanted to rig elections it wouldn't be that hard to just run a fake Democratic candidate, get NGP VAN access and grab data themselves.
That takes time though. I fear it's far darker than just rigging an election. It's something they want to do sooner
The Venn diagram of people who don't know who Fred Hampton is or how he died, and people who can't believe conservatives have selective application of 2A, is probably pretty close to a circle
There's a lot of people who are willing to abandon democracy if it means that capitalism stays dominant.
There's a small number of extremely wealthy people who will abandon both democracy and free market capitalism if it means they can keep getting richer
The latter are using the former
Thatβs how every news org should be writing headlines. Thank you!
Pulling a slot machine lever is a single binary operation.
Prompting an LLM is much more similar to operating a claw machine. Yes, you will sometimes grab the wrong thing, or what you tried to grab will slip out of your grasp. But there is room for skill expression, and skill dictates the results
"Working for Palantir" is easy to condemn, but I am increasingly uncomfortable with even being upstream of or inadvertently enabling these types of orgs.
Open source licenses basically lack a "don't be evil" clause, or even anyone to enforce such a clause
The sad thing is that I am becoming increasingly jaded about open source. It used to be about empowering indie devs with things that only large corps had.
Now it's large and evil corps benefiting from stuff that diverse, underpaid, indie communities built. We hand them the tools to do bad things
Bingo.
Does anyone else remember the era when tech workers actually thought that maybe we could make the world a better place?
Now itβs full mask off evil as capitalism has finished consuming the open source and tech nerd communities that used to be cornerstones of what was built.
There is far more outrage from tech leaders over a wealth tax than masked ICE agents terrorizing communities and executing civilians in the streets. Tells you what you need to know about the values of our industry.
OpenAI exec James Dyett calling out the cowardice