The people vibe coding the feature are FAANG engineers making ~$500K so their job is to make it production grade and support it in production.
So yes, you were missing some context.
If you believe this then it's just a sign you don't work in tech.
One topic I'm noodling on is how product management evolves when execution is essentially "free".
When vibe coding the feature takes less time than writing and getting alignment on the PRD, is that OK or does something need to change?
While I've been critical of PMs vibe coding deprioritized features, I heartily endorse certain uses.
A PM on my team now builds features in a custom version of our app on their phone before talking to the team. This beats PRDs, Figma mocks and prototypes.
I definitely felt I needed to up my game.
Why are the only options compensation going up or staying flat? We could also make significantly more software for less pay if tech skills are commoditized.
If writing Node.js or SQL code just requires asking Claude to do it, why is the prompter getting paid six figures? That's the question.
As a tech worker, it's now a form of denial bordering on malpractice to think AI can't do a decent chunk of what you currently get paid to do.
Even if you don't believe it, the people who sign your paycheck do.
Jack Dorsey opened the floodgates; tech CEOs admit AI will cause job displacement. The theory that a senior with AI outperforms multiple juniors is now reality.
Question is how fast will displacement happen and how will humans upskill when Claude codes, Figma Make designs & Deep Research analyzes?
This has been inevitable since Oracle signed the deal with OpenAI for $300B of AI services over 5 years.
Oracle needs to build out data centers to handle the capacity, but banks don't think OpenAI will follow through, so getting loans has been hard.
Massive cost cutting is the inevitable plan B.
Kristi Lynn Arnold Noem has been fired? How shocking. KLAN is gone.
Realized I now self-censor when writing about AI productivity using this heuristic:
1. General thoughts on AI productivity - post here
2. Detailed examples of how I use AI at work - mostly internal at work to avoid "you're automating yourself out of a job" style replies
3. We're cooked - group chats
So this explains why Ben Affleck had such great thoughts about AI and filmmaking. He founded a startup that filmmakers can use to leverage AI in the postproduction process to do things like mix and color, relight shots, and add visual effects.
It just got acquired by Netflix.
PMs trying to ship features just because they can vibe code them is a sign of PMs who are poor at prioritization and have no idea what their job actually is.
So I agree with the post. Vibe coding makes sense for prototyping and better showing your ideas, but beyond that it's a distraction.
This is one of the things that has occurred to me. I was giving detailed feedback on some effort at work and realized it was so cheap to build that if it turned out not to be valuable, it wouldn't matter.
There is a significant reset AI has created.
Repeat after me; βabolish ICEβ is the mainstream position.
I've gone from being unsure of the impact that AI tools can have on product management to being asked to give talks on AI native product management at work.
There are a lot of transformative aspects of AI on the art of PM but also a lot of pitfalls and potential disappointments.
The question is what comes next?
For me personally, this is the first time in my career where I not only have a clear idea what startup I'd build but also that I could build it without needing to do the loathsome Silicon Valley fundraising dance.
The more time I spend with AI tools, the more I feel they will decimate the current version of the tech labor market. Meaning ~10% of roles significantly disrupted.
A lot of tech skills considered valuable in 2021 are now just what computers do today.
The more I learn about AI tools and the companies behind them, the more I've gone from being skeptical about most things Anthropic's CEO says to agreeing with almost everything he says.
I was forced to watch TV news when I had breakfast in a diner recently and I strongly resonated with the sentiment that being an intelligent person in America today is like being awake during a surgery.
Understanding cause and effect (if I hit the snooze button then Iβll get more sleep) and second order thinking (but then Iβll be late for work) are the basics of being an intelligent person.
Unfortunately we decided to elect the dumbest and most spiteful people we could find to the highest offices.
I made a comment about this a few days ago and the reactions on here were strangely negative.
This is obviously what he is doing and people are dying to stop people talking about his sex crimes.
I see people saying AI doesn't have taste and that will be a moat for humans as AI agents take over a lot of knowledge worker tasks like writing documents or code.
I think that framing is close but not quite it. Today's AI agents lack judgement. Taste is a form of judgement, but not the only one.
A data scientist at Cash App quit her job after Block laid off 40% of her coworkers and offered those remaining retention packages equivalent to a 75% pay increase.
She said it's dystopian to be forced to use AI tools that hasten the disappearance of the jobs we depend on for our livelihood.
Epic losing the App Store case against Apple but winning against Google which is technically more open is a great lesson in setting expectations.
Apple never set the expectation it was open while Google claimed to be but actually wasnβt in practice.
Trump's approval rating is so bad even racist uncles are having second thoughts.
Completely accurate assessment. No notes. 10/10.
She literally called the top and itβs been downhill ever since.
The VP of Research for post-training at OpenAI is going to Anthropic.
There is now a trickle of both users and employees choosing Anthropic over OpenAI. Sam Altman needs to stop this from becoming a flood.
It's almost comical how transparent OpenAI is being about the play: let the government use ChatGPT to spy on Americans and kill people, then claim the government broke its promise when it happens.