I've seen a lot of vibing through goal setting as well.
The weirdest thing I learned from friends this week is that some companies have leaderboards related to AI usage to celebrate the people using the most tokens (racking up GPU, power & water usage).
Sounds as absurd as a leaderboard of who's expensing the most Uber rides on their corporate card.
This woman at my job kept telling me I was too young to know the print world and dumb phones. She was shocked when I told her how old I was. Can never tell if it's a compliment or an insult when someone keeps saying you (I mean I) look younger.
How malicious AI swarms can threaten democracy. The fusion of agentic AI and LLMs marks a new frontier in information warfare. www.science.org/doi/10.1126/...
That moment when you're tired of AI and on the verge of taking a non-AI role, but know that if you just wrangle the cats and get a governed agentic tech stack going in your org and get a buttery smooth version in the hands of your users, you'll have one of the hardest wins in this AI wave.
This report in Nature on the costs of competing for & administering scientific grants is shocking: "In other words, European taxpayers will have spent more on the funding process than on the funding itself, and the scientific ecosystem has been drained." www.nature.com/articles/d41... 🧪
In some orgs, relationships are used as a gatekeeping differentiator among peers. When someone tells u to go to so & so cuz "they have the relationships," take note of the names, and build those relationships yrself. It pays off, esp when you need a straight-line path to data for your agentic AI products.
💯 Can't teach working through complexities. We can all "think through" and "theorize" the scenarios, especially with GenAI's help now, but complex environments/systems are not stable and all the micro decision-making just can't be simulated.
Would you hire them? If so, what would you do to help them up to the next "level"?
Worth a watch:
Head of Signal, Meredith Whittaker, on so-called "agentic AI" and the difference between how it's described in the marketing and what access and control it would actually require to work as advertised.
Is everyone else also working through an AI "Wild West" at your company? Building, but still wrangling with platform and tooling because there isn't a stack the tech org has endorsed? (I'm in a regulated industry)
In the last decade, a lot of design managers were hired to manage design systems.
Yes, agree. Just the logistics of securing space and setting up also…
High-risk, high-reward problem, still needs buy-in
vs.
low-risk, low-reward problem with exec buy-in
Leaders are trained to be opinionated w/ a strong POV. In AI dev, a technical leader unwilling to experiment w/ diff approaches & implementations runs the risk of "subpar opinions strongly held". For lean teams, it's important to have > 1 expert, self-taught/trained, for technical decision making.
I'm feeling you this morning, Ha. Not long ago I had to fly home to Asia as my dad almost didn't make it. I hope you'll be able to take the time you need to process and ultimately celebrate his life and the pieces of him he left with you. Take good care.
I am sorry for your loss, Ha. Keeping you in my thoughts.
Agree. I've noticed MVP and scaling are expected to happen concurrently in some places, especially when execs are pressured to show some progress towards an AI product/proof. The exercise then becomes: sure, let's run faster, but which parts can you skimp on/automate, and which remain critical.
It may seem to run contrary to the talent-stack-collapse narrative, but this simply surfaces the perennial importance of org design if you want to run ever faster.
Building AI products in an enterprise setting will further magnify all the organizational issues you/your leaders thought you/they could put off: role ambiguity, unclear decision rights, every step of the way.
Orgs will no longer have the "luxury" of not incorporating UXR early & often in AI product dev/value creation. It'll take the same practitioners getting burnt through a few dev cycles and them convincing jira factories, who typically use UXR to validate solutions to be incorporated in critical stages. IOW, probably same old cycle 🤦🏻‍♀️
Airline tiers and add-ons
I have a strong feeling generative AI is going to end up like crypto. Ethically questionable, solves a narrow set of problems, lots of broken promises that don't live up to the hype, and yet a bunch of people get incredibly rich along the way.
In my experience, boiling the ocean is usually precipitated by some inexperienced PM or leader wanting some kind of wins they can tell a big story about, quickly.
And with the "fewer people" that remain, the bar is "high" in that you have to have expertise but also be a generalist. The output, esp GenAI output, will be mediocre or good enough; it's the beginning (questions, direction, etc.), the end (finesse, expert eyes, etc.) and the system context that REALLY matter.
What defines a next level PM, in addition to questions asked?
Theyβre incentivized differently in the workplace.
What if your work doesn't involve Figma?
#1 happens with or without AI-supported prototyping, in enterprise. The consumer space, absolutely.