Happy New Year! Some thoughts on why AI will accelerate the need for platform products
open.substack.com/pub/aefidler...
Yeah, great point. I was using Claude. ChatGPT seemed worse for this. Which model do you usually use?
Thanks - appreciate your perspective
Thanks - good to know
Do you use both or did you just switch to windsurf? I was curious about why to have both
Are you finding that Windsurf is adding additional value over Cursor? What are you using each for?
LLMs can be really useful at combining qualitative and quantitative analysis (when looking at customer data, for example), but I find you trust its numbers at your peril.
How to reliably get LLMs to count correctly?
1. Establish methodology: Ask what it's counting and how. Have it apply that method. Manually confirm.
2. Try different data: Once you think the method is right, provide different data and have it double check.
3. Spot check: Finally confirm accuracy.
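The spot-check step can be partly automated. A minimal sketch, assuming you paste the LLM's reported counts into a dict and tally the raw data yourself (the `records` data, field names, and `spot_check` helper are all hypothetical, purely for illustration):

```python
from collections import Counter

def spot_check(records, field, llm_counts):
    """Compare an LLM's category counts against a direct tally.

    records: list of dicts (the raw data the LLM was asked to count)
    field: the key whose values the LLM was counting
    llm_counts: {value: count} as reported by the LLM
    Returns {value: (llm_count, actual_count)} for every mismatch.
    """
    actual = Counter(r[field] for r in records)
    mismatches = {}
    for value in set(actual) | set(llm_counts):
        if llm_counts.get(value, 0) != actual.get(value, 0):
            mismatches[value] = (llm_counts.get(value, 0), actual.get(value, 0))
    return mismatches

# Example: the LLM over-counted "billing" tickets
records = [{"topic": "billing"}, {"topic": "billing"}, {"topic": "login"}]
print(spot_check(records, "topic", {"billing": 3, "login": 1}))
# {'billing': (3, 2)}
```

An empty result doesn't prove the method is right, but any mismatch is an immediate signal to go back to step 1.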
Taking time to dig into the data is always worth it.
I just spent 2 days with a bunch of customer data, and the takeaways led to different and more specific solutions than what we had been considering.
Also, Claude is a huge help with this kind of analysis.
So, just carve out the time and do it.
So hard to do though. Just did this myself and got a lot of pushback because of the time and $$ - still think it's the right thing to do, but if I wasn't really committed, I probably wouldn't have.
Our team has been looking into how to measure the effectiveness of the #AI tools in our SDLC. We've settled on these for now:
1. Cycle time changes in Jira
2. Refined story points for a month
3. A detailed SDLC survey
I'd be curious how others are measuring #AIDevelopment practices in your #SDLC
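For item 1, the cycle-time comparison can be done on a plain Jira export. A rough sketch, assuming you can pull start/done dates per issue and pick a rollout date (the `issues` data and field names are hypothetical, the numbers made up):

```python
from datetime import date
from statistics import median

# Hypothetical Jira export: one dict per issue with start/done dates.
issues = [
    {"started": date(2024, 10, 2), "done": date(2024, 10, 9)},
    {"started": date(2024, 10, 20), "done": date(2024, 10, 30)},
    {"started": date(2024, 12, 1), "done": date(2024, 12, 5)},
    {"started": date(2024, 12, 10), "done": date(2024, 12, 13)},
]
ROLLOUT = date(2024, 11, 15)  # when the AI tooling was adopted

def median_cycle_days(batch):
    return median((i["done"] - i["started"]).days for i in batch)

before = median_cycle_days([i for i in issues if i["done"] < ROLLOUT])
after = median_cycle_days([i for i in issues if i["done"] >= ROLLOUT])
print(f"median cycle time: {before:.1f}d before vs {after:.1f}d after")
# median cycle time: 8.5d before vs 3.5d after
```

Median rather than mean keeps one stuck ticket from swamping the comparison; it still won't control for scope changes, which is why the survey matters too.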
What's your approach to getting #claude to write like you?
I always tell it to be "laid back and brief" - I also have a project with a bunch of writing samples, which works great in combination with this prompt.
It seems like the new "concise" style is still pretty verbose...
For me, it's based on team needs - either when it seems like someone on the team has an area where they would benefit, or when they request. I like practical courses with lots of real-world examples, since it tends to be hard to generalize about product. Looking forward to seeing the course!
I wrote some thoughts down recently on getting up to speed as head of #product at a new company. My process is all about building alignment around clear goals and then starting the drumbeat so we execute together. I'd love to hear how others think about this.
www.ashleyfidler.com/p/ramping-up...
Any favorite pieces on software testing?
I've been reading this one by @copyconstruct.bsky.social today. Lots of food for thought.
copyconstruct.medium.com/testing-micr...
If we were larger, I would be comparing our refined story points and monthly feature completion before and after, but we're not particularly rigorous about pointing. Anyway, I'm collecting articles, experiments, and thoughts on this topic.
How are people measuring the efficacy of AI for software? Especially with small to medium-sized teams?
Anecdotally, our devs are reporting about a 30% productivity gain with Cursor + home-grown tools, and it shows up as a perceived uptick in velocity. We'd like to be more quantitative though...