
Kato Coaching

@kato-coaching.com

Turning testers into trusted advisors. | Keynote speaker | Coach | Trainer | Free resources and my book: https://t.mtrbio.com/kato-coaching

1,383
Followers
1,018
Following
408
Posts
21.08.2023
Joined

Latest posts by Kato Coaching @kato-coaching.com


I am working on a food tracker for my very specific needs, and Claude just turned into every developer I ever worked with.

13.03.2026 11:02 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Preview
The AI evals field chose a flawed tool and stuck with it - Kato Coaching Session one left me with two things I hadn't resolved. The first was a line the instructor said almost in passing: "the hard part is scalability, not automation." I wrote it down because it piqued something, but I couldn't quite work out what problem it was pointing at. The second was a question I kept […]

"The hard part is scalability, not automation." That line from session one of "AI evals and analytics" confused me. Session two explained it.
Full write-up in my blog: https://kato-coaching.com/the-ai-evals-field-chose-a-flawed-tool-and-stuck-with-it/

#AIEvals #SoftwareTesting #QA

10.03.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Updated my website this week: it should finally be clear what I do and how to work with me. Courses, workshops, and 1:1 coaching for QA professionals.

kato-coaching.com

06.03.2026 12:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
The AI Use Case Scorecard (Free) Most AI output problems start before you open the tool. This free scorecard helps you check before you spend hours correcting output.

If the output is slop regardless of how you phrase it, the problem isn't the prompt. It's the use case.

Free scorecard:

05.03.2026 11:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Before I wrote today's post, I defined what good looked like: gives value, doesn't rely on outrage, sounds like me. Writing to a clear brief changes the experience. So does diagnosing a draft. Testers do this before they run anything.

04.03.2026 11:29 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The AI correction loop usually starts before the tool is opened, at the moment someone chose the wrong use case for it. Testers already know how to ask whether a tool suits a problem. That skill just hasn't been applied here yet. More soon.

03.03.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The skills QA professionals already have (defining success criteria, testing behaviour, not trusting metrics at face value) are exactly what's missing from most AI integrations. I'm learning AI evals to understand why.
Session one: kato-coaching.com/what-i-dont-understand-about-ai-evals-yet/

27.02.2026 11:02 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Ran my workshop "Deciding Fast" on Tuesday with a software team in Sweden. Everyone in one room sharing computers, no breakouts. It's built for remote, so I adapted. 6 of 8 rated it Good or Excellent. Best response to "what will you do differently?": "Set clearer success condition."

#SoftwareTesting #AITesting

26.02.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Anthropic Education Report: The AI Fluency Index Anthropic's AI Fluency Index measures 11 observable behaviors across thousands of Claude.ai conversations to understand how people develop AI collaboration skills.

"Tell me what you're uncertain about." "Push back if my assumptions are wrong." Only 30% of people give instructions like these. The model defaults to confident and agreeable.Β 

https://www.anthropic.com/research/AI-fluency-index

#SoftwareTesting #AITesting #AILiteracy

25.02.2026 11:04 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The strongest predictor of AI fluency, per Anthropic's research: iteration. Treating the first response as a draft, not an answer. 5.6x more likely to question reasoning. Familiar territory if you work in testing.Β 
https://www.anthropic.com/research/AI-fluency-index

#SoftwareTesting #AITesting

24.02.2026 11:04 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Sounds really interesting! I hope you can share outside of that conference presentation; I'd love to hear more when you have it.

20.02.2026 21:07 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

That's a great approach. I presume you don't want to spoil your punchline by telling us how it's going?

19.02.2026 18:18 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Six months in and nobody can say whether the AI is actually working. Not anecdotally, but with evidence. That gap is the most common thing I see in QA teams right now.
How do you measure if the new licence is worth the money?

#softwaretesting #QA #AItools

19.02.2026 11:03 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

18% of testers I surveyed said their top AI frustration is not bad output. It is that the tools have no sense of test strategy.
The AI is not wrong. It is indiscriminate.

#AITesting #SoftwareTesting #TestStrategy

18.02.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I timed myself running tests manually, then gave the same task to an AI tool and timed that too.

The result was not what I expected.

Full breakdown with video later this week.

17.02.2026 11:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Preview
You were promised speed. You got a new job instead. - Kato Coaching You know the feature inside out. You have tested it, broken it, rebuilt the test suite around it twice. So you ask the AI to generate a few test cases. Save yourself twenty minutes. What comes back looks reasonable. The structure is right. The naming conventions are close enough. Then you start reading. The first […]

You ask the AI to generate a few test cases for a feature you know inside out. Save yourself twenty minutes.
What comes back looks reasonable. Then you start reading.
Read the full story in my blog:

13.02.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
5 AI Testing Experiments You Can Run This Week (Free Guide) 65% of testers say AI output is unreliable. This free guide gives you 5 small experiments to figure out where AI actually helps your testing workflow.

Why do so many testers call AI "exhausting"?

It is not the learning curve. It is the correction loop. You spend 45 minutes fixing output that was supposed to save you an hour.

Five experiments to figure out which tasks AI actually improves:

12.02.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I asked testers what worried them most about AI. 24% said management expectations, not the tools.

"Pressure to deliver has increased dramatically because management thinks we must be twice as productive now."
Argue with evidence:

11.02.2026 11:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I surveyed 17 testers about their biggest AI frustrations. 65% said the same thing: the output is unreliable.
One called it "slop." Another described the correction loop as exhausting.
I wrote a free guide: 5 small experiments

10.02.2026 11:14 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Why the bottleneck was never just about thinking - Kato Coaching I've been seeing a lot of posts on LinkedIn recently that make the same claim: speed of writing code was never the bottleneck in software engineering. The real bottleneck was always thinking, choosing the right thing to build, understanding the problem. AI has just made that painfully obvious. I partly agree, but the framing is […]

I have blogged about why I think the rise of AI tooling is not the end of the world that many in the testing community fear.

06.02.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Quick question about AI and testing We're updating the QED live course that runs in March, and I need your help. I've been watching teams struggle with the same pattern: they adopt AI test generation tools, but end up with more noise than signal. Before we finalise the updates, I have two quick questions for you. Takes less than 2 minutes. Everyone who completes this survey will receive "5 AI Testing Experiments You Can Run This Week" - practical QED experiments you can start tomorrow.

AI can generate 500 test cases in an hour.
But if you don't know what decision you're supporting, you're just generating noise faster.

I'm updating the QED course on this. Quick survey if you've worked with AI testing tools:

05.02.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Be Heard. Influence Quality Decisions. Free ebook: Turn your testing insights into business impact with the Q.E.D. Framework.

AI tools won't fix your testing problems if you're solving the wrong problems.
The QED framework: start with what's breaking, who feels it, and what it costs. Then run a two-week experiment.
Works whether you're writing tests by hand or using AI.

04.02.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Decision-making for AI-generated test cases. Free eBook that helps you make better decisions when using AI for test design.

AI generates test cases fast. Most of them are useless.
Here's how to filter before you generate:

29.01.2026 11:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The AI review debate feels similar to early reactions to spellcheck.

Tools that scan for specific issues at scale can raise the baseline, as long as the rules are clear and judgement stays with the human.

28.01.2026 11:02 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

As a German living abroad, with many friends all over the world and specifically in the U.S., this summarises well how I feel about the current situation.

28.01.2026 10:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Preview
AI did not break testing - Kato Coaching Sometimes I despair about the QA profession. Not because the work is hard, or because quality is complex. Both of those are true, but they always have been. What gets to me is where so much of our collective energy still goes. We have much bigger fish to fry. We have the tools to do […]

AI did not break testing. We were already stuck.

Watching QA argue about titles while AI becomes the new panic topic feels familiar.
We already know how to deal with uncertainty and risk. We just keep choosing not to apply it.

23.01.2026 11:02 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The "AI reviewing AI is marking its own homework" argument is based on an incorrect analogy.
Most poor AI output starts with vague prompts. Reviewing with clear, narrow criteria can be useful, because AI is good at scanning volume when the rules are explicit.
Do you let AI review AI?

22.01.2026 11:02 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The problem with many AI-written tests is not that they are "wrong".
It's that they are not all that useful.

When teams are told to "just use AI", they often skip the thinking step that normally shapes good test design.

21.01.2026 11:06 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Stop vibe coding before you delete your repo - Kato Coaching That day I rage quit my project. It finally fell apart over a UI reskin. I had an application that mostly worked. It was not polished, but it was coherent, testable, and good enough to keep moving. When I asked the AI to reskin it using my brand colours, what I expected to be a […]

I rage quit a project after an AI "simple UI reskin" rewrote half my system.
The issue wasn't the model. It was vibe coding without constraints. Fast output, eroded structure, invisible scope creep.
But I didn't ditch the AI.

16.01.2026 11:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

AI can generate test cases fast, which is why judgement matters more, not less.
If you can't name the decision, the uncertainty, or the constraints, AI will give you volume instead of confidence.
How do you tell activity from evidence in your team?

14.01.2026 11:02 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0