𝗣𝗼𝘀𝘁 𝟱 The organisations that thrive use AI to build more capable, more confident, more thoughtful engineers who happen to be more productive.
That's amplification worth aiming for. Understanding, judgment, and accountability remain distinctly human.
07.03.2026 16:01
𝗣𝗼𝘀𝘁 𝟰 What to measure instead:
Code review feedback patterns on AI-assisted PRs. Maintenance burden six months after shipping. Engineer growth trajectories. Knowledge distribution across teams. Technical debt trends.
These tell you if you're building capability or dependency.
07.03.2026 16:01
𝗣𝗼𝘀𝘁 𝟯 What actually matters:
Are engineers developing new skills and deeper understanding, or becoming more dependent over time? Can they handle increasingly complex tasks independently? Is knowledge spreading across the team? Is technical debt manageable?
07.03.2026 16:01
𝗣𝗼𝘀𝘁 𝟮 The temptation with AI tools: measure output. Lines of code written. Features shipped. Velocity increased. These are easy to quantify and look impressive on dashboards.
But they miss the point entirely. Output without understanding is technical debt waiting to happen.
07.03.2026 16:01
𝗣𝗼𝘀𝘁 𝟭 Saturday reflection: measure capability growth, not just velocity.
From Part 2 of the AI Amplification Paradox: "The question shouldn't be 'are we shipping faster?' It should be 'are we building more capable engineers who happen to ship faster?'" 🧵
07.03.2026 16:01
My mood right now, as supplied by a song by my friend Jim: 'I Can't Be Arsed' by Mr.B The Gentleman Rhymer
www.youtube.com/watch?v=oDh4...
06.03.2026 21:39
𝗣𝗼𝘀𝘁 𝟱:
This model demonstrates how small businesses can have outsized impact on regional talent development through sustained commitment to sharing expertise.
Students don't need to look beyond the region for world-class opportunities.
Read: members.wnychamber.co.uk/magazine/win...
06.03.2026 19:30
𝗣𝗼𝘀𝘁 𝟰:
The impact: Students learn teamwork, problem-solving, and communication skills alongside technical capabilities.
They see what innovative work looks like in practice, not just theory. They realise innovative, internationally-recognised work happens in Yorkshire.
06.03.2026 19:30
𝗣𝗼𝘀𝘁 𝟯:
Our solution: Since 2022, sustained partnerships with Yorkshire colleges. Not one-off careers talks, but ongoing collaboration providing placements and delivering sessions on technologies colleges cannot cover.
Students gain hands-on experience with industry-standard tools.
06.03.2026 19:30
𝗣𝗼𝘀𝘁 𝟮:
The challenge: Traditional education struggles to keep pace with rapid technological advancement. AI tools and modern development practices can't find space in curricula. Students graduate with knowledge gaps that employers must fill.
This costs the region talent.
06.03.2026 19:30
𝗣𝗼𝘀𝘁 𝟭:
Keeping Yorkshire's tech talent local whilst providing global-standard education.
Featured in the Winter 2026 West and North Yorkshire Chamber of Commerce magazine: how educational partnerships address a critical challenge for Yorkshire's economy 🧵
06.03.2026 19:30
𝗣𝗼𝘀𝘁 𝟱 Key insight: organisations that thrive won't be those generating the most AI-assisted code. They'll be those using AI to build more capable engineers who happen to be more productive.
Listen: dotnetcore.show/season-8/the...
Read: rjj-software.co.uk/blog/the-ai-...
06.03.2026 18:30
𝗣𝗼𝘀𝘁 𝟰 AI Amplification Paradox Part 2:
Practical frameworks for ensuring AI amplifies capability rather than erodes it. Warning signs of over-reliance versus productive use patterns. Four-level verification framework. Creating psychological safety around AI usage.
06.03.2026 18:30
𝗣𝗼𝘀𝘁 𝟯 The philosophy: fix the paper cuts that make developers pause and search for solutions. Small frustrations don't break systems, but they drain productivity. Removing them systematically compounds into significant developer experience improvements.
STS support has now been extended to 24 months.
06.03.2026 18:30
𝗣𝗼𝘀𝘁 𝟮 .NET 10 with Mark J Price:
Microsoft's systematic approach to removing developer friction. Extension members (15 years in development), file-based apps eliminating project files, automatic model validation via source generators, WebApplicationFactory improvements.
06.03.2026 18:30
𝗣𝗼𝘀𝘁 𝟭 Weekend learning: .NET 10's developer experience improvements and building AI-responsible teams 🎯
This week covered both technical advances and frameworks for using AI tools responsibly in engineering organisations. Two pieces worth your time 🧵
06.03.2026 18:30
Full framework: rjj-software.co.uk/blog/the-ai-...
05.03.2026 21:01
𝗣𝗼𝘀𝘁 𝟱 Level 4: What could go wrong?
Security implications? Performance at scale? What happens when assumptions are violated? How does it interact with other systems?
This is expert-level verification requiring deep contextual understanding.
AI can't reliably perform this analysis.
05.03.2026 21:01
𝗣𝗼𝘀𝘁 𝟰 Level 3: Why does it work this way?
What design decisions were made? What alternatives were considered? What assumptions does the code make? What are its limitations?
This is the level at which you can truly maintain and extend code. Where you catch architectural issues.
05.03.2026 21:01
𝗣𝗼𝘀𝘁 𝟯 Level 2: How does it work?
Can you trace the logic from inputs to outputs? Do you understand each function's purpose and how they interact? Could you explain this to a colleague without referencing the AI?
This is where understanding begins. Can't pass Level 2? Don't ship.
05.03.2026 21:01
𝗣𝗼𝘀𝘁 𝟮 Level 1: Does it work?
Does the code compile? Do tests pass? Does it fulfil requirements?
This is the minimum bar. Unfortunately, it's also where many people stop. It only tells you the code works today, in the specific scenarios you've tested. Not enough.
05.03.2026 21:00
𝗣𝗼𝘀𝘁 𝟭 The verification framework every team using AI should adopt.
"Trust, but verify" sounds simple. But what does verification look like for AI-generated code? Here's the practical framework from Part 2 of the AI Amplification Paradox 🧵
05.03.2026 21:00
𝗣𝗼𝘀𝘁 𝟱 The distinction that matters: appropriate heavy AI use (boilerplate, scaffolding, documentation) versus problematic heavy use (core business logic, security-critical code, architectural decisions).
Build understanding alongside output.
From Part 2: rjj-software.co.uk/blog/the-ai-...
04.03.2026 20:00
𝗣𝗼𝘀𝘁 𝟰 What teams should openly discuss:
When they use AI tools. How they verify AI-generated code. What prompts work well. When they choose not to use AI. How they caught problematic AI suggestions. What they're still learning.
Transparency enables collective learning.
04.03.2026 20:00
𝗣𝗼𝘀𝘁 𝟯 Senior engineers set the tone:
Being transparent about their own AI usage. Sharing examples where AI led them astray. Asking genuine questions about code rather than making assumptions. Praising good verification practices. Offering to pair with engineers learning these tools.
04.03.2026 20:00
𝗣𝗼𝘀𝘁 𝟮 The goal isn't to shame people. The goal is creating environments where:
💭 It's safe to say "I don't understand this"
💭 Using AI is openly discussed
💭 Questions are encouraged
💭 Mistakes are learning opportunities
Neither shaming nor hiding helps anyone improve.
04.03.2026 20:00
𝗣𝗼𝘀𝘁 𝟭 Creating psychological safety around AI usage: why transparency matters more than perfection.
If engineers feel looked down on for using AI tools, they'll hide it. If they fear admitting they don't understand something, they'll ship code they can't maintain 🧵
04.03.2026 20:00