Johan Falk

@magisterfalk

Working with AI strategy and public education. Learning more every day. Writer, teacher, presenter, thinker, learner.

125
Followers
36
Following
159
Posts
19.11.2024
Joined

Latest posts by Johan Falk @magisterfalk

Great summary and analysis of the Anthropic/DoW conflict This episode (from March 11th) gives an updated summary and analysis of the "supply chain risk" thing. Great insights, and some information on what Anthropic actually does for the DoW as well.

I'm following the conflict between the Pentagon and Anthropic closely. This is a great, up-to-date summary and analysis of the conflict, which also covers what Anthropic actually enables for the Pentagon.

aipodcastpicks.substack.com/p/great-summ...

11.03.2026 22:11 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
AI for beginners – 18-month update This blog post complements my book "AI for beginners", describing some ways the AI landscape has changed since the book was published. Still on top of the list: Understanding the pace of progress.

Summary of what's happened in the AI world the last 18 months: falkai.substack.com/p/ai-for-beg...

10.03.2026 10:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Scaling Laws x AI Summer: Who Controls the Machine God? Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law…

One of the most fascinating interviews I've heard on the DoW–Anthropic conflict, and I've heard a few now. Dean Ball doesn't hold back his language.

"Is it ok to swear on this podcast?"
"Hell, yeah."

pca.st/episode/173c...

#anthropic #DoW #DeanBall #ScalingLaws

06.03.2026 15:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Absolutely great conversation with Geoffrey Hinton After a slow start, this conversation offers great explanations of the state of AI and of AI risks, as well as some good laughs. Skip the first 34 minutes if you're eager.

This conversation with Geoffrey Hinton is absolutely wonderful. They cover important, at times really heavy, subjects, yet it still made me laugh out loud repeatedly. Pedagogical, too!

aipodcastpicks.substack.com/p/absolutely...

03.03.2026 18:28 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
LLMs are world models They just model a different world than you think.

Unpopular opinion: LLMs are world models.

Full argument here. falkai.substack.com/p/llms-are-w...

03.03.2026 12:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Great summary of the dispute between Anthropic and the Department of War The Department of War is threatening really harsh measures if Anthropic doesn't agree to the DoW using its AI models for anything legal. Anthropic wants to keep two use cases restricted.

Best overview I've seen of the row between Anthropic and the Department of War.

aipodcastpicks.substack.com/p/great-summ...

25.02.2026 22:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I clearly feel that Opus 4.6 is worse at Swedish than 4.5. (Not its conclusions when working in Swedish, just its use of the language.)

Anyone else with a similar experience?

#opus #anthropic

08.02.2026 21:18 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Why watermarking deepfakes is useless, and what we should do instead The debate over deepfakes is misguided. We need ways of knowing the source of content, not what tools were used to create it.

Watermarks are useless.

Digital signatures are great.

So obvious when you think about it. Read more here: falkai.substack.com/p/why-waterm...
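The signatures-over-watermarks point can be made concrete with a toy sketch. Everything below is my own illustration, not from the linked post: it uses a textbook toy RSA key (n=3233, e=17, d=2753), which is utterly insecure. A real provenance system would use a vetted library and a modern scheme such as Ed25519. The idea it shows is the one argued above: the creator signs the content's hash with a private key, and anyone can verify the source with the public key, regardless of which tools made the content.

```python
import hashlib

N, E, D = 3233, 17, 2753  # toy public modulus, public exponent, private exponent

def _digest(content: bytes) -> int:
    # Reduce a SHA-256 hash into the toy key's range (fine for illustration only).
    return int.from_bytes(hashlib.sha256(content).digest(), "big") % N

def sign(content: bytes) -> int:
    # The creator signs the content hash with the private key...
    return pow(_digest(content), D, N)

def verify(content: bytes, signature: int) -> bool:
    # ...and anyone can check the claimed source using only the public key (E, N).
    return pow(signature, E, N) == _digest(content)
```

Note that verification says who vouches for the content, not how it was produced, which is exactly why this works where watermarking fails.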

06.02.2026 18:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Using scenarios to forecast radical uncertainty In this second post about how AI affects knowledge and education, I argue that scenarios are necessary for forecasting AI impacts. I also suggest a method for selecting scenarios, and implement it.

My second blog post in a series about how AI affects knowledge, knowledge work and education. This one focuses on the necessity of using scenarios, and how to select them. 12 scenarios are consolidated into 3, for analysis in upcoming posts.

falkai.substack.com/p/using-scen...

03.02.2026 21:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Time Horizon 1.1 We’re releasing a new version of our time horizon estimates (TH1.1), using more tasks and a new eval infrastructure.

Found it! They've extended the test suite! Go METR.

More info here: metr.org/blog/2026-1-...

30.01.2026 17:37 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Measuring AI Ability to Complete Long Tasks We propose measuring AI performance in terms of the *length* of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doub...

What just happened on the METR chart for AI ability to complete long tasks?

Claude Opus 4.5 jumped from under five hours (at 50 percent success rate) to 5 h 20 min. Also, GPT-5.1-codex-max was dropped.

metr.org/blog/2025-03...

#METR #opus #ai
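The trend behind the METR chart can be sketched in a few lines: if the task-length horizon doubles every fixed number of months, it grows exponentially. The 7-month doubling time and the 5 h 20 min starting point below are illustrative assumptions on my part, not figures from the chart or the TH1.1 release.

```python
# Sketch of the exponential trend a METR-style chart tracks: a horizon that
# doubles every `doubling_months` months grows as start * 2**(t / doubling).
# Both default values are assumptions for illustration only.

def horizon_hours(months_ahead: float, start_hours: float = 5.33,
                  doubling_months: float = 7.0) -> float:
    """Projected 50%-success time horizon, in hours, t months from now."""
    return start_hours * 2 ** (months_ahead / doubling_months)
```

Under these assumptions, two doubling periods (14 months) would take the horizon from about 5.3 hours to about 21.3 hours.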

30.01.2026 14:59 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image
18.01.2026 20:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Why AI governance as a path to AI safety might be a waste of time Peter Sparber, a former architect of Big Tobacco’s successful war against regulation, says that the big AI companies are using the same playbook as Big Tobacco did. And that it works.

aipodcastpicks.substack.com/p/why-ai-gov...

18.01.2026 13:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I just saw an example of Claude Code working with models not from Anthropic, via a proxy server that routes API calls elsewhere. Pretty smart.

Anyone else seen something like that?
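The trick described above boils down to a routing rule: Claude Code is pointed at a local proxy, and the proxy forwards each request to a backend chosen by model name. The sketch below is purely hypothetical on my part (the route table, prefixes, URLs, and the `pick_upstream` name are my own illustration, not the setup from the post).

```python
# Hypothetical routing rule for such a proxy: choose an upstream API base URL
# by model-name prefix. Table contents are illustrative assumptions only.
ROUTES = {
    "claude-": "https://api.anthropic.com",  # Anthropic models pass through
    "gpt-": "https://api.openai.com",        # other models are rerouted
}

def pick_upstream(model: str, default: str = "http://localhost:11434") -> str:
    """Return the base URL the proxy should forward this model's requests to."""
    for prefix, base in ROUTES.items():
        if model.startswith(prefix):
            return base
    return default  # e.g. a locally served model as a fallback
```

The proxy itself would then just replay the incoming HTTP request against `pick_upstream(model)` and stream the response back.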

17.01.2026 22:18 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I can't help thinking that Claude Cowork is in some sense a step in the wrong direction for Anthropic. It's good, but a strength of Anthropic is its straight shot at coding, ignoring most productification. If Claude Cowork becomes as successful as Claude Code, that might change.

#ClaudeCowork

14.01.2026 09:00 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

YES. More pages with this feature, please.

13.01.2026 14:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
The EU AI Act: What it really regulates (and what it doesn't) Next time someone claims the EU AI Act kills innovation, ask them what it actually says. Most haven't read it. Here's what it really contains. I argue that it is a base for innovating responsible AI.

Statements like "The AI Act is killing AI innovation" made me frustrated enough to summarize what the AI Act actually says. It's reasonable, and not nearly as harsh as people think. Having a single regulation is something the US should envy.

falkai.substack.com/p/the-eu-ai-...

12.01.2026 14:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
bad hairday – YouTube video by Anders Bjarby

One of a thousand ideas from the mind of Anders Bjarby, who is possibly the biggest AI explorer in Sweden. (Yes, the Lovable crew included.)

www.youtube.com/watch?v=Y2cR...

#badhairday #randomidea #justdoit #experimentandlearn

08.01.2026 06:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Yoshua Bengio on AI risks One of the "godfathers" of AI talks about how he views AI risks – both those he sees as important and urgent, and those he doesn't think are as urgent.

Great interview with Yoshua Bengio. Personal, factual and human. How should we balance the risks and benefits of AI, and how can we keep development at a sustainable speed?

aipodcastpicks.substack.com/p/yoshua-ben...

06.01.2026 22:21 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
LLMs as Trusted Mediators – A Path Beyond Coordination Problems? Using AI and cryptography to make cooperation rationalβ€”even when trust is impossible.

Maybe we can use LLMs to solve parts of the coordination problem. An LLM can act as a near-perfect trusted mediator that doesn't leak information beyond what is specified beforehand (such as "no deal is possible" or "there is common ground regarding X").

falkai.substack.com/p/llms-as-tr...

03.01.2026 22:43 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
What do kids need to know about artificial intelligence? – YouTube video by Johan Falk

What does AI literacy actually mean? Here's a concrete answer. youtu.be/2PvMfhKfYdI

19.12.2025 23:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
What is sycophancy in AI models? – YouTube video by Anthropic

With the capable AI models we have today, I would argue that sycophancy is now a bigger problem than hallucinations.

This is a short and clear video explaining this phenomenon, how you can get better at spotting it, and decrease the risk of being affected by it.

youtu.be/nvbq39yVYRk?...

18.12.2025 21:44 πŸ‘ 0 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Post image

While we shouldn't take the results of GPT-5.2 on GDPval at face value, we really must start considering what continued AI progress means for our society.

And we need to do that yesterday.

falkai.substack.com/p/when-intel...

15.12.2025 21:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

falkai.substack.com/p/teaching-a...

14.12.2025 23:43 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
I started a YouTube channel for teachers about AI: Graspable.AI – YouTube video by Johan Falk

Here's a pitch for my new channel Graspable.AI, with short videos about AI for teachers worldwide.

I'd love to hear what you think.

youtu.be/iHw-I3btmUA

12.12.2025 12:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Amanda Askell on philosophical AI matters Amanda Askell, who in my book is the guru of AI whispering (at Anthropic), answers questions from the public. You just have to love how the set, and even the frame ratio, breathes late 1980s.

Watch Amanda Askell chew through heavy philosophical questions as if they were breakfast cereal in this interview. I’m especially fond of how the entire recording radiates an ’80s vibe – including the image ratio made for TVs that aren't flat.

aipodcastpicks.substack.com/p/amanda-ask...

09.12.2025 07:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
What would an AI investment crash mean for AI development? This post isn’t about whether the bubble will burst. It’s about what happens to AI development if it does.

Updates to my analysis on what a collapse of investments in AI might mean, with a fourth scenario: Microsoft, Apple, Amazon or Nvidia buys struggling AI startups.

This is, I think, the most likely outcome, and Nvidia could be the real winner.

More details here: falkai.substack.com/p/what-would...

22.11.2025 09:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
What would an AI investment crash mean for AI development? This post isn’t about whether the bubble will burst. It’s about what happens to AI development if it does.

I just published an analysis of what an AI investment crash would mean for technical development.

The short version: Google, with its deep pockets, would likely end up quite alone in the AI race on the American side of the Atlantic. China continues regardless.

falkai.substack.com/p/what-would...

19.11.2025 14:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
A quick guide to SB 53 California recently introduced a new law, SB 53, which is a smaller version of the much-debated SB 1047. This episode gives a great overview of what it means.

California just introduced AI safety legislation that is, very briefly summarized, self-regulation with some edge to it.

This podcast episode explains SB 53 in a very accessible way.

aipodcastpicks.substack.com/p/a-quick-gu...

11.10.2025 20:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Great analysis of whether there is an AI bubble There's a lot of talk about whether we're in an AI bubble or not. This is the best rundown I've found, using a framework with five warning signs. By Azeem Azhar.

This is the best analysis I've seen or heard of whether there's a financial AI bubble going on right now.

Short story: Some warning signs, but not a bubble.

aipodcastpicks.substack.com/p/great-anal...

11.10.2025 07:43 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0