Context: Knuth on Claude www-cs-faculty.stanford.edu/~knuth/paper...
I formalised the Knuth / Stappers / Claude theorem in Lean 4, using Claude for scaffolding and Harmonic's Aristotle AI for the core proofs
This is just the construction Claude found, not all 760 constructions
(Disclaimer: theorems look plausible to me, but mistakes possible)
github.com/septract/cla...
Commands: #tcb (what's in your trust base), #tcb_tree (dependency graph), #tcb_why (why is this included?). v0.1.0, rough edges expected, feedback welcome
github.com/OathTech/lea...
Weekend project w/ Claude: in Lean, it can be hard to know which definitions you need to review to trust a theorem. So I built lean-tcb. It figures out your trusted computing base, i.e. the definitions that actually give a theorem its meaning (vs the proof machinery the kernel checks)
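To illustrate the distinction, here's a minimal Lean 4 sketch (toy names of my own, not lean-tcb output):

```lean
-- `double` and `Even` give `double_even` its meaning: if either
-- definition were wrong, the theorem would claim something
-- unintended. These are the parts that need human review, i.e.
-- the trusted computing base.
def double (n : Nat) : Nat := 2 * n

def Even (n : Nat) : Prop := ∃ k, n = 2 * k

-- The proof term itself is kernel-checked machinery: a wrong
-- proof simply fails to typecheck, so it doesn't need review.
theorem double_even (n : Nat) : Even (double n) := ⟨n, rfl⟩
```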
I got curious whether Claude Code could handle a low-representation theorem prover like ACL2 - turns out yes! I proved a bunch of small-to-medium theorems, and for good measure built an MCP server, all in about 4 hrs. I've never used ACL2 before. Write-up here: mikedodds.org/posts/2025/1...
That's true, and I think that's exactly why Claude does so well at proofs. It's just that I happen to know first-hand that proofs are a particularly difficult *kind* of program
I wrote about Claude Code, which to my absolute astonishment is quite good at theorem proving. For people who don't know theorem proving, this is like spending your whole life building F1 engines and getting lapped by a Tesco's shopping trolley www.galois.com/articles/cla...
Screenshot of article text: "Formal verification today is very useful, but for most systems it's very difficult to write the kinds of complete specifications that verification needs. However, other kinds of specifications are popular: for example, a test case is a kind of limited, partial specification. The key is that a test case is immediately useful, and doesn't impose undue costs on the development team. We need to find ways to specify systems that have these virtues, and avoid the trap of imposing a complete and coherent view that fundamentally does not exist."
Screenshot of article text: "I've come to think writing formal specifications is just a very difficult task. It requires a top-down view of the system that designers and engineers typically don't have or, more importantly, need. In contrast, informal specifications can be ambiguous, partial, flexible. Informal specifications are intended as communication mechanisms between humans, and as a result they can be 'wrong but useful', and elide aspects of the system that are not of interest. This is a strength, but it also results in systems that can't be easily formalized."
Screenshot of article text: "I think systems are typically both designed from the top and grown incrementally. Most systems have some degree of top-down structure, but few systems have a mathematically coherent specification that covers every behavior. The effect is that most systems obey some formal specification for some core functionality, but outside this core, we rapidly enter muddy territory where it is unclear what the system should do, or whether the designer should even care"
Screenshot of article text: "It's a formal verification cliché that writing the specification tends to uncover most of the bugs in a system. To me, this suggests an analogy between specification and programming: both are tools for expressing what we want. In one way, this is a pessimistic thought: no tool can remove the burden of clarifying our ideas. But also, it gives me some hope. Programming is very difficult, but through careful tool design, we've made it available to hundreds of millions of people. With luck and skill, perhaps we can do the same for specifications."
New Galois blog: "Specifications Don't Exist". If we want to formally verify more systems, we need formal specifications, but most real systems are hard to specify for very deep reasons www.galois.com/articles/spe...
Iโm not sure how I missed this but itโs an extremely good article and you should absolutely read it. Itโs about formal methods, but anyone who cares about integrating research into industry will find it valuable!
I saw a *ton* of parallels with resilience engineering too :)
Hey :) Seems like a lot of people moved here from old Twitter and I'm still catching up
A labyrinth icon, serving as a metaphor for the process of formal verification
At Galois, we often say things like: "Formal methods form the backbone of everything we do."
But what exactly are formal methods? How do they work, and why are they so important?
We created a handy reference page to explain: www.galois.com/what-are-for...
Text screenshot: I sometimes hear people claiming that formal methods are demonstrably better than the techniques software engineers mostly use today. The only reason formal techniques aren't more popular (according to this theory) is that engineering teams are unaware, conservative, maybe put off by superficial difficulties like poor interfaces and documentation. I don't think this is quite right. My observation is that engineers are mostly rational when thinking about costs and benefits, at least within the bounds of their particular systems and problems.
If a tool is not popular, it's uncompelling to argue that everyone is just mistaken. At some point you should ask why the tool isn't useful (at the current cost/benefit point)
A graph plotting costs against benefits. Below a dividing line is the region labelled "Your favourite under-appreciated formal method". Two arrows point orthogonally away from it: "be cheaper" and "be more beneficial"
New-ish @galoisinc.bsky.social blog: "What Works (and Doesn't) Selling Formal Methods". The boring truth: engineers are rational and adoption is all about cost/benefit tradeoffs www.galois.com/articles/wha...
What actually works when selling formal methods in industry?
What doesn't?
The way Galois Principal Scientist @m-dodds.bsky.social sees it, many FM projects don't pencil out not because clients are irrational, but because the cost/benefit tradeoffs don't make sense.
www.galois.com/articles/wha...
c2rust is available on the Godbolt Compiler Explorer! c2rust is a tool we developed with Immunant that can convert nearly any piece of C code into compilable Rust godbolt.org/z/crsWEGEKM
Formal methods go great with AI www.wsj.com/articles/why...
I wrote about o3, the Frontier Math benchmark, and what it means if AI math keeps getting better
I do think a lot of people are in denial though!
I don't think literally everyone should drop what they're doing. But my sense is PL research as a whole is significantly under-reacting to AI. So I suppose I think *some more* PL people should bet on AI (but maybe not you!)
Happy to mail you a couple. Email me, my address is on my website
I think you've put your finger on the exact worldview mismatch, because 5-10 years seems like an insanely long time horizon to me
Why constrain the grammar - just pull more samples and keep the ones that pass :p
Hot take for POPL: the PL community is still mostly in denial about AI. This is bad because PL+AI go great together
- PL can solve the hardest problem with AI - trusting the output it produces
- AI can solve the hardest problem with PL - finding enough engineers who can even use the tools
I'm bringing these cute Galois stickers to POPL, so if you want one, come find me
If I understand right, the private test set is only used during evaluation of the model - not available to the team doing the training
Seems almost certain it's deliberately trained on math reasoning. The way the o-series models seem to work is by long CoT, with reinforcement learning to impose correct reasoning. Not much public about how o3 works internally, but Chollet has some speculation: arcprize.org/blog/oai-o3-...
A big jump on coding skill as well:
Re o3 - this is the big one for me. The Frontier Math benchmark is designed to be extremely difficult, and it has a private test set (no data contamination).
Today, o3 is very expensive. But it seems inevitable it'll soon be cheap. If these results hold up, that means MUCH more powerful automated math