Correction: 2025.
Again, I wouldn't sign up for a marathon if I expected it to be that hot, but it's not like it's some exceptional situation that requires shortening the race.
For reference, the Badwater 135 takes place in Death Valley in July, so we're talking about running much more than a marathon in temperatures exceeding 100 F. That's exceptionally hot, but 86 F just isn't that hot.
I'm not saying I'd want to race in that, but running in those temperatures is well within the limits of human performance; mostly you need to slow down and hydrate a lot better. I've trained in hotter temps lots of times.
Moreover, the whole thing is just kind of odd because the forecast for LA is a high of 86 F (merrysky.net/forecast/los%20angeles/us) and that's at 3 PM, so it's cooler for most of the race.
None of this applies to marathons, which are a strictly defined distance. Nothing wrong with racing 18 miles (~30K); I've done it myself. But it's not a marathon and just giving someone a finisher medal doesn't make it one.
For example, I ran the 2026 Ultra Tour Monte Rosa, where they cut off the last 15 or so miles because of a rock fall. educatedguesswork.org/posts/utmr/
. And in fact it doesn't count as a finish for some purposes, like Hard Rock qualifying. hardrock100.com/hardrock-qua...
The finished or not question can be a bit confusing in some events like mountain races where the course is always a bit fluid and organizers might need to change or shorten the race for reasons outside their control.
Some real map/territory confusion in this report on the LA Marathon's decision to give people medals if they drop out after 18 miles (because it's hot). The finisher's medal commemorates that you finished, but it doesn't make you a finisher.
www.nytimes.com/2026/03/07/u...
Now up on the newsletter: Let's build a tool-using agent
In this post, I walk through in some detail how AI agent tool calling works, including digging into the inputs and outputs of the LLM before they get translated into a coherent-looking JS API.
educatedguesswork.org/posts/tool-c...
📣 New from KGI: Age Assurance Online explores how age assurance systems work and their tradeoffs in accuracy, circumvention, availability, and privacy. A must-read for policymakers, service providers + users to understand the consequences of these systems: kgi.georgetown.edu/research-and...
The standard way to avoid cross-protocol attacks is now to use ALPN in the TLS handshake.
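As a minimal sketch of what that looks like in practice (using Python's standard `ssl` module; the protocol IDs "h2" and "http/1.1" are the IANA-registered ALPN names, and the helper function is illustrative, not from any real codebase):

```python
import socket
import ssl

# Client context: advertise which application protocols we are willing
# to speak. The server selects one inside the TLS handshake itself, so
# the negotiated protocol is bound to the handshake transcript and a
# connection set up for one protocol can't silently be used as another.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

def negotiated_protocol(host: str, port: int = 443):
    # Illustrative helper: connect and report which protocol ALPN chose.
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()
```

The key point is that the selection happens inside the handshake rather than in application data, so both sides agree on the protocol before any application bytes flow.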
Now up, part II in my series about transparency systems: Certificate Transparency in Reality
educatedguesswork.org/posts/transp...
Welcome to Bluesky @rtfm.com!
Folks might recognize EKR from such hits as TLS 1.3, Let's Encrypt, and being Firefox CTO. He also writes a really good blog. Y'all should all follow him.
Of course, actually deploying this in practice turns out to be a lot harder than it sounds, which I'll get to in the next post.
educatedguesswork.org/posts/transp...
3. Site operators can then download all certificates, make sure they match the consensus, and then check for bogus certificates in their name. Mission accomplished.
If all goes well, this gives you a closed loop that makes it impossible to surreptitiously issue a certificate. 1. The consensus system requires the CA to commit to all its certificates. 2. Clients verify that certificates have been committed to.
The post has more detail, but the idea behind the proof is to show that there is a path from your certificate back to the root of the tree, thus demonstrating that the tree was computed over your certificate.
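A toy sketch of that verification, assuming RFC 6962-style hashing (0x00/0x01 domain-separation prefixes); the `(sibling, sibling_is_left)` proof encoding is an illustrative choice, since the real protocol derives left/right from the leaf's index:

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix marks a leaf (RFC 6962-style domain separation)
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix marks an interior node
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(cert: bytes, proof, root: bytes) -> bool:
    """Walk from the leaf up to the root, hashing in one sibling per level.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs. If the
    recomputed root matches the published root, the tree must have been
    computed over this certificate.
    """
    h = leaf_hash(cert)
    for sibling, sibling_is_left in proof:
        h = node_hash(sibling, h) if sibling_is_left else node_hash(h, sibling)
    return h == root
```

The proof is logarithmic in the tree size: one sibling hash per level, so even a tree covering billions of certificates needs only a few dozen hashes.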
Finally, when you go to the Web server, it proves that its certificate matches the summary. What this means technically is that it gives you a Merkle inclusion proof that goes back to the root.
Next, you need some mechanism whereby each element in the system can assure itself that it has the same summary as everyone else (this is actually the hard part).
The standard solution here is what's called a consensus system. Effectively, you compute a summary of all the published certificates (typically by assembling them into a Merkle hash tree).
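As a toy sketch of that summary step (deliberately simplified: real Certificate Transparency trees follow RFC 6962, which splits on the largest power of two rather than duplicating an odd trailing leaf as this does):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix for leaves, 0x01 for interior nodes, so a leaf can
    # never be confused with an interior node (RFC 6962-style)
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(certs) -> bytes:
    """Summarize a list of certificates as a single root hash."""
    level = [leaf_hash(c) for c in certs]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels (simplification)
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Everyone who holds the same set of certificates computes the same root, so agreeing on one short hash is equivalent to agreeing on the entire list.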
For instance, if the CA has it on their web site and sends it to clients but not to sites when they check, then the system breaks down.
The first step is to have the client (i.e., the browser) check that the certificate was published, thus hopefully forcing the CA to publish it. But now we have to confront the definition of "publish". How do we know the CA published to everyone?
The challenge here is ensuring that CAs are actually publishing every certificate. If your concern is that the CA was intentionally misissuing, it might just choose not to publish the bogus certificate.
A misissued certificate can be revoked, and, if investigation reveals improper practices, browsers might choose to distrust the CA, thus rendering all of its certificates invalid.
The basic idea is to require that every certificate be published, thus allowing sites -- or services on their behalf -- to detect (at least in principle) when a certificate has been misissued.
The solution that the ecosystem has converged on is something called a "transparency system", and in the case of the WebPKI "certificate transparency". Instead of trying to prevent misbehavior by CAs, a transparency system tries to make it detectable.
When paired with some fairly public failures by WebPKI certificate authorities, this creates an obvious problem.
Most real-world authentication protocols rely on some kind of trusted authentication service to attest to endpoint identities. The obvious problem here is the word "trusted"; if the authentication service misbehaves, then the whole system breaks down.