Failing to put sufficient data controls in place would be extremely bad for business. This extends to agents, not just employees.
Code is a small, shrinking fraction of the cost of building a business. Your code isn't remotely worth getting sued and subsequently losing all your B2B customers over.
05.03.2026 16:15
If your source code can't touch any third-party server, self-host agents on open-weight models. If you're good with public cloud you can run on Bedrock or equivalent.
In practice there are good reasons to build these things in-house, but risk of providers stealing your code is pretty low on the list.
05.03.2026 15:51
This reads to me like a concession by DoW. The latest threat is to cancel Anthropic's contract, which is what Anthropic was suggesting all along, and much less drastic than alternatives (labeling Anthropic a supply chain risk or invoking DPA).
27.02.2026 07:02
Yeah. I did 2.5 days in Chongqing in January and didn't feel like I'd have gained much by staying longer.
Chengdu has more to see and do, even just in the city but especially if you do a day trip to a panda base or something.
25.02.2026 14:56
All awesome cities. If it's your first time in HK, I would do 2 days there, maybe 2/4/2 for an 8-day trip.
24.02.2026 22:36
and free trade
20.02.2026 20:57
and really really hate the Constitution
20.02.2026 20:57
Yes, and its supposed impact on Trump's reelection in 2024. Slowing relative wage growth in 2025 did not cause people to vote for Trump in 2024.
15.02.2026 23:06
These are real (inflation-adjusted) growth rates, not wages themselves. Wages for the bottom quartile are now growing more slowly than other groups, but they're still growing faster than inflation. If they were falling, the values on this graph would be below 0.
15.02.2026 22:54
Most likely IMO is swarm mode/TeammateTool, which is currently behind a feature flag: github.com/mikekelly/cl...
(there's a video in the tweet linked from the repo)
31.01.2026 19:35
I wrote the speech I *wish* Chuck Schumer would give tonight - as an actual opposition leader. Here it is:
"My fellow Americans: At this hour, an unrestrained force of militarized and violent federal officers is carrying out a project of ethnic cleansing in the streets of American cities."
1/14
25.01.2026 02:06
okay hear me out
24.01.2026 21:34
I was thinking more of big batch ML jobs like ranking (to the extent non-real-time), labeling, forecasting, etc., where consumers should gracefully handle missing output/fall back to previous run.
Agree your example is still "production" (I would also call it "online") and needs on-call at scale.
23.01.2026 18:01
Reliability/uptime is one reason you might want a SWE skillset in ML, but it's not the only reason. Offline ML systems at scale can benefit a lot from performance optimizations and MLOps practices broadly, but will generally still have looser SLAs than online services, often meaning no on-call.
23.01.2026 17:41
a lot of people don't realize that Christmas carols and traditions are actually folk memories of suppressed pre-Christian events, namely the end of the Third Age and the Fall of Sauron.
24.12.2025 22:05
incredible idea
dosaygo-studio.github.io/hn-front-pag...
19.12.2025 21:43
Whether this is a good thing depends on how likely Congress is to pass meaningful AI safety regulation.
I am personally less-than-infinitely pessimistic about this, which is more than I can say about most things that require congressional action.
17.12.2025 19:29
I read this as an acknowledgment that (enforceable) federal preemption of state-level AI safety regulation is only going to happen if it comes with meaningful federal AI safety regulation.
Earlier this year, I would have been surprised by this result.
17.12.2025 19:27
Congratulations Sean O'Brien and #teamsters!
05.12.2025 15:29
a 'Space Force', if you will
13.11.2025 14:27
Note that all of the specifics here apply only to Waymo, not to self-driving cars in general.
I increasingly worry that regulators and the public will paint all platforms with the same brush, and was pleasantly surprised that that didn't seem to happen after the Cruise incident.
19/19
05.10.2025 21:13
The reason I only say "fairly high" doesn't show up in the data at all - it's ~entirely due to the risk of a bad software update, security incident, etc.
This is unlikely, but it's hard to say exactly how unlikely, and it could be very high-severity if it happened.
18/
05.10.2025 21:13
FWIW, I have fairly high confidence that Waymo is safer than driving or rideshare in all of its established metros, though not by enough to justify the cost from a pure QALY/$ perspective.
(This is based on my experience of Waymo being ~$5-$15 more than Uber/Lyft depending on distance.)
17/
05.10.2025 21:13
From a measurement perspective, it's fine to use a metric that allocates >0 fatalities to Waymo for each of these incidents.
From a practical perspective, if these incidents cause you to think Waymo is less safe than you previously believed, I think that's the wrong conclusion.
16/
05.10.2025 21:13
A nice thing about small datasets is that you can just look at the individual datapoints.
In both of the fatal Waymo-involved accidents to date, the Waymo was rear-ended while at a standstill or near-standstill: once in traffic, once while yielding to a pedestrian before a right turn.
15/
05.10.2025 21:13
The caveat is that even a perfect driver (which, to be clear, Waymo is not) would be responsible for >0 fatalities in expectation based on a blameless approach.
That doesn't mean it's wrong - just a caveat that needs to be kept in mind.
14/
05.10.2025 21:13
Now, returning to the 0.47 allocated fatalities:
There's a lot to like about the author's methodology, for exactly the reasons he points out. In fact, I'm inclined to agree that blame shouldn't be accounted for in safety metrics.
13/
05.10.2025 21:13
You could come up with a hierarchical approach to formalize this, but informally: a sufficiently-powered 79% reduction in crashes plus an underpowered 70% reduction in fatalities together provide practically significant evidence of a reduction in fatalities.
12/
05.10.2025 21:13
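[The power argument in the post above can be sketched numerically. This is a minimal one-sided Poisson illustration with made-up counts: the 3-vs-1 and 300-vs-63 figures are hypothetical, chosen only to mirror roughly 70% and 79% reductions, and are not Waymo's actual data. The point is that the same proportional reduction is inconclusive for rare events (fatalities) but overwhelming for common ones (crashes).]

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

# Purely illustrative counts (NOT Waymo's actual numbers): the same
# ~70-80% reduction is decisive for common events but not for rare ones.
p_fatal = poisson_cdf(1, 3.0)     # rare: expect 3 fatalities, observe 1
p_crash = poisson_cdf(63, 300.0)  # common: expect 300 crashes, observe 63

print(f"fatalities: p = {p_fatal:.3f}")  # ~0.199, not significant on its own
print(f"crashes:    p = {p_crash:.1e}")  # astronomically small
```

[With these toy numbers, the fatality reduction alone can't be distinguished from luck, while the crash reduction is significant at any conventional threshold, which is the sense in which the two together are practically significant.]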
But Waymo's report from this January (through 56.7M VMT, doi.org/10.1080/1538...) consistently shows >70% reductions in crashes.
The headline number is 79% fewer any-injury crashes (yes, regardless of fault). This one is highly stat sig because crashes are much more common than fatalities.
11/
05.10.2025 21:13