Not entirely accurate. It's more like crapping the pants. Then taking them off and wearing them like a shirt without cleaning them first.
So the US military is just out there sinking more ships that posed no threat whatsoever to anyone. Murdering more innocent people for.... What?
apnews.com/article/iran...
All seemingly good improvements but can I point it at opencode and have opencode be the backend for this?
For the browser side... the benchmarks are a bit different but here's how it stacks up against chromium's impl of web streams:
In the runtime columns I'm just highlighting standouts... they were flagging some improved numbers after I made a few code changes.
Fixed up some perf issues and benchmark bugs in the new-streams reference impl ... some highlights running comparisons on @nodejs.org @deno.land and @bun.sh ... note each column is just looking at the one runtime, not comparing runtimes against each other ...
... but it's worth noting that the performance beats Node.js streams at the baseline and matches Node.js streams in the worst case. Both this and Node.js streams are still roughly 2x faster than the non-optimized web streams impl that's currently in Node.js
Just to prove that it can fit in with existing implementations/runtimes fairly easily, I ported the github.com/jasnell/new-... API to Node.js and updated FileHandle to support it github.com/nodejs/node/... ... it's an experimental draft currently so no official status...
@jasnell.me's blog post on streams made me finally add a buffer in Mastro's HTML streaming implementation.
This greatly reduces the number of chunks in the async iterable. Because there is a lot of per-chunk overhead, this brings our benchmark down from 25µs to 10µs
github.com/mastrojs/mas...
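Not Mastro's actual code, but a minimal sketch of the buffering idea: accumulate many small string chunks into fewer, larger ones before yielding, so per-chunk overhead is paid far less often. The `minSize` threshold is an assumption for illustration.

```typescript
// Hedged sketch: buffer small chunks from a source iterable and yield
// larger consolidated chunks, reducing per-chunk overhead downstream.
async function* buffered(
  source: AsyncIterable<string>,
  minSize = 4096, // illustrative threshold, not from the real impl
): AsyncGenerator<string> {
  let buf = "";
  for await (const chunk of source) {
    buf += chunk;
    if (buf.length >= minSize) {
      yield buf; // flush once we've accumulated enough
      buf = "";
    }
  }
  if (buf.length > 0) yield buf; // flush whatever remains at the end
}
```

The trade-off is latency vs. throughput: buffering delays the first byte slightly but drastically cuts the number of promise/microtask round-trips per byte delivered.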
My biggest question about this bullshit Iran thing is when the full Epstein files are going to be released.
So 3 US military and how many innocent Iranian civilians were killed because the orange turd continues to try to distract people from the Epstein files?
My biggest problem with them?
Every. Fucking. Thing.
ICE is state sponsored terrorism.
... And that's what we have now. I'll be taking a couple days to really go through Domenic's response and think through it. I hope others will do the same. Like I said, the alternative I put together is not a concrete proposal.. at least not yet. It's a demo that something else is viable.
This is fantastic. Exactly the kind of discussion I was hoping for. This kind of critical response is what makes (and keeps) doing this important standards work fun domenic.me/streams-stan...
And let me be clear: I never claimed that this new idea is perfect. I wanted to start a conversation...
Some could argue, "You can do that with ReadableStream and WritableStream tho!" ... and yes, technically you can have `ReadableStream<Uint8Array[]>` but there's no way to discover if a `WritableStream` can *accept* an array of inputs; or, more specifically, no way to discover what kinds of inputs it accepts, period.
If it were `AsyncIterable<Uint8Array>`, then I would either (a) be forced to concat or (b) be forced to yield only one chunk at a time... I want to avoid both.
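A rough sketch of the shape being described (not the proposal's actual API): the source yields *arrays* of chunks, so everything that's available in one turn travels together, and the consumer can handle one chunk or many without being forced to concat or to take them one at a time.

```typescript
// Hedged sketch: a source that yields batches (Uint8Array[]) rather
// than single chunks, so "what's ready now" moves in one turn.
async function* batchedSource(
  chunks: Uint8Array[],
  batchSize: number, // stands in for "however many are available now"
): AsyncGenerator<Uint8Array[]> {
  for (let i = 0; i < chunks.length; i += batchSize) {
    // Each yield hands over every chunk ready at this moment,
    // instead of one promise/microtask round-trip per chunk.
    yield chunks.slice(i, i + batchSize);
  }
}
```

A consumer that only wants single chunks can still flatten the batches; a consumer that can do better with many gets them for free.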
... I can then just skip the cost of concat (allocation + copy) and just perform a vectored write to the lower levels. This reduces the additional promise and microtask hops, skips copy and allocation costs, reduces roundtrips through the write loop, etc.. and overall makes the data pump faster
Here's an example of the principle in action: github.com/cloudflare/w... ... in this PR I'm changing the way Cloudflare Workers drains ReadableStream buffer on the internal path. Instead of reading one chunk at a time, if there are multiple chunks available I grab as many as I can synchronously...
... Web streams forces you down a path of one at a time because there's no way to discover if you can get many or not. Explicitly building this into the API makes it far easier to amortize the performance costs and build perf improvements into the design
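The drain pattern described above can be sketched roughly like this (this is not the Workers code from the PR, just an illustration): grab every chunk that is already synchronously available from an internal queue, then hand the whole batch to a single vectored write instead of doing one write per chunk.

```typescript
// Hedged sketch: drain all synchronously-available chunks in one grab,
// then perform a single batched ("vectored") write for the whole set.
type VectoredSink = (chunks: Uint8Array[]) => Promise<void>;

async function drain(
  queue: Uint8Array[], // stand-in for an internal stream buffer
  writev: VectoredSink,
): Promise<number> {
  let writes = 0;
  while (queue.length > 0) {
    // splice() takes everything available *right now*, synchronously,
    // so one await covers many chunks instead of one await per chunk.
    const batch = queue.splice(0, queue.length);
    await writev(batch);
    writes++;
  }
  return writes;
}
```

With a one-at-a-time API, the same five chunks would cost five awaits; here they cost one, which is exactly the amortization the post is arguing the API should make discoverable.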
A key challenge is that the language does not yet have an ultra-efficient way to concatenate these chunks... And you don't always want to anyway. This gives the implementation options. You can give one, or give many based on what is available now and what you need...
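For context, this is the manual concat that batching lets you skip: today, joining `Uint8Array`s means allocating a fresh buffer and copying every byte, which is precisely the cost a vectored write avoids.

```typescript
// Hedged sketch: the allocate-and-copy concat that a batched API
// makes optional rather than mandatory.
function concat(chunks: Uint8Array[]): Uint8Array {
  let total = 0;
  for (const c of chunks) total += c.byteLength;
  const out = new Uint8Array(total); // one fresh allocation...
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset); // ...plus a full copy of every byte
    offset += c.byteLength;
  }
  return out;
}
```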
Oh, and I was on a podcast talking about how we're finally getting close to enabling pointer compression in node.js
Have always loved @dominictarr.bsky.social's take on all this
... That all said, the alternative approach is not always faster, and in some benchmarks comes out slightly slower depending on the runtime. There are many factors so I don't want to make this purely about a perf difference. Personally, I think the API DX is the more significant factor.
... and Bun's numbers...
... Obviously, take all the usual "Benchmarks are usually fundamentally flawed" caveats into consideration when looking at those numbers. Also, these are the numbers when checked against Node.js. What do the numbers with Deno and Bun look like?
Here are Deno's numbers for the same benchmarks:
... but it is worth noting the New/FastWS column... that's comparing with Vercel's recent *excellent* fast-webstream research (vercel.com/blog/we-ralp...) ... that makes a massive improvement in Node.js, but this alternative approach can still be up to 54x faster even than that.
One bit worth emphasizing more in the new streams API discussion is the absolute cost of the current web streams model. Node.js' web streams impl has never been perf optimized, but 90x faster is still ... something ...
So far the biggest complaint with my "new streams API" blog post is that it's clear I used Claude to help write it. I'm going to take that as a good sign. Hopefully folks will see beyond the excessive emdash and tricolon use and pay attention to the actual argument ;-)