It's been really exciting collaborating with the rust-analyzer folks on salsa (which we use internally in ty)! A lot of the improvements end up making their way into both projects; I think salsa has a bright future ahead.
I think teaching barriers first would reinforce the "nothing can be reordered past this barrier" idea, instead of the barrier being a way to *upgrade* a subsequent store to release the memory preceding the barrier. Reordering kind of misses the entire point of how atomic synchronization works.
I really dislike the idea of teaching atomic orderings in terms of potential compiler/hardware reordering optimizations. That's not how the spec defines them, and that mental model can get problematic quickly. Atomic orderings prescribe visibility relationships that are actually pretty intuitive.
Today, we're announcing the preview release of ty, an extremely fast type checker and language server for Python, written in Rust.
In early testing, it's 10x, 50x, even 100x faster than existing type checkers. (We've seen >600x speed-ups over Mypy in some real-world projects.)
I was recently made aware that every single additional *cycle* in the hash function used by rustc increases its runtime by a whopping 0.25%
the patch just removes random closing brackets throughout the codebase
Very high quality code here too
an AI generated patch for a bug on github
No I don't want AI generated PRs on my repository thank you very much
Unfortunately all of my short blog post ideas evolve into books
Going to try and publish more than one blog post this year...
I wonder if the bitmask approach of github.com/cloudflare/t... has any potential here?
There's a 10µs state-machine-based solution on a local Discord leaderboard.
If you go in windows of three, you can try cheating once by skipping the middle item, and on every iteration after that you compare the last two items in the tuple instead of the first two. Though if you do it this way, you need to handle skipping the first and second items as separate cases.
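For context, here's a brute-force reference sketch of the "safe with at most one skip" check that trick optimizes. The predicate `ok` here is a hypothetical example (each step must increase by 1 to 3); the windows-of-three approach described above is a single-pass version of this same check that avoids re-scanning after a skip.

```rust
// Hypothetical pairwise predicate: each step must increase by 1..=3.
fn ok(a: i32, b: i32) -> bool {
    (1..=3).contains(&(b - a))
}

// True if every adjacent pair satisfies the predicate.
fn is_safe(xs: &[i32], ok: impl Fn(i32, i32) -> bool) -> bool {
    xs.windows(2).all(|w| ok(w[0], w[1]))
}

// True if the sequence is safe as-is, or becomes safe after skipping
// exactly one item. The separate "skip first/second/middle" cases all
// collapse into this O(n^2) loop; the windowed trick does it in one pass.
fn safe_with_one_skip(xs: &[i32], ok: impl Fn(i32, i32) -> bool + Copy) -> bool {
    if is_safe(xs, ok) {
        return true;
    }
    (0..xs.len()).any(|i| {
        let mut v = xs.to_vec();
        v.remove(i);
        is_safe(&v, ok)
    })
}

fn main() {
    assert!(safe_with_one_skip(&[1, 2, 9, 3], ok)); // skipping the 9 fixes it
    assert!(!safe_with_one_skip(&[9, 1, 9, 1], ok)); // no single skip helps
    println!("ok");
}
```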
Nice to see people experimenting with alternative scheduling algorithms! nurmohammed840.github.io/posts/announ...
I see, does your implementation essentially translate '.' to '/'?
Libraries like github.com/ibraheemdev/... are probably the best middle ground because they allow people who don't want async to still benefit from the ecosystem. It will be slower, but unless someone is willing to write a professional-grade synchronous HTTP 1/2/3 implementation, it's your best option.
Async might be a net negative for a lot of people, but unfortunately those people are also less likely to be investing time into maintaining quality infrastructure for synchronous code, so it's kind of an impossible problem.
I think all the talk about async being hard somewhat misses the point that high-throughput low-latency systems *are* hard, and the people building those systems are also the people developing and maintaining the async ecosystem.
Would this "just work" if matchit supported parameters with static prefixes? e.g. `/{foo}.bar`
I have a work in progress PR that implements this github.com/ibraheemdev/... but it needs some more work for conflict handling
Do I spy matchit's new routing syntax?
I wrote a bit about the design of papaya here: ibraheem.ca/posts/design.... This release will mostly be microoptimizations that contribute to a ~15% performance improvement.
I think there's room for everyone to be more open about async not being the be-all and end-all of performance.
There is a little bit of nuance around async being more expensive at lower concurrency and async synchronization primitives being more complex. Async-sync interop is not free, so there is a cost for users forced to use an async library in a synchronous context.
You missed the "Windows CI is failing though"
Maybe! Though papaya's higher baseline memory usage would be an important consideration.
The next version of papaya reaches over 2x the read throughput of dashmap!
The other part of this is you can't really ignore the other 0.2us measurement, because that implies a huge savings on userspace synchronization (channels, mutexes, etc.)
Exactly. epoll_wait is a user->kernel->user roundtrip just like a blocking recvfrom. If every IO op corresponds to an epoll_wait, you're probably worse off than blocking IO due to the extra registration syscalls. If 300 IO ops correspond to a given epoll_wait, you're down to 1/300th of the expensive roundtrips.
That's why saying that the cost of context switches caused by IO readiness is equal between async and threads, as the benchmark repository did, is somewhat misleading.