Congrats! See you there!
concurrent-ruby 1.3.6 is released: github.com/ruby-concurr...
It automatically prunes unused threads of a thread pool even when no new work is queued, migrates away from the deprecated "non-typed data" C API and includes several bug fixes.
Screenshot of a terminal demonstrating object allocation speedup. Ruby 4.0 is about 2x faster
One thing I'm really excited about in Ruby 4.0 is that object allocation is going to get a nice speed boost
Awesome stuff!
Congrats!
One last thing I should mention. This doesn't require that your app be Ractor-safe. The goal is to offload suitable work (just one task for now, I have ideas for at least one more) within the web server itself to ractors, so you can benefit from them without any changes to your app.
5/5
I also need to stop posting stuff like this when it's late for me
I'll come back to answer any questions in the morning. I don't want to divulge too much, especially cause there's a lot more testing I need to do, which could result in major changes before I make it public 🤷🏽‍♂️
4/5
While inspired by it, I'm making a few design decisions that differ from Puma's, and they seem to be working out. I won't go into detail yet, but I'll document them eventually. The obvious one is utilising ractors (I'm sure you could guess what for), which Puma should probably also do at some point.
3/5
Soooo many disclaimers:
- not a 1:1 comparison, just as close as I could get it for a benchmark baseline
- not a competition, just showing off an exciting experimental working PoC
- just a micro (but not trivial) benchmark using a toy app
- the RHS isn't 100% Rack-compliant, maybe 60ish%
- etc.
2/5
I really shouldn't be sharing teasers given how much work's left to do, and the good chance that this becomes another forgotten experiment, but I'm pretty happy with how far I got with barely any tuning (compared to the 20-year battle-tested Puma, which I've taken some inspiration from <3).
1/5
Thank you!
Ah it has `#<<`, but hasn't aliased it to `#send`: github.com/eregon/racto...
Also, probably doesn't matter, but it's the other way around in CRuby i.e., `#<<` is the alias.
Has ractor-shim implemented `Ractor::Port#send`? That could be why if not. Ref: docs.ruby-lang.org/en/master/Ra...
I don't have an answer, but I'm figuring that out myself with what I'm building (not ready to share yet). There's a point where the copying/moving/freezing (i.e., message passing) overhead might outweigh the parallelisation benefits. The answer will vary by the payload and the type of work.
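For concreteness, the trade-off I'm weighing looks roughly like this; the payloads and strategies below are illustrative, not from the actual project:

```ruby
# Illustrative sketch of the three message-passing options and their costs.
payload = "x" * 1_000_000

# 1. Copy (default): the receiver gets a deep copy; the sender keeps its object.
copier = Ractor.new { Ractor.receive.bytesize }
copier.send(payload)
copier.take  # => 1_000_000

# 2. Move: no copy, but the sender can no longer touch the moved object.
mover = Ractor.new { Ractor.receive.bytesize }
mover.send("y" * 1_000_000, move: true)
mover.take   # => 1_000_000

# 3. Share: deep-freeze once with make_shareable, then reference without copying or moving.
shared = Ractor.make_shareable("z" * 1_000_000)
Ractor.new(shared) { |s| s.bytesize }.take  # => 1_000_000
```

Which one wins depends on payload size and how often you cross the ractor boundary, hence the benchmarking.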
Yes! I missed that. Clearly my Ractor coverage is lacking... @jhawthorn.com has already sent a fix: github.com/joshuay03/at.... I'll get that merged and released soon.
2. The thing that clicked for me was when @jhawthorn.com referred to a ractor as a no-GVL block at Rails World. Rather than chucking a whole Rails app in there, you could find CPU intensive code paths that would choke the GVL, and delegate them to a ractor or few. Think complex parsing of strings.
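That "no-GVL block" framing can be sketched in a few lines; the parsing here is a toy stand-in for a genuinely CPU-heavy path:

```ruby
# Offload a CPU-bound piece of work to a Ractor so it runs in parallel,
# outside the GVL. `line` and the parse are made-up stand-ins for a real hot path.
line = "10,20,30,40"

worker = Ractor.new(line) do |str|
  str.split(",").map(&:to_i).sum  # stands in for "complex parsing of strings"
end

worker.take  # => 100
```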
1. Could you clarify? I'm not aware of Ractors needing a C extension to work. Main limitation: C extension gems must mark themselves Ractor-safe. Shareable requirement on init/send exists, plus new semantics like move (may not last: bsky.app/profile/byro...). Docs: docs.ruby-lang.org/en/master/ra...
Announcing RactorPool: github.com/joshuay03/ra...
Extracted from a project I'm building with Ractors. Currently requires Ruby 3.5 (3.5.0.dev). Goal is to have it stable for Ruby 4.0, when Ractors will be less experimental 🤞🏽
Ah that's unfortunate, that does seem like a tricky edge case… Being able to move in my case seems to be quite a bit more performant than both deeply copying (not surprising), and making shareable and duping just the objects I need to mutate in the receiver. Although, I haven't properly benchmarked.
I'm building something with Ractors and found a bug. Tried my best™️ to fix it: github.com/ruby/ruby/pu...
Side note: Might just be me, but as a non-frequent contributor, building ruby/ruby and running tests seems to be much more convenient and efficient than it was a couple years ago.
Time for a holiday?
September flew by pretty quickly… I kicked it off by attending #RailsWorld!
www.linkedin.com/posts/joshua...
Was it the colonoscopy room?
I've added support for distributed deployments: github.com/joshuay03/di...
Couple of ideas to follow up on:
- Profile dashboard
- Profile comparisons
This is from a Datadog APM notebook I used to monitor the impact.
Haha yep, I have to remind myself from time to time as well.
Yes, at the very end of a before_fork, after doing any necessary closing of connections, shutting down threads, etc.
Satisfying memory and CPU improvements after enabling Puma preloading + Process.warmup on one of Buildkite's services (our agent shard). Just rolled this out to all services - keen to see the broader impact!
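For anyone wanting to try the same setup, a minimal sketch of the Puma config described above (assuming Ruby 3.3+, where `Process.warmup` is available; the cleanup steps are placeholders for whatever your app needs):

```ruby
# config/puma.rb — sketch, not Buildkite's actual config
preload_app!

before_fork do
  # Do any necessary cleanup first: close connections, shut down threads, etc.
  # Then warm up as the very last step, so forked workers share compacted,
  # majority-frozen memory pages via copy-on-write.
  Process.warmup
end
```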
It probably hasn't been released yet. I would first check whether the change has been back-ported to the `8-0-stable` branch, and then whether it was actually included in a release. If not, and it's already on the branch, it'll probably be released in an 8.0.x at some point, else 8.1.x.