More competitive than I expected:
cmdstat_bitfield:calls=10020,usec=4535,usec_per_call= 0.45
cmdstat_incr:calls=10020,usec=3670,usec_per_call= 0.37
I thought all the arg parsing would make it a lot slower
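For context, per-command numbers like these come from `INFO commandstats`. A minimal sketch of collecting a clean sample (resetting stats first so old calls don't skew the averages):

```
> CONFIG RESETSTAT
OK
> INFO commandstats
# Commandstats
cmdstat_bitfield:calls=...,usec=...,usec_per_call=...
cmdstat_incr:calls=...,usec=...,usec_per_call=...
```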
Honestly, I like your solution more, but you can abuse bitfields with overflow too!
```
> BITFIELD b OVERFLOW WRAP INCRBY u1 0 1
1) (integer) 0
> BITFIELD b OVERFLOW WRAP INCRBY u1 0 1
1) (integer) 1
> BITFIELD b OVERFLOW WRAP INCRBY u1 0 1
1) (integer) 0
```
Some things kiro did:
1. Got frustrated at TCL, wrote its own TCL framework
2. Wrote custom scripts to run valgrind and then wrote a readme for them
3. Wrote a brand new workflow.yaml file to include *JUST* the new test
Some things kiro did not do:
1. Actually notify the bio queue on shutdown
I do love kiro, but sometimes it feels like a toddler. I asked it to make a small update, and then 15 minutes later:
> [bio_safe_shutdown 6d3ee79bd] Add safe shutdown mechanism for BIO threads
61 files changed, 42479 insertions(+), 5 deletions(-)
> Perfect! The implementation is complete.
The fact that the URL is "contact-sales/claude-for-oss" is really turning me off from recommending it to folks; I'm sure it's six months and then the sales calls start.
I wonder if generative AI is going to kill Amazon's writing culture. A doc that would normally have taken 1-2 weeks of thoughtful revision can now be written by the frontier models with some MCPs to collect the data. The point of writing was to bring clarity. It's so easy to skip to the end now.
We are considering a RESP4 (or maybe an extension to RESP3), github.com/valkey-io/va.... Specifically, it's meant to better address multiplexing and improve performance. That's part of the reason we're trying to have our own ecosystem.
A goal of Valkey is to mostly be a superset of the Redis APIs, but we also don't want to be constrained by decisions that Redis makes. We added new observability APIs in Valkey, and without client support it's harder for end users to adopt.
It's AWS funding most of it, it's not a secret. We actually started it before the fork, to try to have a more unified interface across clients. We've continued working on them after the fork.
🎉 Don’t miss out! The Unlocked Conference hits Jan 22 in San Jose, CA! #Valkey sessions you’ll love:
- Valkey in production
- Performance improvements in Valkey 9.0
🔥 Special deal: Register for just $99 with code LF99 (normally $200)!
✨ Spots are limited unlocked.gomomento.com/events/unloc...
I love small intimate conferences to spark creativity. Which is why I'm co-hosting a large-scale, performance-focused conference in San Jose early next year with Momento. We have a lot of cool speakers signed up, so come join if you're interested.
You can register at unlocked.gomomento.com
Last Friday, #Valkey released fixes for 7.2, 8.0, and 8.1 to address some Lua vulnerabilities that may allow RCE from an authenticated user. It's a good time to remind folks to always lock down your Valkey instances, even in secure environments. See github.com/valkey-io/va... for all the details.
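Since these need an authenticated user, the usual hardening goes a long way. A minimal sketch of what that looks like in a standalone `valkey.conf` (the password and ACL rule here are placeholders, not a complete config):

```
# valkey.conf — illustrative hardening, not a complete config
bind 127.0.0.1 -::1      # only listen on interfaces you trust
protected-mode yes       # refuse remote connections when no auth is set
requirepass use-a-long-random-password-here
# If nothing needs server-side scripting, you can also drop the
# scripting commands for the default user with an ACL rule, e.g.:
# user default on >use-a-long-random-password-here ~* &* +@all -@scripting
```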
I really enjoyed a talk by Harkrishn Patro, which dives into how the clustering system works in Valkey www.youtube.com/watch?v=P6Cb....
If you weren't able to attend the recent Valkey event in Amsterdam (maybe you missed the Keyspace notification), you can see all the talks here www.youtube.com/playlist?lis... with slides posted at valkey.io/events/keysp.... Check it out!
Today we announce DocumentDB has joined the LF with support from Microsoft, AWS and many more. Check it out.
www.prnewswire.com/news-release...
That's basically correct. It started out of conversations with customers using ElastiCache without really understanding the nuanced durability/consistency tradeoffs. Once we explained it to them, many of them still chose to keep yolo'ing it though.
For cluster mode, we documented the procedure here for Valkey valkey.io/topics/clust..., but nothing has changed with respect to Redis AFAIK here.
I see some mention of using `CLIENT PAUSE`, but there is a safer variant in `FAILOVER` redis.io/docs/latest/... with standalone distributions. It will automatically pause incoming writes, wait for a replica to catch up, then orchestrate the failover.
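A rough sketch of what that looks like on the current primary (the host and port here are placeholders for the target replica):

```
> FAILOVER TO 10.0.0.2 6379 TIMEOUT 5000
OK
```

Writes are paused while the replica catches up, and if it stalls you can bail out with `FAILOVER ABORT` before the timeout.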
Awesome! A fun fact about Valkey is that the name originally came from ValkyrieDB. We changed it because only about half of the folks on my team were able to spell Valkyrie.
A lot of people like `valkey-extended`. We are noodling between that and `valkey-bundle` now. Thanks for the input!
So the #Valkey project is working on a packaged version of Valkey along with popular extensions (like LDAP authentication and vector similarity search). The plan is to call it `valkey-extensions`, but the name might imply it's just the extensions and not the core. Do folks have better ideas/thoughts?
It's a bit last minute, but if anyone is interested in a live stream about tuning Valkey to run efficiently on multiple cores, we'll be hosting it in a couple of hours www.linkedin.com/events/73360.... There will be a recording as well.
Redis relicensed (under the AGPL, which many cannot use). The Valkey project, a Redis fork still under an #opensource BSD license, continues to thrive in adoption and community at @linuxfoundation.org. Read more about what's next for Valkey:
www.theregister.com/2025/05/15/a... @valkeyio.bsky.social
Just read a great blog by a friend about how to properly measure the performance of Valkey and Redis. When evaluating high performance systems, there are a lot of parameters which can affect the total throughput and it's easy to come up with "synthetic numbers". www.gomomento.com/blog/valkey-...
What if we could pick up a k8s cluster and move it w/out service disruption?
We demo’d this at KubeCon NA 2024 by moving a @kubernetes.io cluster running Valkey from AWS to Azure and then GCP, all without disruption! This is pure magic from @felicitas.pojtinger.com.
loopholelabs.io/blog/zero-do...
We have three new releases of #Valkey which include security fixes. Please consider upgrading if you have a publicly accessible Valkey instance, or apply a mitigation for the CVEs.
- github.com/valkey-io/va...
- github.com/valkey-io/va...
- github.com/valkey-io/va...
Photo of Madelyn Olson on stage at Monki Gras in front of a slide showing the quote "We optimize for joy. We believe writing code is a lot of hard work, and the only way it can be worth is by enjoying it. When there is no longer joy in writing code, the best thing to do is stop. To prevent this, we'll avoid taking paths that will make Redis less of a joy to develop." by antirez
It’s @reconditerose.bsky.social quoting antirez at #MonkiGras on the importance of optimising for joy!
"[..] We believe writing code is a lot of hard work, and the only way it can be worth is by enjoying it. When there is no longer joy in writing code, the best thing to do is stop. [..]”
Haha, oops. I'm a contributor to Valkey, so I've worked on it :). I know many obscure things about the engine.