`s2 apply`: s2.dev/docs/cli/app...
There are two new ways to manage S2 as code! The Terraform provider lets you manage resources with full lifecycle support. It's available on the Terraform Registry at s2-streamstore/s2.
Or you can use the CLI's `s2 apply` command, which takes a declarative JSON spec and creates or reconfigures the specified resources.
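To give a feel for the shape of such a spec, here's a toy sketch. The field names below are illustrative guesses, not S2's actual schema — check the CLI docs for the real format:

```python
import json

# Hypothetical shape of a declarative spec for `s2 apply`.
# Field names are illustrative assumptions, NOT the real schema.
spec = {
    "basins": [
        {
            "name": "my-basin",
            "streams": [
                {"name": "events", "retention": "7d"},
            ],
        }
    ]
}

# `s2 apply` consumes a JSON spec file; this writes one out.
with open("spec.json", "w") as f:
    json.dump(spec, f, indent=2)
```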
The s2.dev dashboard now has a Data Plane studio to make it convenient to work with streams. You can:
- Do one-shot reads or live-tail any stream, starting from an offset, sequence number, or timestamp
- Append records in multiple modes
- Run stream commands
- Inspect records with command-record-aware rendering
The S2 cloud service is now GA!
We also raised a $3.85M seed round led by Accel, with participation from Y Combinator and other funds and angels like @t3.gg @chris.blue @fulmicoton.bsky.social
s2.dev/blog/ga
It was fun to run multi-agent research cohorts and have them connect over durable streams, making infinitely many reasoning topologies possible!
s2 cell architecture diagram
Curious about the architecture of S2? This is how we cook infinite durable streams for you.
just finished no-lifing building a TUI for s2, wdyt?
s2-lite is here – an open source @s2.dev Stream Store! It's a single binary you can run anywhere. Powered by SlateDB, so you can point it at an object storage bucket for durable streams with real-time reads. github.com/s2-streamsto...
Thankful to the @s2.dev team for all the progress we made this year! Really excited for 2026.
Some highlights 🧵
had a fun time chasing an issue where adding RUST_BACKTRACE=full just OOMKilled one of our pods
just tried out the real-time TodoMVC example with @livestore.dev + @s2.dev as the durable sync provider, and honestly it's crazy how fast it is!
local SQLite, works offline, real-time updates, and no complex setup. it just works
I wrote about it: s2.dev/blog/durable...
Try it here: s2.dev/demos/y-s2?r...
Excited to share y-s2, an open-source serverless backend for real-time collaborative applications using Yjs and @s2.dev!
After a bout of hallucination, I had GPT5 generate some backronyms for itself:
- Generally Pretends Truthfully
- Gullible People’s Trick
- Greatly Pretends Things
- Generated, Probably Twisted
- Guessing, Passing, Tricking
- Good at Pretending Truths
- Generally Produces Tall-tales
I wrote a bit about how we verify linearizability of @s2.dev using Porcupine! s2.dev/blog/lineari...
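Porcupine is the actual checker we use (it's a Go library); for a feel of what linearizability checking means, here's a toy Wing & Gong style search over a single-register history in Python. This is my own simplification for illustration, not S2's harness or Porcupine's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    kind: str      # "write" or "read"
    value: int     # value written, or value the read observed
    call: float    # invocation time
    ret: float     # response time

def linearizable(history, init=0):
    """Brute-force search: repeatedly try to linearize any pending op
    that no other pending op finished before, threading the register
    state through the recursion."""
    def search(pending, state):
        if not pending:
            return True
        for op in pending:
            # Real-time order: op can't go next if some other pending
            # op already returned before op was even invoked.
            if any(o.ret < op.call for o in pending if o is not op):
                continue
            # Sequential spec: a read must observe the current value.
            if op.kind == "read" and op.value != state:
                continue
            rest = [o for o in pending if o is not op]
            new_state = op.value if op.kind == "write" else state
            if search(rest, new_state):
                return True
        return False
    return search(list(history), init)
```

A read that overlaps a write may legally observe either the old or new value; a read that *starts after* a write returns must see it. Real checkers like Porcupine implement a much more efficient version of this search with memoization.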
s2.dev got a refresh ✨
Our Star Wars quickstart got good feedback, and now you can view the stream from the dashboard too 🫣
This comes in really handy for live debugging and observability, especially in systems built around agentic workflows, microservices, or event sourcing using @s2.dev
Music credits: www.youtube.com/watch?v=Ln7q...
IRC in 23 lines of bash with S2 by Stephen Balogh: gist.github.com/infiniteregr...
@s2.dev can now automatically delete streams once all their records have fallen out of retention, or if they were never written to, based on a configurable time threshold.
The option is called "delete-on-empty" and you can set it in the default stream configuration for your basin from the dashboard or CLI!
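The eligibility rule, roughly, as a toy sketch (names and fields here are made up for illustration, not S2's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Stream:
    live_records: int        # records still within retention
    last_nonempty_at: float  # last time the stream held live records
                             # (creation time, if it was never written to)

def should_delete(stream, now, min_age_secs):
    """delete-on-empty, roughly: a stream becomes eligible for deletion
    once it has held zero live records for at least `min_age_secs`."""
    return (
        stream.live_records == 0
        and now - stream.last_nonempty_at >= min_age_secs
    )
```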
screenshot of blog post titled `Stream per agent session`
Agents need granular streams. And yes, @s2.dev fits the bill nicely!
s2.dev/blog/agent-s...
Super cool XTDB plugin from @chuck.cassel.dev, which implements the write-ahead log using @s2.dev streams: github.com/chucklehead-...
Change data capture from Postgres is simple with the right tools. We collaborated with sequinstream.com on an integration so you can use @s2.dev to ship real-time features faster!
Had a lot of fun with this 😄 – a multiplayer, "instant re-playable" pseudoterminal that uses @s2.dev streams as a transport instead of SSH: s2.dev/blog/s2-term
You can now safely share @s2.dev streams directly with end clients like browsers, apps, or agents! No proxying required. Check it out, s2.dev/blog/access-...
Kind words from Chris about S2! I felt strongly that we shouldn't hitch our wagon to Kafka; compatibility isn't even a priority for us, for now. This may seem like a strange analogy... but if Kafka is OLAP, we want to be OLTP. See this demo to understand what I mean: s2.dev/docs/integra...