Let's do this.
www.youtube.com/watch?v=KtQ9...
Do you know Leptos? It uses Wasm rather than JS as a compilation target, but it's amazing for doing front-end dev in Rust. I've used it for some time, and except for some incompatibilities between major version upgrades, I haven't seen any bugs.
That sounds like something that should go in the stdlib at some point.
This is technically not even possible, I think. The function that `sys.settrace` calls can't yield back to the currently running event loop, because that function is not itself async. The event loop is frozen while we're stopped on a breakpoint, and asyncio event loops are not reentrant.
Tmux should soon support automatic light/dark mode (following the system theme) when used in Ghostty. If you're a Ghostty+tmux+neovim user, please reach out or try this PR: github.com/tmux/tmux/pu...
Sure! Thanks!
I couldn't help but notice that, at least in my currently installed neovim version (2nd December 2024), the built-in terminal emulator doesn't propagate DEC mode 2031 to nested nvim instances. Given that you were working on mode 2031, do you think this is something that will be fixed?
Wow, I wish you all the best!
Have you used the datetype library? I believe it provides a type-safe, datetime-compatible wrapper that prevents accidentally mixing timezone-aware and timezone-naive timestamps.
`set(s) - {"0", "1"} == set()`
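For context, that expression tests whether a string consists only of the characters "0" and "1". A rough sketch of how it might be wrapped (the function name is mine):

```python
def is_binary(s: str) -> bool:
    # Subtracting the allowed characters leaves the empty set
    # exactly when s contains nothing but "0" and "1".
    # Note: this also returns True for the empty string.
    return set(s) - {"0", "1"} == set()

print(is_binary("10110"))  # True
print(is_binary("10201"))  # False
print(is_binary(""))       # True
```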
Maybe later ;)
@neovim.io : Wezterm issue: github.com/wez/wezterm/... Tmux issue: github.com/tmux/tmux/is...
This is perfect! Now we need to get everyone (Wezterm, Alacritty, tmux, ...) to support this. It's so little effort for such a big gain.
@willmcgugan.bsky.social : Do you know about this capability?
Does this work in tmux (or any other terminal multiplexer)? As a developer of a TUI library I'm also interested in any docs around this DEC mode.
What you have here is really impressive.
True, but I think the main issue is that by applying structured concurrency to threads (so no callbacks), we'll end up with many more (often short-lived) threads than typical threading applications. OS threads probably have too much overhead here.
Yes, true, but (1) they are always shielded from cancellation and (2) I believe `from_thread` blocks the underlying *OS* thread. This can exhaust the thread pool. And less of an issue (3) if you call a sync library that does IO in `to_thread`, this library won't use `from_thread` to schedule its IO.
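To illustrate point (2) with plain asyncio (a sketch of the same pattern, not anyio's actual `from_thread` implementation): scheduling work back onto the loop from a `to_thread` worker blocks that worker's OS thread until the coroutine completes, occupying a pool slot the whole time.

```python
import asyncio

async def on_loop() -> str:
    await asyncio.sleep(0.05)
    return "done on loop"

def sync_worker(loop: asyncio.AbstractEventLoop) -> str:
    # run_coroutine_threadsafe returns a concurrent.futures.Future;
    # .result() blocks this OS thread (a default-executor pool slot)
    # until the coroutine has finished on the event loop.
    fut = asyncio.run_coroutine_threadsafe(on_loop(), loop)
    return fut.result()

async def main() -> str:
    loop = asyncio.get_running_loop()
    # to_thread hands sync_worker to the default thread pool.
    return await asyncio.to_thread(sync_worker, loop)

print(asyncio.run(main()))
```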
I still have to read about project Loom, but I guess there will be options. Like allowing cancellation to happen at *every* Python instruction; at the start of a function; at explicit cancellation points; or having code entirely shielded from cancellation. The latter probably being the default.
Yes, so do I. But there's a strong point to be made that this behavior prevents asyncio from scaling over multiple threads (like Tokio does in Rust). Maybe asyncio will remain single-threaded, and if we want multi-core (structured) concurrency, micro-threads will be the way to go.
Also having "await" be both a cancellation point as well as a context switch is somewhat arbitrary. There are many situations where you want to have one without the other. I think these are distinct concepts and it only makes sense to be explicit about the cancellation points. (cc: @mitsuhiko.at )
Once we get rid of the GIL, concurrent applications should be able to leverage multiple CPUs. Relying on code between two "await"-points being atomic no longer works and the only thing "await" signals is a context switch for I/O, which is something the interpreter already knows without the "await".
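A quick illustration of the atomicity people rely on today (the counter example is mine; it only holds on a single-threaded event loop):

```python
import asyncio

counter = 0

async def bump() -> None:
    global counter
    # In single-threaded asyncio these three lines run atomically:
    # no other task can interleave, because there is no "await"
    # between the read and the write.
    current = counter
    current += 1
    counter = current

async def main() -> None:
    await asyncio.gather(*(bump() for _ in range(1000)))

asyncio.run(main())
print(counter)  # 1000, guaranteed on a single-threaded loop
```

With truly parallel threads, that read-modify-write would need a lock.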
Yes, because function coloring is no longer an issue. Think about it the other way around, what does "await" really bring? And those things, why would they not be possible with "normal" Python threads? (Not OS-level threads, but threads as they could be exposed through the Python interpreter.)
I imagine this would be implemented by having a user provide explicit cancellation points. E.g., a call to `threading.cancellation_point()` which will raise a `CancelledError` or something like that.
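A rough userspace sketch of what that could look like (`cancellation_point`, `spawn`, and `cancel` are hypothetical names; nothing like this exists in the stdlib today, and a real design would live inside the interpreter):

```python
import threading

class CancelledError(Exception):
    """Raised inside a thread at its next cancellation point."""

# Per-thread cancellation flags, keyed by thread ident.
_flags: dict[int, threading.Event] = {}

def cancellation_point() -> None:
    # Explicit cancellation point: raise if this thread was cancelled.
    flag = _flags.get(threading.get_ident())
    if flag is not None and flag.is_set():
        raise CancelledError

def spawn(target) -> threading.Thread:
    flag = threading.Event()
    def runner():
        _flags[threading.get_ident()] = flag
        try:
            target()
        except CancelledError:
            pass  # the thread observed the cancellation and unwound cleanly
        finally:
            _flags.pop(threading.get_ident(), None)
    t = threading.Thread(target=runner)
    t.cancel_flag = flag  # stash the flag on the Thread object
    t.start()
    return t

def cancel(t: threading.Thread) -> None:
    t.cancel_flag.set()
```

A long-running worker would then call `cancellation_point()` periodically inside its loop, and `cancel(t)` makes the next such call raise.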
This is worth a read. I can see a future where CPython has an event loop within the interpreter, and where structured concurrency and cancellation become available for normal Python threads making the async/await keywords redundant.
Not completely a no-op for backward compatibility, I think. "await" needs to become a cancellation point, and a lock should be held between two "await" points because people rely on that code being atomic.
The more I think about it, the more it makes sense and the more excited I become! (Saying that as someone who's been using asyncio since it was called Tulip) I really hope someone has the energy to make it happen.
I really love the idea of having all structured concurrency primitives available for threads. However, I don't see this as feasible, unless CPython itself would include an event loop as part of the interpreter. That's a huge change. Cancellation for threads is also different from what asyncio does.