Note that I don't mind, really, I have plenty. It's just something I noticed.
Don't know, but it's significantly larger than any VS2022 solution such as Live++.
Live++ has >30 projects, a lot more code and also uses VA. Needs about 2 gigs of RAM.
I've read that RAM usage should actually be lower in 2026, but that's not the case for me.
Love the much-improved performance and good-looking UI, but I have to say it's not easy on RAM.
On my hobby solution with a single project and 66 translation units, it regularly eats ~4GB of RAM. Though maybe that's actually caused by Visual Assist...
Hot take:
If you're writing ImGui-style code in C++ without any kind of code hot-reload, you're holding your computer wrong.
New blog post: Phase 1: Build information
liveplusplus.tech/blog/posts/2...
#cpp
Iteration is still never as fast as with hot reload.
If you've never experienced it yourself, you don't know how transformative it is.
Time to first image on screen: <5ms.
Time to main menu with D3D rendering and a few resources: 1-2s.
The absurd thing is: I worked on optimizing time-to-first present in my own engine today, and it's now <5ms.
For me, even D3D init is too slow (~200ms), so I'm doing software "rendering" and blitting to the Win32 HWND with a loading bar while everything is initialized in main().
Bliss.
I'm trying to envision how many types you would need for something like registration to take almost 10s in an optimized build.
Are you running that on a GameBoy by any chance?
Way, way too high. What is static initialization doing? Sweet Jesus, people and their singletons, automatic registration factories, and globals that need a constructor. That stuff needs to die, fast. Init everything in main and be done!
You can quite literally load 5GB of data in 1-2 seconds.
No no no, Sony shuts down Bluepoint? I'm sorry for everybody at Bluepoint, that really sucks.
Guess it's that time of year again until end of March, probably lots of news like this to come :(.
www.bloomberg.com/news/article...
We've just released a new Insider update with some much-requested features, like being able to specify env vars when running, auth support for symbol servers, and proper progress reporting for symbol downloads. And of course, many fixes & QoL improvements.
Go check it out!
Screenshot of what's apparently supposed to be a "git flow" chart from Microsoft learning materials. It's full of bizarre spelling errors and fucked up diagram elements that make it pretty clear the thing was AI generated.
oh.... my god??
actual chart from learn.microsoft.com/en-us/traini... btw
AI slop paving its way through codebases like the cancer it is.
Technically possible, but a tremendous amount of work. The instruction-level tracing required for TTD generates gigabytes in mere seconds, and requires a full, working CPU emulator to replay.
Not 100% the same, but almost.
If the platform supports dumping user-mode *and* kernel state, then yes, this would work.
It's just not possible to alter/manipulate kernel state during a replay to rewind to an earlier point in time.
This is a trade-off between low-overhead, always-on recording & replay across all cores (= Echo) vs. instruction-level tracing and single-core emulation like TTD.
While the latter can detect these races, it has an incredibly high overhead, trace size, and can't replay kernel state (= audio & GPU).
If your process has a true race, which *does* lead to divergent behaviour, Echo will detect and report this, but cannot pinpoint exactly where that happens - just that there's a race, somewhere.
Generally speaking, sync primitives will replay in the exact same order. So if during recording thread 2 locked mutex C first, then the same will happen during replay.
If your process has a benign race somewhere, which does not lead to divergent behaviour, then Echo won't be able to tell you.
I can't rule out that I'm doing something stupid somewhere, but this should give good test coverage.
It's also the reason why I'm starting with PS5, and not Windows, because the API surface is known and limited.
Absolutely, people need to trust this thing.
I'm developing everything with a complete test-suite in the background, which automatically verifies each and every API interaction, checking all data, return values, etc. against their recorded counterpart.
Unfortunately, not with this approach. It replays everything using real APIs, which manipulate kernel state, so that part can never be rewound.
Thanks Alex, truly means a lot!
Reposting this for the European crowd.
Please repost and share among friends, peers, coworkers - I'm trying to gauge if this has merit to be continued or not.
1) Rewind is not possible with that approach, unfortunately.
2) Yes, assuming that deterministic CPU code will produce the exact same data streams and commands for the GPU to execute.
Thanks Arseny!
And you're correct on all counts, you clearly understood what it is!
Yes, this captures everything that's non-deterministic, including network.
In a replay, the data (e.g. from a network request) will be there at the same time in the same frame as during the recording.
Thanks!
Still a long way to go until I can easily record a UE5 game. Which kind of reminds me of how Live++ started :).
All correct.
Oh, and the 27MB contain the raw PCM data for the music. If you leave that out, it's much much smaller, in the high KB range for that recording (uncompressed).