On the rendering side haha, not considering the simulation part ^^'
That is probably the hardest part of it all haha
But yes, I think this is a good approach: start from the "highest-quality" end and work down.
I am gathering/cleaning some notes on a "kind of similar" approach and I'll try to post them...
Yes, I do agree that this is really content-dependent (ideally we would like to sample at the Nyquist frequency).
2 px per froxel cone w/ 256 depth slices is already more than 530M froxels at 4K, 235M+ at 1440p (without occlusion culling) 🥲...
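A quick back-of-the-envelope check of those numbers (a sketch only, assuming the froxel grid simply tiles the screen at `px_per_froxel` pixels per froxel laterally; `froxel_count` is a hypothetical helper, not anyone's actual API):

```python
def froxel_count(width, height, px_per_froxel, depth_slices):
    # Lateral grid: screen tiled by cells of px_per_froxel x px_per_froxel pixels,
    # then extruded along depth into depth_slices slices.
    return (width // px_per_froxel) * (height // px_per_froxel) * depth_slices

print(froxel_count(3840, 2160, 2, 256))  # 530,841,600 froxels at 4K
print(froxel_count(2560, 1440, 2, 256))  # 235,929,600 froxels at 1440p
```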
You are saying that you oversample laterally near the camera but you don't actually go down to pixel size, right? Do you use tiles of NxN full-res pixels (N = 4? 8?)
Alright thanks!
So you actually render a **lot** of froxels compared to a traditional game-engine implementation! But it shows, since the resulting visual quality is much higher.
This is quite a lot of slices; however, the range is also big and maybe larger than a typical use case for EmberGen... But let's say we want voxels of 1cm per side: this still gives a crossover depth of 18.7m and 1870 uniform slices.
Am I mistaken? 3/3
For a vertical FoV of 60° and a viewport height of 2160 pixels, if we want to resolve voxels of 10cm per side, we have a crossover depth of about 187m, resulting in 1870 slices in the uniform range (www.desmos.com/calculator/k...). 2/3
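The arithmetic behind that post can be sketched as follows (my own reconstruction, assuming the crossover depth is where one pixel's lateral footprint equals the voxel side):

```python
import math

fov_v = math.radians(60.0)   # vertical field of view
height_px = 2160             # viewport height in pixels
voxel = 0.10                 # target voxel side, in meters

# One pixel's lateral footprint at depth z is 2 * z * tan(fov_v / 2) / height_px.
# The crossover depth is where that footprint equals the voxel size:
crossover = voxel * height_px / (2.0 * math.tan(fov_v / 2.0))

print(crossover)            # ~187 m
print(crossover / voxel)    # ~1870 uniform slices before the crossover
```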
Really enlightening explanations, thanks!
However, there is something that feels a bit off to me. Maybe I did the calculations wrong, but let's go through an example: 1/3
I wrote a follow-up post about memory arenas and containers: tcantenot.github.io/posts/memory...
I wrote a blog post with some thoughts and experiments on the usage of memory arenas in C++: tcantenot.github.io/posts/memory...