Looking for fully configurable 3D street scene assets and real-time rendered videos? Our latest work generates physically-grounded 3D scenes ideal for robot learning & testing.
Check out our paper + interactive demo: light.princeton.edu/lsd-3d
Super excited to share our @natmachintell.nature.com paper (www.nature.com/articles/s42...), in which we recast and generalize 3D tracking as an inverse neural rendering task: we fit a scene graph whose rendering best explains the observed image.
Project and Paper: light.princeton.edu/publication/...
A new AI-based inverse rendering method reconstructs 3D scene details from simulated images, offering improved generalization and transparency across diverse real-world datasets. doi.org/g9x2cv
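The inverse-rendering idea above can be sketched as test-time optimization: starting from an initial scene estimate, descend a photometric loss between the rendered and observed images until the scene parameters explain the observation. A minimal toy sketch (not the paper's method): the "scene graph" is reduced to a single 2D object position, and the hypothetical `render` function stands in for a neural renderer.

```python
import numpy as np

# Toy "renderer": maps a scene parameter (a 2D object position) to an
# image of a Gaussian blob. A hypothetical stand-in for a neural renderer.
def render(pos, size=32):
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    return np.exp(-((xs - pos[0]) ** 2 + (ys - pos[1]) ** 2) / 20.0)

# Observed image produced by an unknown ground-truth pose.
target = render(np.array([20.0, 12.0]))

# Inverse rendering: normalized gradient descent on the photometric
# loss, using the renderer's analytic gradient w.r.t. the pose.
pos = np.array([16.0, 10.0])
ys, xs = np.mgrid[0:32, 0:32].astype(float)
for _ in range(150):
    img = render(pos)
    resid = img - target                      # photometric residual
    grad = np.array([                         # dL/dpos via chain rule
        np.sum(2.0 * resid * img * (xs - pos[0]) / 10.0),
        np.sum(2.0 * resid * img * (ys - pos[1]) / 10.0),
    ])
    pos -= 0.3 * grad / (np.linalg.norm(grad) + 1e-9)

print(np.round(pos, 1))  # recovered pose approaches the ground truth
```

The real systems replace the toy blob with a learned generative renderer and optimize full scene-graph parameters (poses, shapes, appearance), but the fit-by-rendering loop is the same shape.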
Only a couple weeks left to submit a paper to Neural Fields Beyond Conventional Cameras at CVPR 2025! neural-bcc.github.io#call4paper
Our *non-archival* workshop welcomes both previously published and novel work. A great opportunity to get project feedback and connect with other researchers!
Congrats 🥳
Following an excellent debut at ECCV 2024, we're excited to announce the 2nd Workshop on Neural Fields Beyond Conventional Cameras at this CVPR 2025 in Nashville, Tennessee!
Workshop site: neural-bcc.github.io
If you're going to flag a missing reference to an arXiv paper published after the #CVPR2025 submission deadline as a major weakness, could you also provide access to one of these?
For other 3D vision newcomers to Bluesky: I highly recommend joining @chrisoffner3d.bsky.social's list to follow the right people ;): go.bsky.app/Cfm9XFe