Saturday, December 30, 2023

Real-Time 3D Mapping

Kaiwen Song and Juyong Zhang of the University of Science and Technology of China claim to be "the first to achieve real-time rendering of large-scale scenes" through the use of neural rendering. They present their findings in their paper City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web, whose project website includes three live demos.

Neural Radiance Fields (NeRFs) use machine learning to generate photo-realistic 3D scenes from 2D images. NeRFs can therefore be used to create 3D models of real-world scenes, render novel views of a scene from any angle, and even generate synthetic 3D scenes for virtual reality applications. One problem with NeRFs, however, is the computation, memory and bandwidth required for large 3D scenes, which makes rendering such scenes in real time particularly difficult.
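For readers curious about the mechanics, the core of NeRF-style rendering is volume rendering: a network predicts a colour and density at sample points along each camera ray, and those samples are alpha-composited into a pixel colour. The sketch below illustrates only that compositing step with made-up toy values; it is not the authors' code.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray, NeRF-style.

    densities: (N,) non-negative volume densities at the sample points
    colors:    (N, 3) RGB colours at the sample points
    deltas:    (N,) distances between consecutive samples along the ray
    Returns the rendered RGB colour for the ray.
    """
    alphas = 1.0 - np.exp(-densities * deltas)            # opacity of each sample
    transmittance = np.cumprod(
        np.concatenate([[1.0], 1.0 - alphas[:-1]]))       # light surviving to each sample
    weights = transmittance * alphas                       # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: three samples along a single ray
densities = np.array([0.1, 2.0, 5.0])
colors = np.array([[0.9, 0.2, 0.2], [0.2, 0.9, 0.2], [0.2, 0.2, 0.9]])
deltas = np.array([0.5, 0.5, 0.5])
print(composite_ray(densities, colors, deltas))
```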

City-on-Web claims to enable the real-time rendering of large 3D scenes by partitioning each scene into "manageable blocks, each with its own Level-of-Detail, ensuring high fidelity, efficient memory management and fast rendering". You can test these claims for yourself on the three demo scenes. Video highlights of each scene are also available if your computer struggles to render the 3D maps.
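The general idea behind this kind of block-plus-Level-of-Detail scheme can be illustrated with a small sketch: split the scene's footprint into a grid of blocks, then at render time pick a coarser representation for blocks far from the camera. This is a hypothetical illustration of the concept under assumed distance thresholds, not the City-on-Web implementation.

```python
import math

class Block:
    """Hypothetical block: a square tile of the scene with several pre-baked LODs."""
    def __init__(self, center_x, center_y, size, num_lods=3):
        self.center = (center_x, center_y)
        self.size = size
        self.num_lods = num_lods

def partition_scene(min_xy, max_xy, block_size):
    """Split the scene's ground-plane bounding box into a regular grid of blocks."""
    blocks = []
    x = min_xy[0]
    while x < max_xy[0]:
        y = min_xy[1]
        while y < max_xy[1]:
            blocks.append(Block(x + block_size / 2, y + block_size / 2, block_size))
            y += block_size
        x += block_size
    return blocks

def select_lod(block, camera_xy, lod_distances=(100.0, 400.0)):
    """Pick a level of detail from the block's distance to the camera:
    nearby blocks use the finest LOD (0), distant blocks use coarser ones."""
    d = math.dist(block.center, camera_xy)
    for lod, threshold in enumerate(lod_distances):
        if d < threshold:
            return lod
    return block.num_lods - 1

blocks = partition_scene((0.0, 0.0), (1000.0, 1000.0), block_size=250.0)
camera = (120.0, 80.0)
for b in blocks[:4]:
    print(b.center, "-> LOD", select_lod(b, camera))
```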

Apparently the actual code for rendering large 3D scenes in real-time is 'coming soon'.
