Recently, researchers from Google and the University of Illinois Urbana-Champaign published a study titled "IllumiNeRF: 3D Relighting without Inverse Rendering." The work addresses a view-synthesis problem: given a set of images of an object captured under unknown lighting, recover a 3D representation that can be rendered from novel viewpoints under a specified target lighting.
Traditional methods rely on inverse rendering, attempting to disentangle the geometry, materials, and lighting that together explain the input images. However, this typically requires optimization through differentiable Monte Carlo rendering, which is both fragile and computationally expensive.
The team instead proposes a simpler approach, IllumiNeRF: first, relight each input image with an image diffusion model conditioned on the target lighting; then, reconstruct a neural radiance field (NeRF) from these relit images, from which novel views can be rendered under the target lighting. They show this strategy is surprisingly competitive, achieving state-of-the-art results on multiple relighting benchmarks.
The method works as follows:
Given a set of input images and their camera poses (a), the researchers first train a NeRF to extract 3D geometry (b);
Using this geometry and the target lighting (c), they render radiance cues for each input view (d);
Next, each input image is independently relit with the relighting diffusion model (e), drawing S samples of possible solutions per image (f);
Finally, the set of relit images is distilled into a single 3D representation via latent NeRF optimization (g, h).
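The steps above can be summarized as a structural sketch. Note that every function below is a hypothetical placeholder (the names `extract_geometry`, `make_radiance_cues`, etc. are not from the paper or any released code): the real pipeline involves NeRF training, a lighting-conditioned image diffusion model, and latent NeRF optimization, none of which are implemented here. The sketch only illustrates how the stages (a)–(h) feed into one another.

```python
# Hypothetical sketch of the IllumiNeRF pipeline stages (a)-(h).
# All functions are stubs standing in for the heavy components.

def extract_geometry(images, poses):
    """(a)-(b): train a NeRF on the posed inputs to get coarse 3D geometry (stubbed)."""
    return {"num_views": len(images)}

def make_radiance_cues(geometry, target_light):
    """(c)-(d): render radiance cues for each input view under the target lighting."""
    return [f"cue_{i}_{target_light}" for i in range(geometry["num_views"])]

def relight_with_diffusion(image, cue, num_samples):
    """(e)-(f): draw S candidate relit versions of one input image."""
    return [f"{image}|{cue}|sample{s}" for s in range(num_samples)]

def latent_nerf_optimize(relit_sets):
    """(g)-(h): distill the per-view sample sets into one 3D representation."""
    return {"views": len(relit_sets), "samples_per_view": len(relit_sets[0])}

def illuminerf(images, poses, target_light, num_samples=4):
    geometry = extract_geometry(images, poses)
    cues = make_radiance_cues(geometry, target_light)
    # Each input image is relit independently; consistency is enforced
    # only later, by the latent NeRF optimization.
    relit = [relight_with_diffusion(img, cue, num_samples)
             for img, cue in zip(images, cues)]
    return latent_nerf_optimize(relit)

model = illuminerf(["img0", "img1", "img2"],
                   ["pose0", "pose1", "pose2"],
                   target_light="sunset")
```

The key design point the sketch preserves is that the per-image relighting samples are inconsistent with each other; the final latent NeRF optimization is what reconciles them into a single 3D-consistent result.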
3D Consistent Relighting
The first row shows renderings from the final latent NeRF;
The second row shows, for each rendered frame above it, the diffusion samples from the nearest training view.
This technique has applications in computer graphics, augmented reality, and virtual reality. In film production, for instance, it could render 3D scenes under different lighting conditions, saving the cost and time of reshooting. In virtual reality, users could experience scenes under varied lighting environments, enhancing realism and immersion. It could also serve digital art creation and architectural design, giving users more flexible control over lighting and rendering effects.
Product Entry: https://top.aibase.com/tool/illuminerf