Sketch2NeRF

Text-to-3D Generation Guided by Multi-view Sketches

Categories: Text-to-3D Generation, Sketch Control
Sketch2NeRF is a text-to-3D generation framework guided by multi-view sketches. It leverages pre-trained 2D diffusion models (such as Stable Diffusion and ControlNet) to optimize 3D scenes represented by neural radiance fields (NeRF), and proposes a novel synchronized generation-and-reconstruction approach to optimize the NeRF effectively. Experiments on two collected multi-view sketch datasets demonstrate that the method can synthesize high-fidelity 3D content that is consistent across views and offers fine-grained sketch control from text prompts. Extensive results show that it achieves state-of-the-art performance in terms of sketch similarity and text alignment.
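To make the idea concrete, the loop below is a minimal, hypothetical sketch of the synchronized generation-and-reconstruction optimization described above. Every component here (`render`, `controlnet_guidance`, `optimize`) is a stand-in stub invented for illustration, not the authors' implementation: the real system renders a NeRF and uses sketch-conditioned Stable Diffusion/ControlNet to supply score-distillation-style gradients, whereas this toy pulls a rendered "image" toward a per-view sketch target with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(params, view):
    """Stub NeRF renderer (hypothetical): map shared scene parameters
    plus a scalar view offset to a fake 8x8 'rendered image'."""
    return np.tanh(params + view)

def controlnet_guidance(image, sketch):
    """Stub for sketch-conditioned diffusion guidance (hypothetical):
    the residual toward the sketch stands in for an SDS-style gradient."""
    return sketch - image

def optimize(sketches, views, steps=200, lr=0.1):
    """Alternate over all sketch viewpoints each step, so generation
    (guidance) and reconstruction update the same shared parameters."""
    params = rng.normal(size=sketches[0].shape) * 0.1
    for _ in range(steps):
        for sketch, view in zip(sketches, views):
            img = render(params, view)
            grad = controlnet_guidance(img, sketch)
            # chain rule through tanh: d tanh(x)/dx = 1 - tanh(x)^2
            params += lr * grad * (1 - img ** 2)
    return params

# Three consistent multi-view sketch targets, realizable by one scene.
views = [-0.5, 0.0, 0.5]
sketches = [np.tanh(0.3 + v) * np.ones((8, 8)) for v in views]
params = optimize(sketches, views)
err = max(np.abs(render(params, v) - s).max()
          for s, v in zip(sketches, views))
```

Because the three sketch targets are mutually consistent (all realizable by the same underlying parameters), the shared-parameter updates converge and the final rendering error `err` is small for every view; mutually inconsistent sketches would instead force a compromise, which is why multi-view consistency of the sketches matters.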

Sketch2NeRF Visit Over Time

Monthly Visits: 21,315,886
Bounce Rate: 45.50%
Pages per Visit: 5.2
Visit Duration: 00:05:02
