DUSt3R
Dense 3D reconstruction without camera calibration information
3D Reconstruction · Computer Vision
DUSt3R is a dense, unconstrained stereo 3D reconstruction method applicable to arbitrary image collections: it requires no prior knowledge of camera calibration or viewpoint poses. By casting the pairwise reconstruction problem as pointmap regression, DUSt3R relaxes the hard constraints of traditional projective camera models. It handles monocular and binocular reconstruction in a single unified framework and adds a simple yet effective global alignment strategy for multi-image scenes. The network is built on standard Transformer encoders and decoders, allowing it to leverage powerful pre-trained models. DUSt3R directly outputs a 3D model of the scene together with depth information, from which pixel-wise matches as well as relative and absolute camera parameters can be recovered.
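As a rough sketch of how this pipeline is typically driven, the snippet below follows the usage pattern of the publicly released dust3r Python package; the checkpoint name, image paths, and optimizer settings are illustrative assumptions rather than part of the description above.

```python
import torch

from dust3r.model import AsymmetricCroCo3DStereo
from dust3r.utils.image import load_images
from dust3r.image_pairs import make_pairs
from dust3r.inference import inference
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed checkpoint name; substitute whichever DUSt3R weights you have downloaded.
model = AsymmetricCroCo3DStereo.from_pretrained(
    "naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt"
).to(device)

# Uncalibrated input images -- no intrinsics or poses are provided anywhere.
images = load_images(["scene/img1.jpg", "scene/img2.jpg", "scene/img3.jpg"], size=512)

# Build image pairs and regress a pointmap for each pair.
pairs = make_pairs(images, scene_graph="complete", prefilter=None, symmetrize=True)
output = inference(pairs, model, device, batch_size=1)

# Global alignment fuses the pairwise pointmaps into one consistent scene.
scene = global_aligner(output, device=device, mode=GlobalAlignerMode.PointCloudOptimizer)
scene.compute_global_alignment(init="mst", niter=300, schedule="cosine", lr=0.01)

# Cameras and dense geometry are recovered from the aligned scene.
focals = scene.get_focals()      # per-image focal lengths
poses = scene.get_im_poses()     # camera-to-world poses
pts3d = scene.get_pts3d()        # per-pixel 3D points (the dense reconstruction)
depths = scene.get_depthmaps()   # per-image depth maps
```

Note that camera intrinsics and poses appear only as outputs of the global alignment, never as inputs, which is the defining property of the method described above.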