IntrinsicAnything is an image inverse rendering method that recovers object materials from images captured under unknown, static lighting. It learns material priors with diffusion-based generative models: the rendering equation is decomposed into diffuse and specular reflection terms, and the generative models, trained on a large dataset of existing 3D objects, act as priors that resolve the inherent ambiguity of inverse rendering. The method also introduces a coarse-to-fine training strategy in which the estimated materials guide the diffusion model, imposing multi-view consistency constraints and yielding more stable and accurate results; a sketch of the underlying decomposition follows below.
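
As an illustration of the diffuse/specular split mentioned above (written here in standard rendering-equation notation, not copied from the paper itself), the outgoing radiance at a surface point can be decomposed as

\[
L_o(\mathbf{x}, \omega_o)
= \underbrace{\frac{\rho_d(\mathbf{x})}{\pi} \int_{\Omega} L_i(\mathbf{x}, \omega_i)\,(\mathbf{n}\cdot\omega_i)\, d\omega_i}_{\text{diffuse term}}
\;+\;
\underbrace{\int_{\Omega} f_s(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\,(\mathbf{n}\cdot\omega_i)\, d\omega_i}_{\text{specular term}}
\]

where \(\rho_d\) is the diffuse albedo, \(f_s\) is the specular BRDF, \(L_i\) is the unknown incident lighting, and \(\mathbf{n}\) is the surface normal. Under this decomposition, the albedo and the specular component can be treated as separate quantities to be recovered, which is what allows learned generative priors to constrain the otherwise ill-posed separation of material from unknown lighting.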