StructLDM is a structured latent diffusion model that learns 3D human generation from 2D images. It generates diverse, view-consistent humans and supports different levels of controllable generation and editing, such as compositional generation and local clothing editing. Notably, generation and editing are clothing-agnostic, requiring no clothing types or mask conditions. StructLDM was developed by Tao Hu, Fangzhou Hong, and Ziwei Liu of S-Lab, Nanyang Technological University, and the accompanying paper was published at ECCV 2024.