Google DeepMind released SODA, a self-supervised diffusion model, yesterday. SODA achieves precise control over the diffusion process through unsupervised operations on a learned latent space, separates style from content, and supports capabilities such as novel view synthesis. The model demonstrates strong representation learning and generative abilities, bringing new ideas and possibilities to deep learning research. The paper details the architecture design, the mechanism for generating novel views, and the optimization techniques used. In the future, this method may be extended to dynamic, compositional scenes, offering new directions for the AI field.
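The core idea described above, that an encoder compresses one view into a compact latent code which then conditions a denoising diffusion decoder on another view, can be sketched as a toy example. This is a minimal illustration of that encoder-conditioned-denoiser structure only, not the paper's actual architecture; all names, dimensions, and the linear maps standing in for neural networks are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, LATENT_DIM = 64, 8  # illustrative flattened-image and bottleneck sizes

# Linear maps stand in for the real encoder and denoiser networks (hypothetical).
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, IMG_DIM))
W_dec = rng.normal(scale=0.1, size=(IMG_DIM, IMG_DIM + LATENT_DIM))

def encode(source_view):
    """Compress a source view into a low-dimensional latent code z."""
    return W_enc @ source_view

def denoise(noisy_target, z):
    """Predict the noise in a target view, conditioned on the latent z."""
    return W_dec @ np.concatenate([noisy_target, z])

# One training-style step: encode a source view, corrupt a target view with
# Gaussian noise, and score the denoiser's noise prediction (the standard
# diffusion objective).
source = rng.normal(size=IMG_DIM)
target = rng.normal(size=IMG_DIM)
noise = rng.normal(size=IMG_DIM)
noisy_target = target + noise

z = encode(source)                     # compact latent carrying content/style
predicted_noise = denoise(noisy_target, z)
loss = np.mean((predicted_noise - noise) ** 2)
```

Because all generative capacity flows through the narrow latent `z`, manipulating `z` at sampling time is what enables the kind of unsupervised control and style/content separation the article describes.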