Cyanpuppets, a 2D-video-to-3D-animation algorithm model, will release version 1.50 this Friday. The new version is trained on the largest dataset to date and is the most versatile release so far, letting users create 3D dance content in real time with just two webcams. Developed by Cyanpuppets Technology, the model combines convolutional neural networks and deep neural network algorithms in an in-house AI architecture designed to bridge the virtual and real worlds. The company's CYAN.AI platform, paired with NVIDIA GPU computing power, generates 3D motion data from 2D video, giving users non-wearable motion capture technology, full-body interactive virtual social features, and 3D animation creation tools.
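The article does not spell out how CYAN.AI turns two 2D webcam feeds into 3D motion data. As a rough, hypothetical sketch of the general idea, the Python snippet below triangulates a single body joint from 2D keypoints detected in two calibrated camera views; all intrinsics, projection matrices, and keypoint coordinates are made-up placeholders, and OpenCV's generic triangulation stands in for whatever pipeline Cyanpuppets actually uses.

```python
import numpy as np
import cv2

# Hypothetical illustration only: the article does not disclose Cyanpuppets'
# actual method. This shows one generic way to recover a 3D joint position
# from 2D keypoints seen by two synchronized, pre-calibrated webcams.

# Placeholder intrinsics shared by both webcams.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Camera 1 at the origin; camera 2 translated 0.5 m along the x-axis.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Placeholder 2D keypoints (x, y) for the same joint in each camera view,
# e.g. produced per frame by a CNN-based 2D pose estimator.
pts_cam1 = np.array([[340.0], [250.0]])  # shape (2, N), here N = 1 joint
pts_cam2 = np.array([[310.0], [252.0]])

# Triangulate to homogeneous 3D coordinates, then normalize.
points_4d = cv2.triangulatePoints(P1, P2, pts_cam1, pts_cam2)
points_3d = (points_4d[:3] / points_4d[3]).T  # shape (N, 3)

print(points_3d)  # per-joint 3D positions that could drive a skeleton rig
```

Repeating this per joint and per frame would yield a stream of 3D skeleton poses suitable for driving an avatar in real time, which is the kind of output the announcement describes, though the company's own CNN/DNN architecture is not public.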