AnimateLCM is a deep learning model for generating animation videos, capable of producing high-fidelity animations with only a few sampling steps. Rather than learning consistency directly from a raw video dataset, AnimateLCM adopts a decoupled consistency learning strategy that separates the distillation of image generation priors from that of motion generation priors, which improves both training efficiency and the visual quality of the generated animations. AnimateLCM can also be combined with plugins from the Stable Diffusion community to enable various controllable generation features, and it has demonstrated strong performance in image-conditioned and layout-conditioned video generation.