A team from Nanyang Technological University has introduced InsActor, a framework that uses a diffusion-based human motion model to generate physically plausible character animations driven by instructions. InsActor performs conditional motion planning with a diffusion policy, capturing the complex relationship between high-level human instructions and character motions. Experiments show that InsActor achieves state-of-the-art results on instruction-driven motion generation and waypoint-guided tasks. The framework's flexibility supports customizable animations, demonstrating broad applicability while preserving visual quality and physical feasibility.
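
To make the idea of conditional motion planning with a diffusion policy concrete, the sketch below shows a generic DDPM-style reverse-diffusion loop that samples a motion plan conditioned on an instruction embedding. This is an illustrative sketch, not InsActor's actual implementation: the encoder `encode_instruction`, the denoiser `predict_noise`, and all shapes and hyperparameters are hypothetical placeholders.

```python
import numpy as np

# Hypothetical dimensions: a plan is a sequence of PLAN_LEN high-level
# character states (e.g., pose targets) generated jointly by the diffusion model.
PLAN_LEN, STATE_DIM, EMBED_DIM = 60, 64, 128
NUM_DIFFUSION_STEPS = 50

rng = np.random.default_rng(0)

def encode_instruction(text: str) -> np.ndarray:
    """Placeholder text encoder; a real system would use a learned language encoder."""
    text_rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return text_rng.standard_normal(EMBED_DIM)

def predict_noise(noisy_plan: np.ndarray, t: int, instruction_emb: np.ndarray) -> np.ndarray:
    """Placeholder for the learned denoising network eps_theta(x_t, t, c).
    Returns zeros here; a trained model would predict the injected noise."""
    return np.zeros_like(noisy_plan)

def sample_plan(instruction: str) -> np.ndarray:
    """Reverse diffusion that produces a motion plan conditioned on the instruction."""
    c = encode_instruction(instruction)
    betas = np.linspace(1e-4, 0.02, NUM_DIFFUSION_STEPS)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal((PLAN_LEN, STATE_DIM))  # start from pure Gaussian noise
    for t in reversed(range(NUM_DIFFUSION_STEPS)):
        eps = predict_noise(x, t, c)
        # Posterior mean of x_{t-1} given the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x  # sequence of high-level states handed to a low-level controller

plan = sample_plan("walk forward and wave with the right hand")
print(plan.shape)  # (60, 64)
```

In a pipeline of this kind, the sampled high-level plan is then tracked by a low-level controller inside a physics simulator, which is what keeps the final animation physically feasible rather than purely kinematic.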