YaFSDP is a distributed data parallelism framework designed to work well with transformer-like neural network architectures. It is up to 20% faster than FSDP when pre-training large language models (LLMs) and performs better under high memory pressure. YaFSDP achieves this by reducing the overhead of communication and memory operations.