In a recent paper, Microsoft researchers introduced a novel approach to training tiny language models: training them on children's stories. Compared with training large-scale language models, this approach is faster, and the resulting models' internal mechanisms are easier to understand. The study shows that mini language models trained on children's stories can tell coherent, grammatically correct stories and perform remarkably well for their size. The method aids in analyzing the behavior of language models and also suggests a research direction for training larger models. The researchers liken training mini language models to sequencing the genome of a fruit fly rather than that of a human: a tractable, effective way to explore how language models work.