In their latest research, the team led by Ma Yi has introduced CRATE, a white-box Transformer architecture. CRATE represents high-dimensional data effectively through compression, addressing the security concerns that arise from the opacity of large models. The study argues that the essence of deep learning may be compression, and in experiments CRATE demonstrated improved interpretability, offering a new paradigm for the field of deep learning.