Chen Danqi's team has released its latest work, LLM-Shearing, a pruning method for large language models that targets high performance at low cost. The method efficiently prunes a massive pre-trained model at only 5% of the cost while maintaining state-of-the-art performance. To counter the performance degradation that pruning can cause, the team also proposes a dynamic batch loading method. This work is set to have a wide-ranging impact on large-scale deep learning models.
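The core idea behind dynamic batch loading is to adjust, during continued training of the pruned model, how much data is sampled from each domain: domains where the model still lags a reference loss get sampled more. The sketch below illustrates this with a multiplicative weight update; the function name, parameters, and update rule are assumptions for illustration, not the authors' exact implementation.

```python
import math

def update_domain_weights(weights, current_losses, reference_losses, step_size=1.0):
    """Illustrative dynamic batch loading update: upweight domains whose
    current loss still exceeds a per-domain reference loss, so training
    of the pruned model focuses on the domains that lag behind.
    (Hypothetical sketch; not the authors' exact algorithm.)"""
    # Excess loss per domain: how far each domain is from its target.
    excess = [max(c - r, 0.0) for c, r in zip(current_losses, reference_losses)]
    # Multiplicative (exponentiated-gradient style) update of sampling weights.
    unnorm = [w * math.exp(step_size * e) for w, e in zip(weights, excess)]
    total = sum(unnorm)
    # Renormalize so the weights remain a valid sampling distribution.
    return [u / total for u in unnorm]

# Example: the second domain is 0.5 above its reference loss,
# so its sampling weight grows relative to the first domain.
w = update_domain_weights([0.5, 0.5], [2.0, 3.0], [2.0, 2.5])
```

In practice such a scheme re-estimates domain losses periodically and resamples the next batches according to the updated weights, so no extra gradient computation is needed just to rebalance the data mix.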