Researchers from Tsinghua University, TAL AI Lab, and Zhipu AI have proposed MathGLM, a 2-billion-parameter language model designed to probe how well large language models can handle mathematical reasoning. The model uses a Transformer decoder architecture and is trained on a large-scale arithmetic dataset, which substantially strengthens its arithmetic capabilities. In experiments, MathGLM achieves near-100% accuracy on a suite of arithmetic tasks, outperforming GPT-4; even a variant with only 100 million parameters surpasses both GPT-4 and ChatGPT. The study also finds that MathGLM's arithmetic ability improves as the parameter count grows, and that it outperforms GPT-4 and ChatGPT on complex mixed arithmetic operations involving intricate number formats. These results suggest that, given sufficient parameters and training data, language models can carry out complex mathematical operations accurately.
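The paper's exact data-construction pipeline is not reproduced here; as a rough illustration of what a large-scale arithmetic training corpus for a decoder-only model could look like, the following Python sketch generates random mixed-operation expressions paired with their results as plain-text training lines. The function names, value ranges, and output format are assumptions for illustration, not the authors' actual method.

```python
import random

# Illustrative only: a toy generator for arithmetic training text.
# MathGLM's real dataset construction is not described by this sketch.

OPS = ["+", "-", "*", "/"]

def random_expression(max_terms: int = 4, max_value: int = 10_000) -> str:
    """Build a random mixed arithmetic expression such as '372 + 45 * 18'."""
    n_terms = random.randint(2, max_terms)
    parts = [str(random.randint(1, max_value))]
    for _ in range(n_terms - 1):
        parts.append(random.choice(OPS))
        parts.append(str(random.randint(1, max_value)))
    return " ".join(parts)

def make_example() -> str:
    """Pair an expression with its evaluated result as one training line."""
    expr = random_expression()
    result = eval(expr)  # safe here: the expression is generated above, operands are >= 1
    if isinstance(result, float):
        result = round(result, 4)
    return f"{expr} = {result}"

if __name__ == "__main__":
    # Print a handful of sample training lines.
    for _ in range(5):
        print(make_example())
```

A corpus of such lines could then be tokenized and used for ordinary next-token training; the point of the illustration is simply that the supervision signal is raw arithmetic text rather than a calculator tool call.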