The emergence of large language models and advanced prompting strategies marks significant progress in language model research. Chain of Thought (CoT) prompting demonstrates strong performance on multi-step problem solving, particularly in cross-domain, length-generalization, and cross-lingual tasks. Researchers have explored the relationship between the length of reasoning steps and accuracy, revealing the critical role of Chain of Thought length in the performance of these models. Experimental results indicate a noticeable correlation, within a certain range, between the length of reasoning chains and the capabilities of large language models. The article discusses experiments in both zero-shot and few-shot Chain of Thought settings, showing how the complexity of the reasoning chain can be tuned.
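To make the two settings concrete, the sketch below illustrates how zero-shot and few-shot Chain of Thought prompts are typically constructed, and how the number of reasoning steps in a few-shot demonstration can be varied. This is a minimal illustration, not the authors' exact prompts; the question text and the step counts are hypothetical examples.

```python
def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: append a trigger phrase that elicits step-by-step reasoning.
    return f"Q: {question}\nA: Let's think step by step."


def few_shot_cot(question: str, demonstrations: list[tuple[str, list[str], str]]) -> str:
    # Few-shot CoT: prepend worked examples whose rationales contain an
    # explicit, controllable number of reasoning steps.
    parts = []
    for demo_q, steps, answer in demonstrations:
        rationale = " ".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
        parts.append(f"Q: {demo_q}\nA: {rationale} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)


# Hypothetical demonstration with three reasoning steps; adding or removing
# steps changes the chain length the model is encouraged to imitate.
demo = (
    "Roger has 5 balls and buys 2 cans of 3 balls each. How many balls does he have now?",
    ["Roger starts with 5 balls.",
     "2 cans of 3 balls is 6 balls.",
     "5 + 6 = 11."],
    "11",
)
print(few_shot_cot("A baker has 3 trays of 12 cookies. How many cookies in total?", [demo]))
```

Varying the number of entries in the `steps` list is one simple way to study how reasoning-chain length relates to downstream accuracy.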