Recent studies have shown that large language models are sensitive to the order in which premises are presented in logical reasoning tasks: shuffling the premises can cause a noticeable drop in performance. Researchers from Google DeepMind and Stanford found that models perform best when premises appear in their natural logical order, that is, the order in which they are actually used in the underlying derivation. When the premises are reordered, the underlying logic is unchanged, yet accuracy declines markedly. The effect was observed across leading models, including Gemini and GPT-4, and the authors note that this sensitivity to premise order remains an open challenge for LLM reasoning.
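To make the setup concrete, the sketch below is a minimal, illustrative Python example (not taken from the study; the premises, the toy deduction task, and the prompt format are all assumptions) showing how the same set of premises can be presented in proof order versus a shuffled order. In an actual evaluation, each prompt variant would be sent to the model under test and accuracy compared across orderings.

```python
import random

# Hypothetical premise set for a toy deduction task: the conclusion
# "E is true" follows by chaining these rules in this (proof) order.
premises_in_proof_order = [
    "If A, then B.",
    "If B, then C.",
    "If C, then D.",
    "If D, then E.",
]
question = "Given the facts above and that A is true, is E true?"

def build_prompt(premises, question):
    """Assemble a simple reasoning prompt from a list of premises."""
    facts = "\n".join(f"- {p}" for p in premises)
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

# Forward (proof-order) variant: premises appear in the order they are
# used in the derivation -- the setting where models reportedly do best.
forward_prompt = build_prompt(premises_in_proof_order, question)

# Shuffled variant: identical premises and logic, different surface order.
rng = random.Random(0)  # fixed seed so the permutation is reproducible
shuffled_premises = premises_in_proof_order[:]
rng.shuffle(shuffled_premises)
shuffled_prompt = build_prompt(shuffled_premises, question)

print(forward_prompt)
print("---")
print(shuffled_prompt)
```

The logical content of the two prompts is identical; only the presentation order differs, which is precisely the variable the researchers manipulated.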