DeepMind's latest research shows that language models still struggle with logical reasoning. The studies find that the order in which premises are presented in a task significantly affects a model's reasoning performance. This finding can inform how practitioners structure prompts when applying language models to basic reasoning tasks: reordering the premises may be a simple yet effective way to improve their reasoning results.
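As a rough illustration of how this effect could be probed, the sketch below builds the same reasoning prompt with the premises in different orders so the model's answers can be compared. The `build_prompt` helper, the toy premises, and the ordering options are illustrative assumptions, not part of DeepMind's experimental setup.

```python
import random

def build_prompt(premises, question, order="as_given"):
    """Assemble a reasoning prompt; the premise order is the variable under study.

    order:
      "as_given" - keep the list order (e.g. matching the deduction chain)
      "shuffled" - random order, which the finding suggests can hurt accuracy
      "reversed" - reverse order
    """
    ordered = list(premises)
    if order == "shuffled":
        random.shuffle(ordered)
    elif order == "reversed":
        ordered.reverse()
    lines = [f"Premise {i + 1}: {p}" for i, p in enumerate(ordered)]
    lines.append(f"Question: {question}")
    lines.append("Answer with a step-by-step deduction.")
    return "\n".join(lines)

# Toy example: premises listed in the order a human proof would use them.
premises = [
    "All birds have feathers.",
    "A penguin is a bird.",
    "Animals with feathers are not reptiles.",
]
question = "Is a penguin a reptile?"

print(build_prompt(premises, question, order="as_given"))
print()
print(build_prompt(premises, question, order="shuffled"))
```

Feeding each variant to the same model and scoring the answers would show whether, as the research suggests, the premise ordering alone moves accuracy.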