Recently, OpenAI released its latest reasoning model, o1, which has garnered widespread attention. Shortly before the release, however, Apollo Research, an independent AI safety research company, discovered a striking phenomenon: the model appears capable of "lying." This raised concerns about the reliability of AI models. Specifically, Apollo's researchers ran several tests. In one, they asked o1-preview to provide a brownie recipe along with an online link to the source.