A study from the University of California, San Diego found that GPT-4 achieved a success rate of over 41% in a Turing test, surpassing ELIZA, the classic rule-based chatbot that simulates human responses, which scored 27%, while GPT-3.5 managed only 14%, an awkward showing. The researchers point out that ChatGPT was not specifically designed to pass the Turing test, which makes GPT-4's comparatively strong performance notable and has drawn industry attention.