Ahead of this year's Two Sessions in China, Zhou Hongyi, a national committee member of the Chinese People's Political Consultative Conference (CPPCC) and founder of 360 Group, shared his views on the DeepSeek large language model and AI security, emphasizing the importance of a balanced understanding of AI safety – neither exaggerating nor neglecting its risks.
Zhou pointed out a tendency to overhype AI security concerns. He criticized the leading US AI companies, OpenAI among them, for invoking the supposed insecurity of AI to justify monopolistic, closed-source strategies, encouraging government regulation that hinders competitors. He suggested that discussions of AI security in this context are disingenuous, asserting that "lack of development is the biggest insecurity." In his view, seizing the opportunities of the AI industrial revolution, boosting productivity, and achieving technological inclusiveness should be the top priorities.
Image Source: AI-generated image, licensed through Midjourney
Regarding the issue of AI "hallucinations," Zhou offered a distinctive perspective. He argued that "hallucinations" are not purely security risks but rather a manifestation of a model's intelligence and creativity: a model without "hallucinations" lacks imagination, and these "hallucinations" are a sign of AI exhibiting human-like intelligence.
Using DeepSeek as an example, he noted its pronounced "hallucinations," which users perceive as a form of human-like creativity. He advocated advancing AI security and business development in parallel, suggesting that specific problems such as "hallucinations" should be broken down into solvable technical challenges rather than broadly categorized as security threats. He called for a rational understanding of AI's characteristics, the development of targeted solutions, and the continued promotion of technological progress.