OpenAI is grappling with a thorny question: what should it do about students using ChatGPT to cheat? The company has developed a reliable method for detecting essays and research reports written by ChatGPT, yet despite widespread concern about AI-assisted cheating, it has not released the technology to the public.

OpenAI has successfully built a reliable technology for detecting content generated by ChatGPT. It embeds a "watermark" in AI-generated text and achieves a claimed detection accuracy of up to 99.9%. Perplexingly, despite the urgent demand it could meet, the technology has not been released: according to insiders, the project has been debated inside OpenAI for nearly two years and has been ready to ship for about a year.
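
OpenAI has not published how its watermark actually works. For intuition, though, one well-known public approach is the "green list" scheme described by Kirchenbauer et al. (2023): each sampling step hashes the previous token to split the vocabulary into "green" and "red" halves, sampling is nudged toward green tokens, and a detector later counts how often tokens land in their predecessor's green list. The sketch below is a toy illustration of that public scheme, with an invented vocabulary and stand-in logits; it is not OpenAI's method.

```python
import hashlib
import math
import random

# Toy illustration of the public "green list" watermarking idea
# (Kirchenbauer et al., 2023) -- NOT OpenAI's unreleased method.

VOCAB = [f"tok{i}" for i in range(1000)]  # invented toy vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Deterministic 'green' subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(prompt_token: str, length: int, bias: float = 4.0) -> list[str]:
    """Generate toy text, nudging each sampling step toward its green list."""
    rng = random.Random(0)
    tokens = [prompt_token]
    for _ in range(length):
        greens = green_list(tokens[-1])
        # stand-in for model logits: random scores plus a bonus for green tokens
        candidates = rng.sample(VOCAB, 50)
        logits = {t: rng.gauss(0, 1) + (bias if t in greens else 0.0) for t in candidates}
        tokens.append(max(logits, key=logits.get))
    return tokens

def z_score(tokens: list[str], fraction: float = 0.5) -> float:
    """One-proportion z-test: unwatermarked text hovers near `fraction`
    green hits; watermarked text sits many standard deviations above it."""
    n = len(tokens) - 1
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return (hits / n - fraction) * math.sqrt(n) / math.sqrt(fraction * (1 - fraction))

print(f"watermarked z-score: {z_score(generate('tok0', 200)):.1f}")  # typically well above 10
```

Note that in a scheme like this the detector must know the secret seeding rule, which hints at why deciding who gets access to a detector is itself a sensitive question.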

The factors holding back the release are complex. First, OpenAI faces a dilemma: uphold its stated commitment to transparency, or protect user loyalty? An internal survey found that nearly one-third of loyal ChatGPT users oppose the anti-cheating technology, a finding that puts real pressure on the company's decision-making.

Second, OpenAI worries the technology could disproportionately harm certain groups, particularly non-native English speakers. That worry touches a core question in AI ethics: how do you keep AI systems fair and inclusive?

Meanwhile, demand from the education sector is growing more urgent. In a survey by the Center for Democracy & Technology, 59% of middle and high school teachers said they were sure students were already using AI to complete assignments, up 17 percentage points from the previous school year. Educators urgently need tools to meet this challenge and uphold academic integrity.

OpenAI's indecision has stirred internal disputes. Employees who favor releasing the tool argue that its potential social benefit far outweighs the company's concerns, a view that highlights the tension between technological development and social responsibility.

The technology itself also has weaknesses. Although detection accuracy is high, some employees worry the watermark could be stripped by simple means, such as running the text through translation software or rewording it by hand. That worry reflects the gap between laboratory accuracy and adversarial, real-world use.
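
A back-of-envelope model shows why paraphrasing or translation weakens a statistical watermark of the kind sketched above: each rewritten token falls back to the baseline chance of landing in a green list, so the detector's confidence shrinks toward the noise floor. The numbers below (a 75% green rate, 500 tokens) are illustrative assumptions, not measurements of OpenAI's system.

```python
import math

# Illustrative assumptions, not OpenAI data: a watermark makes ~75% of tokens
# land in the "green" half of the vocabulary (baseline 50%). Paraphrasing
# replaces a fraction r of tokens, and each replaced token reverts to the
# 50% baseline, dragging the detector's z-score back toward zero.

def expected_z(n_tokens: int, green_rate: float = 0.75,
               baseline: float = 0.5, replaced: float = 0.0) -> float:
    """Expected one-proportion z-score after a fraction `replaced` of tokens
    has been rewritten (rewritten tokens revert to the baseline green rate)."""
    p = green_rate * (1 - replaced) + baseline * replaced
    return (p - baseline) * math.sqrt(n_tokens) / math.sqrt(baseline * (1 - baseline))

for r in (0.0, 0.3, 0.6, 0.9):
    print(f"replaced {r:.0%}: z = {expected_z(500, replaced=r):.1f}")
# replaced 0%: z = 11.2 ... replaced 90%: z = 1.1 -- back near the noise floor
```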

Additionally, controlling who gets access to the technology is a tricky question. Give the detector to too few people and it is of little use; give it to too many and bad actors could probe it to figure out how to defeat the watermark. Striking that balance requires careful design and management.

It's worth noting that other tech giants are active in this space as well. Google has built SynthID, a watermarking tool for identifying text generated by its Gemini AI, though it remains in a testing phase. This reflects an industry-wide push toward verifying the authenticity of AI-generated content.

OpenAI has also prioritized developing watermarking for audio and visual content, especially given the U.S. election year. The decision highlights the broader social impacts AI companies must weigh as they develop new technology.

Reference: https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a?st=ejj4hy2haouysas&reflink=desktopwebshare_permalink