Since the public release of ChatGPT, its impact on the education sector has been profound and troubling. A growing number of students are using generative AI to complete assignments and exams, then submitting the output as their own work. This not only devalues high school and college degrees but also risks sending students who have not genuinely mastered the material into critical professions such as nursing, engineering, and firefighting, with potentially serious consequences for society.
However, most schools and educational institutions have not taken AI-enabled academic fraud seriously. More troubling still, some schools have actively loosened restrictions: they permit students to use AI tools while prohibiting technologies that detect AI-generated assignments. This misguided policy severely undermines teachers' ability to supervise students' work.
Recent research from the University of Reading in the UK found that teachers are largely unable to identify AI-generated academic work. The research team submitted AI-generated assignments under fictitious student identities and found that 94% of the submissions went undetected. Even under a stricter criterion for what counts as detection, 97% of the AI submissions were never flagged as possibly AI-generated. In other words, even under near-ideal conditions, teachers detect AI-produced work at a very low rate.
This is not the first warning of its kind. Earlier research from the University of South Florida found that linguistics experts could not reliably distinguish AI-generated text from human writing. Another study from Vietnam showed that AI detection systems identified AI text effectively, while human teachers performed far worse.
In addition, recent studies have found that AI-generated assignments often score higher than those of real students. In one study, AI-generated submissions outscored randomly selected human submissions in 83.4% of cases. In other words, a student using even a basic AI tool is likely to earn a higher grade than a peer who completes the assignment honestly.
In real classrooms, even when detection systems do flag AI-generated work, professors are often reluctant to report academic integrity violations, and many schools impose weak penalties on offenders. In short, where schools do not use AI detection technology, students who cheat with AI can earn higher grades with almost no effort and little fear of being caught.
The online course environment complicates matters further: teachers cannot verify who is actually doing the work, which makes cheating even easier. Schools could respond with proctored exams or controlled writing environments, but many are unwilling to invest the time and resources required. As a result, academic fraud continues to grow, and effective responses remain scarce.
Key Points:
📚 94% of AI-generated college papers went undetected by teachers, threatening academic integrity.
🚫 Most schools have not prioritized combating AI academic fraud and have even relaxed restrictions on AI use.
📊 AI-generated assignments generally score higher than real students' work, while efforts to detect and penalize them remain limited.