In the rapidly evolving world of artificial intelligence, generative AI has brought numerous conveniences, but the proliferation of misinformation it generates is an issue that cannot be ignored. In response to this challenge, tech giant Microsoft recently introduced a new tool called "Correction," designed to rectify false information in AI-generated content.

As part of Microsoft's Azure AI Content Safety API, "Correction" is currently in preview. The tool can automatically flag text that may contain errors, such as incorrect summaries of company quarterly reports, and compare it with credible sources to correct these inaccuracies. Notably, this technology is applicable to all AI models that generate text, including Meta's Llama and OpenAI's GPT-4.
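For developers, the capability is exposed through the Content Safety REST interface. The sketch below shows roughly how a call might look in Python; the endpoint path, the `api-version` string, and the request fields (including the `correction` flag) are assumptions based on the preview announcement and may differ from the final API.

```python
import os
import requests

# Endpoint and key for an Azure AI Content Safety resource (placeholders).
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
KEY = os.environ["CONTENT_SAFETY_KEY"]

def detect_and_correct(text: str, sources: list[str]) -> dict:
    """Ask the preview groundedness-detection endpoint to flag claims in
    `text` that are unsupported by `sources` and, where the preview allows,
    return a corrected rewrite. Field names here are assumptions."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",
        params={"api-version": "2024-09-15-preview"},  # preview version; may change
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "domain": "Generic",          # a medical domain is also documented
            "task": "Summarization",
            "text": text,                 # the AI-generated output to verify
            "groundingSources": sources,  # documents the output must agree with
            "correction": True,           # assumed preview flag requesting a rewrite
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

result = detect_and_correct(
    text="Q2 revenue grew 40% year over year.",
    sources=["The quarterly report states that Q2 revenue grew 14% year over year."],
)
print(result)
```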

A Microsoft spokesperson stated that "Correction" keeps generated content consistent with supplied source documents by combining small and large language models. Microsoft hopes the new feature will help developers in fields like healthcare, where accuracy is critical, improve the reliability of their AI responses.


However, experts remain cautious. Os Keyes, a Ph.D. candidate at the University of Washington, argues that trying to eliminate hallucinations from generative AI is like trying to remove hydrogen from water: it is a fundamental part of how the technology works. Text-generating models produce false information because they don't "know" anything; they are statistical systems that predict plausible text based on their training data. One study found that OpenAI's ChatGPT had an error rate as high as 50% when answering medical questions.

Microsoft's solution uses a pair of cross-referencing "editor" meta-models to identify and correct this false information. A classification model looks for potentially erroneous, fabricated, or irrelevant text fragments; when such issues are detected, a second language model is brought in to attempt a correction based on specified "grounding documents."
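Microsoft has not published the implementation details of this pipeline, but the idea can be illustrated with a minimal sketch: a cheap classifier stage flags sentences that overlap little with the grounding documents, and a corrector stage rewrites the flagged sentences. Both stages below are naive lexical stand-ins (using `difflib`) for the small classification model and the larger rewriting model the article describes, not Microsoft's actual method.

```python
import difflib

def flag_ungrounded(sentences: list[str], grounding: list[str]) -> list[str]:
    """Stage 1 (classifier stand-in): flag sentences with little lexical
    overlap with any grounding document. A real system would use a trained
    classification model instead of a similarity ratio."""
    flagged = []
    for s in sentences:
        best = max(difflib.SequenceMatcher(None, s, doc).ratio() for doc in grounding)
        if best < 0.6:  # arbitrary threshold for this sketch
            flagged.append(s)
    return flagged

def rewrite(sentence: str, grounding: list[str]) -> str:
    """Stage 2 (corrector stand-in): the real service prompts a language
    model to rewrite the claim against the grounding documents; here we
    simply substitute the closest grounded sentence."""
    candidates = [c for doc in grounding for c in doc.split(". ") if c]
    match = difflib.get_close_matches(sentence, candidates, n=1, cutoff=0.0)
    return match[0] if match else sentence

def correct(text: str, grounding: list[str]) -> str:
    """Run both stages: flag ungrounded sentences, then rewrite each one."""
    sentences = [s for s in text.split(". ") if s]
    for s in flag_ungrounded(sentences, grounding):
        text = text.replace(s, rewrite(s, grounding))
    return text

report = ["The quarterly report says Q2 revenue grew 14% year over year."]
print(correct("The company reported record losses in Q2.", report))
```

The two-stage split mirrors the division of labor described above: a small, cheap model screens everything, and the expensive rewriting model is invoked only for the fragments that fail the screen.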

Although Microsoft claims that "Correction" can significantly enhance the reliability and credibility of AI-generated content, experts still have reservations. Mike Cook, a researcher at Queen Mary University, points out that even if "Correction" works as advertised, it could exacerbate the issues of trust and interpretability in AI. The service might give users a false sense of security, leading them to believe the model's accuracy is higher than it actually is.

It's worth noting that there is also a commercial calculation behind the launch of "Correction." While the correction feature itself is free, the groundedness detection required to find false information in the first place is only free up to a monthly usage limit, with additional usage incurring charges.

Microsoft is clearly under pressure to prove the value of its AI investments. In the second quarter of this year, the company spent nearly $19 billion on capital expenditures and equipment related to AI, but so far, it has not generated much revenue from AI. Recently, some Wall Street analysts downgraded Microsoft's stock rating, questioning the feasibility of its long-term AI strategy.

Accuracy and the risk of misinformation have become among the biggest concerns for businesses piloting AI tools. Cook argues that in a normal product lifecycle, generative AI would still be in academic research and development, being refined as we come to understand its strengths and weaknesses; instead, it has already been deployed across multiple industries.

Microsoft's "Correction" tool is undoubtedly an attempt to address the issue of AI-generated misinformation, but whether it can truly resolve the crisis of trust in generative AI remains to be seen. As AI technology advances rapidly, balancing innovation with risk will be an important challenge for the entire industry.