Microsoft recently filed a patent application describing a technical approach to reducing or eliminating false information generated by artificial intelligence. The patent, titled "Interacting with a Language Model Using External Knowledge and Feedback," was submitted to the United States Patent and Trademark Office (USPTO) last year and was made public on October 31.
The crux of the proposal is to pair AI models with a "response augmentation system" (RAS) that automatically retrieves additional information based on the user's query and checks the "usefulness" of the model's response against it.
Specifically, the response augmentation system identifies whether information from "external sources" could better answer the user's question. If the AI's response does not incorporate that information, the system deems the response less useful. The system can also alert users to shortcomings in a response, and users can provide feedback in turn. A key advantage of this scheme is that it does not require developers or companies to fine-tune or retrain their existing models.
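The patent text itself is not quoted in detail, but the behavior described above can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the word-overlap heuristic, and the coverage threshold are illustrative assumptions, not Microsoft's actual method.

```python
# Hypothetical sketch of a RAS-style post-hoc check: compare a model's
# draft answer against externally retrieved snippets and flag the answer
# as less useful when it ignores the available evidence.

def _keywords(text: str) -> set:
    """Crude keyword set: lowercased words longer than 3 characters."""
    return {w for w in text.lower().split() if len(w) > 3}

def assess_response(answer: str, external_snippets: list, min_coverage: float = 0.3) -> dict:
    """Score how much of the external evidence the answer reflects.

    Returns a coverage ratio, a usefulness flag, and the snippets the
    answer failed to use -- which could be surfaced to the user as
    "shortcomings" inviting feedback, as the patent describes.
    """
    answer_words = _keywords(answer)
    missed = []
    covered = 0
    for snippet in external_snippets:
        if _keywords(snippet) & answer_words:  # any keyword overlap at all
            covered += 1
        else:
            missed.append(snippet)
    coverage = covered / len(external_snippets) if external_snippets else 1.0
    return {
        "coverage": coverage,
        "useful": coverage >= min_coverage,
        "missed_evidence": missed,
    }

# Example: the answer reflects the first snippet but ignores the second.
report = assess_response(
    "The Eiffel Tower is in Paris.",
    ["The Eiffel Tower stands in Paris, France.",
     "It was completed in 1889 for the World's Fair."],
)
```

A production system would of course use semantic retrieval and entailment models rather than word overlap, but the control flow, generating, checking against external knowledge, and reporting gaps back to the user, is the part the patent emphasizes.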
The USPTO website does not yet display a patent number for this application, indicating that it is still under review. We have contacted Microsoft for more information, including whether the patent is related to the previously announced Azure AI Content Safety tool. That tool provides AI-driven verification for enterprise AI chatbots, fact-checking responses in the background and determining whether they are "ungrounded" or "grounded," so that only answers supported by actual data are presented to users.
The AI hallucination problem is one of the biggest challenges facing generative AI and seriously undermines the credibility of AI chatbots. Both Google's and X's AI systems have produced notable errors, such as suggesting that users put glue on pizza or eat rocks, and even spreading false election information. Apple CEO Tim Cook has acknowledged that Apple Intelligence is not immune to hallucination issues either. More recently, OpenAI's Whisper audio transcription tool was found to hallucinate frequently, raising concerns about its use in American hospitals.
Despite the prominence of the AI hallucination problem, the demand for AI data centers among tech giants remains strong. Companies including Google, Microsoft, and Meta are considering nuclear energy as a potential solution to meet the high energy demands of AI.
Key Points:
🔍 Microsoft has filed a new patent application aimed at reducing AI-generated false information.
🤖 The core of the patent is a response augmentation system that automatically retrieves additional information and checks AI responses against it.
⚡ Despite the serious AI hallucination problem, tech companies' demand for AI data centers remains robust.