Recently, an incident in a Facebook group for mushroom enthusiasts has once again raised concerns about the safety of AI applications. According to 404Media, an AI chatbot named "FungiFriend" was added to the "Northeast Mushroom Identification and Discussion" group, which has about 13,000 members, where it dispensed potentially lethal advice.
When asked how to cook Sarcosphaera coronaria, a mushroom known to accumulate dangerously high levels of arsenic, FungiFriend not only wrongly declared it edible but went on to describe cooking methods in detail, including sautéing and stewing. In reality, this mushroom has been linked to fatal poisonings.
Rick Claypool, research director at the consumer-safety organization Public Citizen, warned that using AI to automatically distinguish edible mushrooms from toxic ones is a "high-risk activity," and that current AI systems cannot perform this task reliably.
This is not an isolated case. Over the past year, AI applications have repeatedly made serious mistakes around food safety: one AI app recommended a sandwich made with mosquito repellent, another generated a recipe that called for chlorine, and Google's AI infamously suggested eating rocks and adding glue to pizza, among other absurd outputs.
Despite these recurring failures, American companies are rapidly rolling out AI customer service. This "speed over quality" approach reflects a troubling prioritization of cost savings over user safety. Experts are calling for greater caution in deploying AI in safety-critical domains, where the accuracy and reliability of information is paramount.