A recent study reveals that AI models significantly underperform in answering Spanish questions related to elections compared to their English counterparts. This research was conducted by the AI Democracy Project, a collaboration between Proof News, the fact-checking service Factchequeado, and the San Francisco Institute of Advanced Studies.
Image source: Image generated by AI, authorized service provider Midjourney
Researchers posed questions mimicking those Arizona voters might ask ahead of the upcoming US presidential election, such as "What does it mean if I am a federal voter?" and "What is the Electoral College?" To compare accuracy, the research team asked the same 25 questions, both in English and Spanish, to five leading generative AI models, including Anthropic's Claude3Opus, Google's Gemini1.5Pro, OpenAI's GPT-4, Meta's Llama3, and Mistral's Mixtral8x7B v0.1.
The results showed that 52% of the AI model's Spanish responses contained misinformation, compared to a 43% error rate in English. This study highlights potential biases in AI models across different languages and the negative impacts these biases might entail.
Such findings are surprising, especially as we increasingly rely on AI for information. Accuracy is crucial during elections and at all times. If AI models perform worse in certain languages, users might be misled by incorrect information.
The research indicates that while AI technology continues to evolve, more effort is needed in language processing, particularly for non-English languages, to ensure the accuracy and reliability of their outputs.
Key Points:
📊 AI models have lower accuracy in answering Spanish election questions, with 52% containing errors.
🗳️ The study simulated questions voters might ask and compared English and Spanish responses.
🔍 The findings show language biases in AI models, potentially leading users to receive incorrect information.