According to internal communications obtained by TechCrunch, Google is having contractors compare its Gemini AI with Anthropic's Claude, raising compliance concerns.

Documents show that contractors responsible for improving Gemini are required to evaluate responses from both Gemini and Claude within 30 minutes each, scoring them against multiple criteria such as accuracy and detail. Recently, contractors found content explicitly referencing Claude on Google's internal evaluation platform, including output containing the phrase "I am Claude, created by Anthropic."

Internal discussions show that contractors found Claude's safety settings to be more stringent than Gemini's. One contractor stated, "Claude's safety settings are the strictest among all AI models." In some cases, prompts that led Gemini to produce responses flagged as "serious safety violations" for involving "nudity and bondage" were ones Claude simply refused to answer.


Notably, Google is a major investor in Anthropic, and its actions may violate Anthropic's terms of service, which explicitly prohibit unauthorized use of Claude to "build a competing product" or "train competing AI models." When asked whether it had obtained authorization from Anthropic, Google DeepMind spokesperson Shira McNamara declined to answer directly.

McNamara said that while DeepMind does "compare model outputs" for evaluation purposes, it did not use Anthropic's models to train Gemini. "This is in line with standard industry practice," she said, "but any claims that we trained Gemini using Anthropic models are inaccurate."

Previously, Google had asked contractors working on its AI products to rate Gemini's responses on topics outside their own areas of expertise, raising concerns among contractors that the AI could provide inaccurate information in sensitive fields such as healthcare.

As of publication, an Anthropic spokesperson had not commented on the matter.