Concerns arise over Google’s Gemini AI system possibly using competitor Claude’s outputs in evaluations without permission, amid intensifying competition in the AI sector.
Contractors working on Google’s artificial intelligence system, Gemini, are reportedly comparing its outputs against those generated by Anthropic’s AI model, Claude. The information emerged from internal communications, prompting questions over whether Google secured permission to use Claude in testing Gemini. Google has not responded to inquiries on this point.
As competition intensifies within the AI sector, companies are increasingly focused on refining their models for greater accuracy and efficiency. A key aspect of this competitive landscape involves benchmarking AI outputs against those of rival systems to assess performance. However, using responses from competitors’ models for training or calibration raises ethical and contractual questions, particularly given that many organisations treat their proprietary technologies as confidential.
According to reports from media outlets, contractors tasked with evaluating Gemini’s responses are analysing various criteria, including accuracy and detail. They are allocated up to 30 minutes per prompt to establish whether Gemini or Claude provides a more satisfying answer. Recently, these contractors have noted that references to Claude have begun appearing within the internal Google platform used for such comparisons, suggesting a direct interaction between the two models during evaluations.
In an internal discussion, these contractors highlighted a disparity in safety priorities between the two systems, observing that Claude often refused to answer prompts it deemed unsafe, whereas Gemini’s approach appeared more permissive. The finding underscores the differing methodologies competing AI models adopt when handling potentially hazardous queries.
The commercial terms governing Anthropic’s AI models explicitly prohibit using its technologies to develop competing products or services, and training rival AI systems is likewise banned without explicit authorisation from Anthropic. This contractual backdrop is significant, especially given that Google is a prominent investor in Anthropic, further complicating the relationship between the two companies.
Shira McNamara, a spokesperson for Google DeepMind, which oversees the Gemini project, refrained from commenting on whether Google has the necessary permissions to leverage Claude’s outputs. However, she clarified that while DeepMind conducts model comparisons, it does not incorporate Anthropic’s models into the training of Gemini. McNamara emphasised that any assertions suggesting a misuse of Claude’s systems in connection with Gemini are inaccurate.
The situation comes in the wake of the launch of Google’s Gemini 2.0 AI models, as the tech giant continues to navigate a rapidly evolving artificial intelligence landscape. The implications for competitive practices and inter-company dynamics have yet to fully play out, as industry stakeholders closely monitor the outcomes of such comparative analyses.
Source: Noah Wire Services


