Recently, Anthropic's AI chatbot, Claude, has once again found itself at the center of performance controversies. A post on Reddit claiming that "Claude has become much stupider lately" has garnered significant attention, with many users reporting a decline in Claude's capabilities, including reduced memory and diminished coding abilities.

In response, Anthropic executive Alex Albert stated that the company's investigation "has not found any widespread issues" and confirmed that there have been no changes to the Claude 3.5 Sonnet model or the inference pipeline. To enhance transparency, Anthropic has published the system prompts for the Claude model on their official website.


This pattern, in which users report AI degradation while the company denies it, is not new. At the end of 2023, OpenAI's ChatGPT faced similar skepticism. Industry insiders suggest several factors could contribute to the phenomenon, including rising user expectations over time, natural variation in AI outputs, and temporary computational resource constraints.

Even without significant changes to the underlying model, these factors can produce a perceived decline in performance. OpenAI has previously noted that the unpredictable nature of AI behavior makes maintaining and evaluating large-scale generative AI systems a significant challenge.


Anthropic has pledged to continue monitoring user feedback and to work on improving Claude's performance stability. The incident underscores the challenge AI companies face in maintaining model consistency, and the importance of transparent performance evaluation and communication.