After AI startup Anthropic launched version 2.1 of Claude, many users found the model had become difficult to use, frequently refusing to execute commands. The cause is that Claude 2.1 adheres more strictly to Anthropic's AI constitution, which makes it more cautious on matters of safety and ethics. This has provoked strong dissatisfaction among many paying users, some of whom are preparing to cancel their subscriptions. Industry insiders worry that Anthropic's trade-off, ensuring AI safety at the cost of some model performance, may put it at a disadvantage in the increasingly fierce AI competition.
A Safer AI Assistant, More Likely to Be Abandoned by Users?

36氪
This article is from AIbase Daily
Welcome to the [AI Daily] column! This is your daily guide to the world of artificial intelligence. Every day, we bring you the hot topics in AI, with a focus on developers, helping you keep up with technical trends and discover innovative AI products and applications.