After AI startup Anthropic released Claude 2.1, many users found the model harder to use, as it frequently refused to carry out requests. The behavior stems from Claude 2.1's stricter adherence to Anthropic's "Constitutional AI" framework, which makes the model more cautious on matters of safety and ethics. The change has drawn strong complaints from paying subscribers, some of whom say they plan to cancel their subscriptions. Industry observers worry that Anthropic's trade-off, prioritizing AI safety at the cost of some model usefulness, could put the company at a disadvantage in an increasingly fierce AI race.