Chinese scholars report progress toward opening the black box of large language models, identifying interpretable representations inside Llama 2. The research demonstrates finer control over text generation, including steering the model's outputs and reducing the reproduction of memorized training data. These advances reportedly bring Llama 2's performance closer to GPT-4's, and the researchers also show they can monitor and control the model's honesty. The article goes on to survey performance gains in comparable models and their deeper integration with business scenarios, which together are driving the development of large models. It also introduces a Japanese LLM built on Llama 2 and domestic open-source large models, highlighting the latest developments in the field of artificial intelligence.