Recently, Charlie George, a machine learning engineer at Elicit, published an article on the LangChain blog reporting that Elicit fine-tuned ChatGPT on synthetic data to surpass GPT-4 at news summarization. The study used summaries generated by GPT-4 with chain-of-density prompting as the synthetic training data: the fine-tuned ChatGPT beat zero-shot GPT-4 on automatic evaluation metrics and approached the quality of GPT-4 with chain-of-density prompting. The fine-tuned model is also 11x faster than zero-shot GPT-4 and 33x faster than GPT-4 with chain-of-density prompting, at 63% and 84% lower cost, respectively. The results suggest that a fine-tuned ChatGPT can match the news summarization quality of GPT-4 chain-of-density prompting at a fraction of the latency and cost, offering a viable path to deploying next-generation AI applications at scale.
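The approach described is essentially distillation: collect high-quality summaries from the stronger model, then fine-tune the cheaper model on them. A minimal sketch of the data-preparation step, assuming hypothetical article/summary pairs (the GPT-4 chain-of-density outputs) and the chat-format JSONL that OpenAI's fine-tuning endpoint expects; the system prompt and filename here are illustrative, not from the article:

```python
import json

# Hypothetical training pairs: (article text, GPT-4 chain-of-density summary).
# In the real pipeline these would come from calling GPT-4 with a
# chain-of-density prompt over a corpus of news articles.
pairs = [
    ("Full text of a news article ...", "A dense, entity-rich summary ..."),
]

def to_finetune_record(article: str, summary: str) -> dict:
    # One training example in OpenAI's chat fine-tuning format:
    # a "messages" list ending with the assistant turn to imitate.
    return {
        "messages": [
            {"role": "system", "content": "Summarize the news article."},
            {"role": "user", "content": article},
            {"role": "assistant", "content": summary},
        ]
    }

# Write one JSON object per line; this file would then be uploaded
# and referenced when creating the fine-tuning job for gpt-3.5-turbo.
with open("train.jsonl", "w") as f:
    for article, summary in pairs:
        f.write(json.dumps(to_finetune_record(article, summary)) + "\n")
```

The fine-tuned model is then called like any other chat model, which is where the reported speed and cost gains come from: the distilled model produces dense summaries directly, without the long multi-step chain-of-density prompt at inference time.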