Large language models such as ChatGPT memorize portions of their original training data during training. Using targeted extraction attacks, attackers can recover a significant amount of this training data from the model, threatening the privacy of the data owners. Researchers therefore recommend that measures to protect data security and prevent leakage be taken when developing and deploying large language models.
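To make the threat concrete, the sketch below illustrates one widely studied style of extraction attack: sample many continuations of a short prefix from a target model, then rank the candidates by how "memorized" they look (low perplexity relative to their compressibility). The model name, prefix, sample counts, and scoring heuristic are illustrative assumptions for this sketch, not details taken from the text above.

```python
# Minimal sketch of a generation-and-ranking extraction attack on a causal
# language model. GPT-2 is a stand-in target; the prefix, sample count, and
# scoring heuristic are illustrative assumptions.
import zlib

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"              # hypothetical target model
NUM_CANDIDATES = 20              # continuations to sample per prefix
PREFIX = "My email address is"   # hypothetical prompt prefix

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def perplexity(text: str) -> float:
    """Perplexity of `text` under the target model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()


def extraction_score(text: str) -> float:
    """Rough heuristic: low perplexity relative to how incompressible the text
    is suggests verbatim memorization rather than generic fluent output."""
    compressed_len = len(zlib.compress(text.encode("utf-8")))
    return perplexity(text) / max(compressed_len, 1)


# Step 1: sample many continuations of a short prefix from the model.
inputs = tokenizer(PREFIX, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_k=40,
        max_new_tokens=64,
        num_return_sequences=NUM_CANDIDATES,
        pad_token_id=tokenizer.eos_token_id,
    )
candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Step 2: rank candidates; the lowest-scoring outputs are the most likely to
# contain verbatim training data and would be inspected or verified further.
for text in sorted(candidates, key=extraction_score)[:5]:
    print(repr(text))
```

The ranking step matters because most sampled continuations are ordinary fluent text; only a small fraction are verbatim copies of training data, and a memorization-oriented score is what surfaces them for manual or automated verification.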