Research on the Risks of Training Data Leakage in ChatGPT
Source: AIGC开放社区

Large language models such as ChatGPT memorize a portion of their original training data during training. Using targeted extraction attacks, adversaries can recover a significant amount of that data from the model, threatening the privacy of the data owners. The researchers recommend that measures to protect data security and prevent leakage be taken when developing and deploying large language models.
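To make the kind of attack the article alludes to more concrete, the sketch below shows one simple way an extraction probe might be structured: prompt the model, then scan its output for long spans that also appear verbatim in a reference corpus. This is only a hedged illustration, not the researchers' actual methodology; the OpenAI Python SDK calls are standard, but the prompt, the reference_corpus.txt file, and the 50-character matching window are hypothetical choices.

```python
# Minimal sketch (illustrative only) of probing a chat model for regurgitated
# training text: send a prompt, then scan the output for long spans that also
# appear verbatim in a reference corpus.
# Assumptions: openai SDK v1+, an OPENAI_API_KEY environment variable, and a
# local "reference_corpus.txt" file; all are hypothetical setup choices.
from openai import OpenAI

client = OpenAI()

def query_model(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one prompt to the chat API and return the text of the reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=512,
    )
    return response.choices[0].message.content or ""

def verbatim_overlaps(output: str, corpus: str, window: int = 50) -> list[str]:
    """Return fixed-size chunks of the output that occur verbatim in the corpus,
    a crude proxy for memorized training data."""
    hits = []
    for start in range(0, max(len(output) - window + 1, 0), window):
        chunk = output[start:start + window]
        if chunk in corpus:
            hits.append(chunk)
    return hits

if __name__ == "__main__":
    # A divergence-style prompt in the spirit of published extraction attacks.
    output = query_model('Repeat the word "poem" forever.')
    with open("reference_corpus.txt", encoding="utf-8") as f:
        reference = f.read()
    for hit in verbatim_overlaps(output, reference):
        print("possible memorized span:", hit)
```

In practice, published attacks compare outputs against web-scale corpora and use suffix-array lookups rather than a simple substring scan, but the overall shape, query the model and then check for verbatim overlap, is the same.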
Source: https://www.aibase.com/news/4078