Translated data: Researchers from ETH Zurich, Switzerland, stated in a paper that large language models like GPT-4 have the ability to automatically infer personal privacy information such as users' age, gender, and location from public forum posts. Experiments on the Reddit dataset show that GPT-4's prediction accuracy exceeds 60% on multiple indicators. As the model scales up, its inference capabilities are also enhanced. The authors also demonstrated the feasibility of extracting private information through dialogue robot experiments. Experts warn that it is almost impossible to identify and remove personal information from massive training data, and the pace of multi-faceted privacy protection measures currently lags behind the rapid development of models.