Social networking platform X (formerly Twitter) updated its privacy policy on Wednesday, drawing widespread attention and discussion. The new policy indicates that X will, by default, allow third-party "partners" to use platform data for AI model training unless users actively opt out. The move not only highlights X's search for new revenue sources but also raises concerns about the protection of user data.

The core changes in the new policy include:

Third-party data usage: The new terms allow third-party partners to use X user data, including for training AI models. Users can opt out, but they are included by default.

Data retention period: The previous statement that personally identifiable information would be retained for up to 18 months has been removed; retention is now determined case by case. The company says it will apply different retention periods to different types of information, based on service provision, legal compliance, and security considerations.

Content persistence reminder: A new reminder notes that even after content is deleted from X, public content may persist elsewhere, including in data used by AI providers. X specifically notes that "search engines and other third parties may retain copies of your posts after they are deleted or expired from X, according to their own privacy policies."

Data scraping penalties: New terms include a "liquidated damages" clause, imposing hefty fines on organizations that scrape large amounts of content. Specifically, organizations "requesting, viewing, or accessing more than 1 million posts (including replies, videos, images, and other types of posts) in any 24-hour period" will be charged $15,000 per million posts.
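As a rough illustration of how such a fee could add up, here is a minimal sketch of the calculation. It assumes the charge is billed per started million posts once the 24-hour threshold of 1 million is crossed; the published terms do not spell out rounding or whether the first million is exempt, so this is an interpretation, not the contractual formula.

```python
import math

def scraping_fee(posts_accessed: int,
                 threshold: int = 1_000_000,
                 fee_per_million: int = 15_000) -> int:
    """Illustrative estimate of the liquidated-damages fee.

    Assumption: the fee is billed for every started million posts
    requested, viewed, or accessed within a 24-hour window once the
    1 million threshold is exceeded. The actual contractual formula
    may differ.
    """
    if posts_accessed <= threshold:
        return 0
    millions_billed = math.ceil(posts_accessed / 1_000_000)
    return millions_billed * fee_per_million

# Example: an organization that accesses 3.2 million posts in 24 hours
# would owe 4 * $15,000 = $60,000 under this reading of the clause.
print(scraping_fee(3_200_000))  # 60000
```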

This policy update has raised various issues and concerns:

Privacy protection: Allowing third parties to use user data by default raises privacy concerns. Although users can opt out, many may not notice the change or know how to do so.

Data control: Users' control over their data seems further weakened, which may contravene data protection regulations in some regions.

AI ethics: Allowing third parties to train AI models may lead to user data being used for unknown or improper purposes, raising AI ethics issues.

Lack of transparency: The new policy does not clearly explain how to opt out of the data sharing plan or list potential third-party partners.

Regulatory challenges: The policy may face scrutiny from data protection agencies worldwide, especially since X owner Elon Musk's earlier use of X data to train xAI's Grok chatbot has already drawn an investigation from EU privacy regulators.

It is noteworthy that this new policy will take effect on November 15, and an opt-out option may be added at that time. Currently, X's "Privacy and Security" settings allow users to turn on or off data sharing with xAI's Grok and other "business partners," but the latter are described as companies that collaborate with X to "operate and improve their products," not other AI providers.

With this move, X appears to be seeking new revenue sources to cope with financial pressure from an advertiser retreat and boycott and from underperforming subscription features. At the same time, it heightens concerns about user privacy and data security.

For users, this policy change means they need to manage their privacy settings more carefully. Although the opt-out mechanism is currently unclear, users should closely monitor updates to the platform's privacy settings and adjust relevant options after the new policy takes effect.

For the tech industry as a whole, X's move may spark broader discussion about how social media platforms balance commercial interests with user privacy, and about the new challenges AI development poses for data usage and privacy protection. Given the rapid advancement of AI technology and the critical role of data in AI training, more platforms can be expected to follow X's approach, further driving public dialogue and policy debates on data ownership, user privacy rights, and AI ethics.