Apple is set to launch Apple Intelligence, its first suite of AI features for iPhone, iPad, and Mac. As AI capabilities expand, however, privacy becomes an increasingly pressing concern. So how does Apple Intelligence safeguard users' privacy?
Apple's privacy strategy puts on-device processing first, with Private Cloud Compute as the fallback. This approach offers two main advantages: processing on the device is faster, and user data stays local, which maximizes privacy protection.
In most cases, Apple Intelligence runs entirely on the device, without sending any data to the cloud. For requests that need more computing power than the device can provide, Apple Intelligence turns to external servers. In those cases, Apple uses Private Cloud Compute, which is designed to deliver the same level of security as on-device processing.
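The on-device-first routing described above can be sketched as a simple decision. Note that this is an illustrative sketch only: the `route` function and its constants are hypothetical names, since Apple has not published this logic.

```python
# Hypothetical sketch of Apple Intelligence's on-device-first routing.
# All names here are illustrative, not Apple's actual API.

ON_DEVICE = "on-device"
PRIVATE_CLOUD_COMPUTE = "private-cloud-compute"

def route(needs_larger_model: bool) -> str:
    """Prefer on-device processing; fall back to Private Cloud
    Compute only when a request exceeds on-device capacity."""
    if needs_larger_model:
        return PRIVATE_CLOUD_COMPUTE
    return ON_DEVICE
```

The key design point is the default: everything stays on the device unless the request genuinely requires a larger server-side model.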
Private Cloud Compute adheres to five core requirements: stateless computation on personal user data, enforceable guarantees, no privileged runtime access, non-targetability, and verifiable transparency. Together, these requirements are meant to ensure the security and privacy of user data even when it leaves the device.
Apple Intelligence will also integrate third-party services such as ChatGPT, but only with the user's consent for each request. Data sent to those third-party servers is governed by the third party's own privacy policy.
Apple Intelligence's privacy strategy continues Apple's longstanding commitment to user privacy. By processing on the device whenever possible and relying on Private Cloud Compute when it is not, Apple Intelligence aims to deliver a secure and private AI experience.