ChinaZ.com, June 12 news: Apple Inc. has unveiled the core technology behind its artificial intelligence platform, Apple Intelligence: an on-device model with approximately 3 billion parameters and a server-based language model. In a series of evaluations, Apple's models outperformed open-source models including Phi-3, Gemma, Mistral, and DBRX, as well as commercial models such as GPT-3.5-Turbo and GPT-4-Turbo, with human evaluators often preferring their output.
Optimization and Application of Apple Intelligence
The foundational model of Apple Intelligence is optimized for everyday tasks such as writing and polishing text, prioritizing and summarizing notifications, creating playful images for conversations with family and friends, and simplifying interactions across apps. The models are trained with the open-source AXLearn framework using data parallelism, tensor parallelism, sequence parallelism, and fully sharded data parallelism (FSDP) for efficient, scalable training. Training data comes from licensed sources and from the public web crawled by Applebot, all of it rigorously filtered to protect user privacy.
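For readers unfamiliar with FSDP, here is a minimal sketch, not Apple's AXLearn training code, of how FSDP-style parameter sharding looks in JAX, the framework AXLearn builds on; the matrix sizes, the "fsdp" mesh axis name, and the forward function are illustrative assumptions.

```python
# Minimal FSDP-style sketch in JAX: each device stores only a shard of the
# weights along an "fsdp" mesh axis and gathers them when the computation
# needs them. This is an illustration, not Apple's actual training setup.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1-D device mesh; with N accelerators, each holds 1/N of the weights.
mesh = Mesh(mesh_utils.create_device_mesh((jax.device_count(),)),
            axis_names=("fsdp",))

# Hypothetical weight matrix, used purely for illustration.
weights = jnp.zeros((8192, 4096), dtype=jnp.bfloat16)
sharded_weights = jax.device_put(weights, NamedSharding(mesh, P("fsdp", None)))

# A batch of activations, replicated in this toy example; a real trainer
# would also shard the batch (data parallelism) and the hidden/sequence
# dimensions (tensor and sequence parallelism).
x = jnp.ones((16, 8192), dtype=jnp.bfloat16)

@jax.jit
def forward(w, x):
    # Under jit, the weight shards are gathered just in time for the matmul,
    # mirroring FSDP's gather-compute-discard pattern.
    return x @ w

print(forward(sharded_weights, x).shape)  # (16, 4096)
```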
Apple Intelligence comprises multiple generative models built specifically for users' everyday tasks and able to adapt on the fly to the user's current activity. Each foundational model has been fine-tuned for these user-facing features, from writing and refining text and triaging notifications to creating images for conversations and streamlining interactions across apps.
Apple has also announced a set of responsible AI principles to guide the development of its AI tools and the foundational models that underpin them. These principles include:
Providing intelligent tools for users: Identifying areas where AI can be used responsibly, creating tools that meet specific user needs, and respecting how users utilize these tools.
Representing our users: Building highly personalized products that authentically represent a global user base, while avoiding the perpetuation of stereotypes and systemic biases in AI tools and models.
Careful design: Taking precautions at every stage of design, model training, feature development, and quality assessment to prevent AI tools from being misused or causing harm, and continuously improving them based on user feedback.
Protecting privacy: Safeguarding user privacy through powerful on-device processing and Private Cloud Compute infrastructure, and never using users' private personal data or interactions to train the foundational models.
These principles are embedded throughout the architecture of Apple Intelligence, which connects features and tools to specialized models and gives each feature the information it needs to operate responsibly.