Apple's latest machine learning research shows that, through a collaboration with NVIDIA, it has accelerated text generation in large language models (LLMs) by nearly three times. The key to this advance is Apple's open-source "Recurrent Drafter" (ReDrafter) technique, a speculative decoding method that significantly improves inference efficiency.
Generating text with large language models has traditionally been slow and resource-intensive, forcing companies to buy large amounts of hardware and driving up operating costs. Earlier in 2024, Apple released ReDrafter, which combines a recurrent neural network draft model with a dynamic tree attention method to rapidly generate and verify candidate tokens, improving token generation speed by 3.5 times over traditional autoregressive decoding.
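ReDrafter's actual recurrent draft head and tree attention are beyond the scope of this article, but the draft-then-verify loop at the heart of speculative decoding can be sketched with toy stand-in models. Everything below is illustrative: `draft_tokens` and `target_next` are hypothetical placeholders, not Apple's implementation, and the draft model is deliberately wrong at one position so the verification step has something to reject.

```python
def draft_tokens(prefix, k):
    # Toy draft model: cheaply proposes the next k tokens (stand-in for
    # ReDrafter's RNN draft head). Deliberately wrong at i == 2 so the
    # verification step has something to reject.
    last = prefix[-1]
    return [0 if i == 2 else last + 1 + i for i in range(k)]

def target_next(prefix):
    # Toy target model: the "ground truth" LLM, one token at a time.
    # Here the true continuation is simply an incrementing sequence.
    return prefix[-1] + 1

def speculative_decode(prompt, num_tokens, k=4):
    """Generate num_tokens tokens via draft-then-verify speculative decoding."""
    out = list(prompt)
    while len(out) - len(prompt) < num_tokens:
        draft = draft_tokens(out, k)
        ctx = list(out)
        accepted = []
        # Verify draft tokens against the target model; a real system checks
        # all of them in one parallel forward pass, which is the speedup.
        for t in draft:
            if target_next(ctx) == t:
                accepted.append(t)
                ctx.append(t)
            else:
                break
        if len(accepted) < k:
            # On a mismatch (or exhausted draft), keep the target model's own
            # token, so every loop iteration emits at least one correct token.
            accepted.append(target_next(ctx))
        out.extend(accepted)
    return out[len(prompt):len(prompt) + num_tokens]

print(speculative_decode([5], 6))  # → [6, 7, 8, 9, 10, 11]
```

Because accepted tokens are always re-checked against the target model, the output is identical to plain autoregressive decoding; the gain is that several tokens can be confirmed per target-model pass instead of one.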
This week, Apple further announced that, through the collaboration with NVIDIA, ReDrafter has been integrated into NVIDIA's TensorRT-LLM inference acceleration framework. Machine learning developers using NVIDIA GPUs can now leverage ReDrafter's acceleration in production environments. Notably, since high-performance multi-GPU servers are expensive, the collaboration aims to reduce both latency and the amount of hardware required, offering a more economical solution.
In benchmark tests conducted with NVIDIA, generation efficiency with ReDrafter improved significantly, with token generation speed in greedy decoding mode increasing by 2.7 times. This means developers can produce more output in less time, giving users a faster service experience.
Beyond confirming the NVIDIA partnership, Apple also stated that it is considering Amazon's Trainium2 chips to improve model training efficiency, with pre-training on Trainium2 expected to be 50% more efficient than on existing hardware.
Official Blog: https://developer.nvidia.com/blog/nvidia-tensorrt-llm-now-supports-recurrent-drafting-for-optimizing-llm-inference/
Key Points:
🌟 Apple collaborates with NVIDIA to increase the generation speed of large language models by nearly three times.
🚀 The open-source ReDrafter technique combines recurrent neural networks with speculative decoding to significantly accelerate token generation.
💰 This collaboration helps reduce costs, providing machine learning developers with a more efficient solution.