Shanghai AI Lab Releases Large-Model Training Toolbox XTuner, Significantly Reducing Training Costs

The Shanghai Artificial Intelligence Laboratory recently introduced XTuner, a low-cost training toolkit for large models. It adapts to a range of hardware, allowing developers to train custom large models for specific needs with as little as 8 GB of consumer-grade GPU memory, sharply lowering the cost barrier to training. XTuner is compatible with several open-source large models, including InternLM (Shusheng·PuYu) and Llama, and supports tasks such as incremental pre-training and instruction fine-tuning. It balances ease of use with configurability, building in standardized pipelines for incremental pre-training, single-turn and multi-turn dialogue instruction fine-tuning, and tool-calling instruction fine-tuning, so that developers can focus solely on their data. XTuner further strengthens the practical tooling of the Shanghai AI Lab's open-source ecosystem for large-model research and application, working with partners across sectors to advance the technology.
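To make the workflow concrete, here is a minimal sketch of fine-tuning with the XTuner command-line tool. The subcommands (`list-cfg`, `copy-cfg`, `train`, `convert pth_to_hf`) are XTuner's documented CLI; the specific config name and checkpoint path below are illustrative assumptions, and the actual run requires a suitable GPU.

```shell
# Install XTuner from PyPI
pip install -U xtuner

# List the built-in configs for supported models (InternLM, Llama, etc.)
xtuner list-cfg

# Copy a built-in QLoRA config locally so it can be pointed at your own dataset
# (config name shown is one example; pick one from `xtuner list-cfg`)
xtuner copy-cfg internlm_7b_qlora_oasst1_e3 .

# Launch single-GPU QLoRA fine-tuning; QLoRA quantization is what makes
# training feasible on roughly 8 GB of consumer GPU memory
xtuner train ./internlm_7b_qlora_oasst1_e3_copy.py

# Convert a saved checkpoint to a Hugging Face adapter
# (checkpoint path is a placeholder for the file produced under work_dirs/)
xtuner convert pth_to_hf ./internlm_7b_qlora_oasst1_e3_copy.py \
    ./work_dirs/example_checkpoint.pth ./hf_adapter
```

Because the config file fully describes the model, dataset, and training schedule, customizing a run usually means editing only the dataset path in the copied config, which is what lets developers "focus solely on the data."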

Shanghai Artificial Intelligence Laboratory
This article is from AIbase Daily
Welcome to the [AI Daily] column! This is your daily guide to the world of artificial intelligence. Every day we bring you the hot topics in AI, with a focus on developers, helping you track technical trends and discover innovative AI product applications.