Shanghai AI Lab Releases Large-Model Training Toolbox XTuner, Significantly Reducing Training Costs

The Shanghai Artificial Intelligence Laboratory (Shanghai AI Lab) recently released XTuner, a low-cost training toolkit for large models. XTuner supports a range of hardware, allowing developers to fine-tune "custom large models" for specific needs with as little as 8GB of consumer-grade GPU memory, substantially lowering the cost barrier to training. It is compatible with several open-source large models, including InternLM (Shusheng PuYu) and Llama, and covers tasks such as incremental pre-training and instruction fine-tuning. Balancing ease of use with configurability, XTuner ships standardized pipelines for incremental pre-training, single-turn and multi-turn dialogue instruction fine-tuning, and tool-call instruction fine-tuning, so developers only need to focus on preparing their data. XTuner further strengthens the practical tooling layer of Shanghai AI Lab's full-chain open-source system for large-model research and application, working with partners across sectors to advance the technology.
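As a rough illustration of the "prepare the data, pick a config, train" workflow described above, the sketch below writes a tiny single-turn instruction dataset and notes the command-line steps. The JSON layout, the file name `my_dataset.json`, and the config name `internlm_7b_qlora_oasst1_e3` are illustrative assumptions based on XTuner's documented conventions, not details from this announcement; consult the project's documentation for the exact schema and config list.

```python
# A minimal sketch of an XTuner-style fine-tuning workflow (assumptions noted below).
import json

# Single-turn dialogue samples in a {"conversation": [...]} layout
# (an assumed format; check XTuner's docs for the exact schema).
samples = [
    {
        "conversation": [
            {
                "system": "You are a helpful assistant.",
                "input": "Summarize what XTuner is in one sentence.",
                "output": "XTuner is a low-cost toolkit for fine-tuning "
                          "open-source large language models.",
            }
        ]
    },
]

# Developers "only need to focus on the data": serialize it to disk,
# then hand everything else to a built-in config.
with open("my_dataset.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)

# Training is then driven from the command line, for example
# (the config name below is hypothetical; QLoRA-style configs are
# what keep the memory footprint near the 8GB figure cited above):
#   pip install -U xtuner
#   xtuner list-cfg                            # browse built-in configs
#   xtuner train internlm_7b_qlora_oasst1_e3   # launch fine-tuning
```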