Meta Releases Llama 2-Long Model, Reducing Computational Demand for Long Text Processing by 40%
Meta has released Llama 2-Long, a model that handles long texts without a corresponding increase in computational demand while maintaining strong performance. By combining continual pre-training on longer sequences, an improved positional encoding, and a refined data mix, it cuts computational overhead by up to 40%. The model performs well on both short- and long-context tasks, even surpassing GPT-3.5 on some of them, and brings fresh momentum to the field of natural language processing.
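The article does not spell out what the "improved positional encoding" is; the underlying Llama 2-Long work is reported to use rotary positional embeddings (RoPE) with a much larger base frequency so that attention generalizes to contexts far beyond the original 4K training window. The snippet below is a minimal illustrative sketch of that idea only; the function names, shapes, and the 500,000 base value are assumptions for illustration, not Meta's code.

```python
# Sketch: RoPE with an adjustable base frequency (assumed values, not Meta's code).
# A larger base slows the rotation of low-frequency dimensions, which is the
# reported trick for extending Llama 2 to longer contexts.
import torch

def rope_angles(head_dim: int, max_pos: int, base: float = 10_000.0) -> torch.Tensor:
    """Rotation angles of shape (max_pos, head_dim // 2)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float()
    return torch.outer(positions, inv_freq)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate query/key vectors x of shape (seq_len, head_dim) by the given angles."""
    half = x.shape[-1] // 2
    x1, x2 = x[..., :half], x[..., half:]
    cos, sin = angles.cos(), angles.sin()
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Original short-context setup vs. a long-context variant with a raised base.
angles_4k  = rope_angles(head_dim=128, max_pos=4096,  base=10_000.0)
angles_32k = rope_angles(head_dim=128, max_pos=32768, base=500_000.0)
q = torch.randn(32768, 128)
q_rotated = apply_rope(q, angles_32k)
```

Because only the base constant changes, the model's architecture and per-token cost stay the same, which is consistent with the article's point that longer contexts do not require extra computational demand.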