E^2-LLM
Efficient and Extreme Length Extension of Large Language Models
Categories: Productivity, Large Language Model, Efficient Computing
E^2-LLM is an efficient method for extreme length extension of large language models: it supports long-context tasks with a single training run at significantly reduced computational cost. The method builds on RoPE positional embeddings and introduces two distinct augmentation strategies to improve the model's robustness at inference time. Experimental results across multiple benchmark datasets demonstrate the effectiveness of E^2-LLM on challenging long-context tasks.
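Since the description builds on RoPE, here is a minimal NumPy sketch of rotary position embeddings for context. This is an illustrative implementation only; the function name, shapes, and pairing convention are our own choices, not taken from E^2-LLM's codebase:

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply Rotary Position Embedding (RoPE) to x of shape (seq_len, dim).

    Channel pairs are rotated by an angle that grows linearly with the
    token position, so attention scores depend on relative positions.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies: base^(-2i/dim)
    freqs = base ** (-np.arange(half) * 2.0 / dim)   # (half,)
    angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # 2-D rotation applied to each (x1, x2) channel pair
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Because each pair is only rotated, vector norms are preserved and position 0 is left unchanged; length-extension methods in this family work by rescaling the position indices (the `np.arange(seq_len)` term) seen during training.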
E^2-LLM Visits Over Time
Monthly Visits: 17,788,201
Bounce Rate: 44.87%
Pages per Visit: 5.4
Visit Duration: 00:05:32