Qwen1.5-110B

The first open-source model in the Qwen1.5 series with over 100 billion parameters, offering multilingual support and an efficient Transformer decoder architecture.

Tags: Chinese Selection, Productivity, Artificial Intelligence, Machine Learning
Qwen1.5-110B, the largest model in the Qwen1.5 series, has 110 billion parameters, multilingual support, and an efficient Transformer decoder architecture that uses Grouped Query Attention (GQA) for faster inference. It is comparable to Meta-Llama-3-70B in base model evaluations and performs strongly in chat evaluations such as MT-Bench and AlpacaEval 2.0. The release demonstrates that significant performance gains are still available from scaling up model size, and it suggests that future improvements will come from scaling both data and model size.
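To make the GQA mention concrete, here is a minimal PyTorch sketch of grouped-query attention, in which several query heads share a single key/value head so the KV cache shrinks during inference. The head counts and shapes below are illustrative assumptions, not the actual Qwen1.5-110B configuration.

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """Scaled dot-product attention where groups of query heads share one K/V head.

    q: (batch, n_q_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim), with n_q_heads % n_kv_heads == 0
    """
    group = q.shape[1] // k.shape[1]
    # Broadcast each K/V head across its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Illustrative shapes only; the real Qwen1.5-110B head counts are not given here.
q = torch.randn(1, 8, 16, 64)   # 8 query heads
k = torch.randn(1, 2, 16, 64)   # 2 shared K/V heads -> groups of 4
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```

Sharing K/V heads across query-head groups reduces KV-cache memory roughly by the grouping factor, which is the inference-efficiency benefit the description refers to.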
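As a usage note, Qwen1.5 checkpoints load through the standard Hugging Face transformers API. A minimal sketch, assuming the chat variant is published under the repo id Qwen/Qwen1.5-110B-Chat and that a recent transformers release (4.37+) plus accelerate are installed:

```python
# Sketch: load and query the chat model via Hugging Face transformers.
# The repo id and version requirement are assumptions, not stated on this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-110B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bf16/fp16 per the checkpoint config
    device_map="auto",    # shard across available GPUs
)

messages = [{"role": "user", "content": "Summarize grouped query attention in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

At 110 billion parameters the weights alone occupy a few hundred gigabytes in bf16, so multi-GPU sharding or a quantized variant is effectively required to run this model.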

Qwen1.5-110B Visit Over Time

Monthly Visits: 396,022
Bounce Rate: 59.53%
Pages per Visit: 1.7
Avg. Visit Duration: 00:01:06

Qwen1.5-110B Visit Trend

Qwen1.5-110B Visit Geography

Qwen1.5-110B Traffic Sources

Qwen1.5-110B Alternatives