MiniMax-Text-01
MiniMax-Text-01 is a powerful language model with 456 billion total parameters and a context window of up to 4 million tokens.
MiniMax-Text-01 is a large language model developed by MiniMaxAI with 456 billion total parameters, of which 45.9 billion are activated per token. It employs a hybrid architecture that combines lightning attention, softmax attention, and a mixture of experts (MoE). Through advanced parallelism strategies and innovative compute-communication overlap methods, such as Linear Attention Sequence Parallelism Plus (LASP+), variable-length ring attention, and Expert Tensor Parallelism (ETP), it extends the training context length to 1 million tokens and can handle up to 4 million tokens at inference. MiniMax-Text-01 has demonstrated top-tier performance across multiple academic benchmarks.
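The parameter figures above imply a sparse MoE activation ratio: only a small fraction of the full model participates in each token's forward pass. A minimal sketch of that arithmetic, using the published totals (the helper name is illustrative, not from MiniMaxAI's code):

```python
# Sparse-activation arithmetic for an MoE model, using the
# published MiniMax-Text-01 figures (456B total, 45.9B active).
TOTAL_PARAMS_B = 456.0   # total parameters, in billions
ACTIVE_PARAMS_B = 45.9   # parameters activated per token, in billions

def active_fraction(total_b: float, active_b: float) -> float:
    """Fraction of the model's weights used for any single token."""
    return active_b / total_b

ratio = active_fraction(TOTAL_PARAMS_B, ACTIVE_PARAMS_B)
print(f"Active per token: {ratio:.1%}")  # roughly 10% of the full model
```

This is why an MoE model of this size can be served with far less compute per token than a dense 456B model would require.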
MiniMax-Text-01 Visit Over Time
Monthly Visits: 21,315,886
Bounce Rate: 45.50%
Pages per Visit: 5.2
Visit Duration: 00:05:02