AIbase · 2025-01-15 09:21:34
MiniMax Open Sources MiniMax-01 New Series Models, Performance Comparable to GPT-4o
On January 15, 2025, MiniMax announced the open sourcing of its new model series, MiniMax-01, which includes the foundational language model MiniMax-Text-01 and the visual multimodal model MiniMax-VL-01. The MiniMax-01 series makes a bold architectural bet: it is the first to implement linear attention mechanisms at this scale, breaking through the limitations of the traditional Transformer architecture. With 456 billion total parameters, of which 45.9 billion are activated per token, its overall performance rivals that of leading international models.
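The core idea behind linear attention is to replace the softmax similarity in standard attention with a kernel feature map, so that keys and values can be aggregated once instead of comparing every query against every key. The sketch below is a minimal, generic illustration of that technique (using the common ELU+1 feature map), not MiniMax's actual implementation; all function names here are illustrative.

```python
import numpy as np

def feature_map(x):
    # ELU(x) + 1: keeps features strictly positive so weights act like
    # (unnormalized) attention probabilities.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Compute phi(Q) @ (phi(K)^T V) with per-query normalization.

    Standard attention costs O(n^2 * d) in sequence length n; this
    factorization costs O(n * d^2), which is what makes long contexts
    tractable.
    """
    Qf, Kf = feature_map(Q), feature_map(K)   # (n, d)
    kv = Kf.T @ V                             # (d, d_v): aggregated key-value stats
    z = Kf.sum(axis=0)                        # (d,): normalizer accumulator
    return (Qf @ kv) / (Qf @ z)[:, None]      # (n, d_v)

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

Because each output row is a positive-weighted average of the value rows, the result stays bounded by the values themselves, and the key-value summary `kv` can be updated incrementally during autoregressive decoding.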