Recently, the research team at BitEnergy AI has developed a new algorithm called "Linear Complexity Multiplication" (L-Mul), which significantly reduces the energy consumption of artificial intelligence systems.

Specifically, the algorithm replaces costly floating-point multiplications with simple integer additions, which the team reports can cut the energy cost of floating-point tensor multiplications by up to 95%. The study, titled "Addition is All You Need for Energy-efficient Language Models," showcases L-Mul's considerable potential for energy conservation.
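The bit-level trick behind this idea can be sketched in a few lines of Python. This is an illustrative Mitchell-style approximation in the spirit of L-Mul, not the authors' implementation (the paper defines a precise correction offset, which this simplified sketch omits), and it is valid only for positive, normal float32 values:

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a float32 as its raw IEEE-754 bit pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit pattern as a float32 value."""
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

# Exponent bias (127) shifted into the exponent field of a float32.
BIAS = 127 << 23

def approx_mul(x: float, y: float) -> float:
    """Approximate x * y with one integer addition (positive normals only).

    Adding the two bit patterns adds the exponents exactly and
    approximates the mantissa product: (1+mx)(1+my) ~= 1 + mx + my,
    dropping the small mx*my cross term.
    """
    return bits_to_float(float_to_bits(x) + float_to_bits(y) - BIAS)
```

For example, `approx_mul(2.0, 3.0)` happens to return exactly `6.0` (both mantissa cross terms vanish), while `approx_mul(1.5, 1.5)` returns `2.0` instead of `2.25`; the dropped `mx*my` term bounds the relative error, which is what makes trading a hardware multiplier for an adder plausible in neural-network workloads.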

Image source note: The image was generated by AI and provided by the image licensing service Midjourney.

The research team tested L-Mul on a range of tasks, including natural language understanding, structured reasoning, mathematical calculation, and commonsense question answering, and found that it performs well across all of them. Notably, they found that L-Mul can be applied directly to the attention mechanism, a core component of modern language models such as GPT-4, with negligible impact on model performance.
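As a toy illustration of what "dropping approximate multiplication into attention" means, the sketch below computes softmax attention weights for one query while routing every scalar multiply in the query-key dot products through the bit-addition approximation. This is not the authors' code: `attention_weights` and the sign-handling wrapper are assumptions of this sketch, and the approximation is again the simplified Mitchell-style variant rather than the paper's exact L-Mul offset.

```python
import math
import struct

# Float32 exponent bias (127), shifted into the exponent field.
BIAS = 127 << 23

def _bits(x: float) -> int:
    return struct.unpack("<I", struct.pack("<f", x))[0]

def _float(b: int) -> float:
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_mul(x: float, y: float) -> float:
    """Sign-aware approximate multiply: one XOR for the sign, one add for magnitude."""
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = (_bits(x) ^ _bits(y)) & 0x80000000
    mag = _bits(abs(x)) + _bits(abs(y)) - BIAS
    return _float(sign | (mag & 0x7FFFFFFF))

def attention_weights(q, keys):
    """Scaled dot-product attention for one query, with approximated multiplies."""
    d = len(q)
    scores = [sum(approx_mul(qi, ki) for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    m = max(scores)                       # subtract max to stabilize softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With `q = [1.0, 0.0]` and keys `[[1.0, 0.0], [0.0, 1.0]]`, the first key receives the larger weight, just as with exact attention; the team's reported finding is that this kind of substitution leaves downstream accuracy almost unchanged.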

BitEnergy AI's team stated that L-Mul not only helps enhance academic and economic competitiveness but also increases AI autonomy. They believe this technology can help large enterprises develop custom AI models faster and more economically.

In the future, they plan to implement L-Mul at the hardware level and to develop programming interfaces for high-level model design, with the goal of optimizing text, symbolic, and multimodal generative AI models for L-Mul-native hardware.

This innovative algorithm not only promises a significant reduction in energy consumption but also contributes to the further advancement of AI technology. With increasing demands for environmental protection and energy efficiency, the emergence of L-Mul undoubtedly brings new hope to the field of artificial intelligence.

Key Points:

🌱 The L-Mul algorithm can reduce the energy cost of floating-point multiplications in AI systems by up to 95%.  

🔍 The algorithm performs exceptionally well across various tasks, especially when applied to the attention mechanism of modern language models.  

🚀 BitEnergy AI plans to implement L-Mul at the hardware level and develop corresponding programming interfaces to optimize AI models.