Liquid AI recently unveiled its new model, "Hyena Edge," ahead of the International Conference on Learning Representations (ICLR) 2025. This convolutional multi-hybrid model is designed to deliver more efficient AI on smartphones and other edge devices. The Boston-based MIT spin-off aims to move beyond the Transformer architecture that most large language models (LLMs) rely on.


Hyena Edge excels in both computational efficiency and language model quality. Real-world testing on a Samsung Galaxy S24 Ultra demonstrated lower latency, a reduced memory footprint, and superior performance compared to a parameter-matched Transformer++ model across various benchmarks. The architecture points to a new direction for edge AI.

Unlike most small models designed for mobile deployment, Hyena Edge forgoes the traditional heavy attention mechanism. Instead, it leverages Hyena-Y gated convolutions, which replace two-thirds of the grouped query attention (GQA) operations. Hyena Edge's architecture originates from Liquid AI's "Synthesis of Tailored Architectures" (STAR) framework, which uses evolutionary algorithms to automatically design model structures, optimizing for multiple hardware-specific goals such as latency, memory usage, and model quality.
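To make the gated-convolution idea concrete, here is a toy sketch in NumPy. It is not Liquid AI's actual Hyena-Y operator (whose filters and gating are more elaborate); it only illustrates the general pattern of a causal depthwise convolution modulated by a data-dependent elementwise gate, in place of attention. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def gated_causal_conv(x, conv_w, gate_w):
    """Toy gated convolution (illustrative, not the real Hyena-Y block).

    x:      (seq_len, d) input activations
    conv_w: (k, d) causal depthwise filter of length k
    gate_w: (d, d) projection producing an elementwise sigmoid gate
    """
    seq_len, d = x.shape
    k = conv_w.shape[0]
    # Causal depthwise convolution: position t only sees steps t-k+1 .. t.
    padded = np.vstack([np.zeros((k - 1, d)), x])
    conv_out = np.stack([
        (padded[t:t + k] * conv_w).sum(axis=0) for t in range(seq_len)
    ])
    # Data-dependent gate, multiplied in elementwise.
    gate = 1.0 / (1.0 + np.exp(-(x @ gate_w)))
    return conv_out * gate
```

Because the filter is short and causal, each output position touches only a fixed window of past inputs, which is part of why such operators can be cheaper than full attention on long sequences.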

To validate Hyena Edge's real-world capabilities, Liquid AI conducted tests on a Samsung Galaxy S24 Ultra. Results showed that the model's prefill and decode latency was up to 30% faster than Transformer++ at longer sequence lengths. Furthermore, Hyena Edge consistently used less memory than the Transformer baseline across all tested sequence lengths, making it well suited to resource-constrained environments.
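Prefill latency (processing the whole prompt in one pass) and decode latency (generating one token at a time) are measured separately in benchmarks like this. A minimal timing harness might look as follows; `model_step` is a hypothetical stand-in for one forward pass over a given number of tokens, to be replaced with a real model call.

```python
import time

def measure_latency(model_step, prompt_len=256, gen_tokens=32):
    """Time prefill and per-token decode for a hypothetical model_step(n)."""
    t0 = time.perf_counter()
    model_step(prompt_len)          # prefill: the whole prompt at once
    prefill_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(gen_tokens):
        model_step(1)               # decode: one token per forward pass
    decode_s_per_tok = (time.perf_counter() - t0) / gen_tokens
    return prefill_s, decode_s_per_tok
```

On-device reports typically sweep `prompt_len` across sequence lengths, which is how latency curves like the ones Liquid AI published are produced.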


In benchmark testing, Hyena Edge, trained on 100 billion tokens, performed well across several standard small language model tests, including Wikitext, Lambada, PiQA, HellaSwag, Winogrande, ARC-easy, and ARC-challenge. It achieved notably lower perplexity on Wikitext and Lambada, and accuracy gains on PiQA, HellaSwag, and Winogrande.
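For readers unfamiliar with the metric: perplexity is the exponential of the average negative log-likelihood the model assigns to each token, so lower is better. A minimal illustration:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood over the tokens)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token has perplexity 4:
# it is, on average, as uncertain as a uniform choice among 4 options.
ppl = perplexity([math.log(0.25)] * 4)
```

This is why a drop in Wikitext or Lambada perplexity corresponds directly to the model assigning higher probability to the true next token.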

Liquid AI plans to open-source a series of Liquid foundation models, including Hyena Edge, in the coming months. The goal is to build efficient, general-purpose AI systems that scale from cloud data centers to personal edge devices. Hyena Edge's success lies not only in its outstanding performance metrics but also in showcasing the potential of automated architecture design, setting a new standard for future edge-optimized AI.

Official Blog: https://www.liquid.ai/research/convolutional-multi-hybrids-for-edge-devices

Key Highlights:

🌟 Hyena Edge is Liquid AI's new convolutional model, specifically designed for edge devices like smartphones.

🚀 It outperforms traditional Transformer++ models in computational efficiency and memory usage, making it suitable for resource-constrained environments.

📈 Hyena Edge demonstrates excellent performance across multiple standard language model benchmarks and is planned for open-sourcing to promote wider adoption.