Useful Sensors, a U.S.-based startup, has introduced an open-source speech recognition model called Moonshine. Designed to process audio more efficiently, Moonshine requires fewer computational resources and is five times faster than OpenAI's Whisper. The model is aimed at real-time applications on resource-constrained hardware and uses an architecture that adapts to the length of the input audio.

Unlike Whisper, which processes audio in fixed 30-second segments, Moonshine scales its processing time with the actual length of the audio. This makes it particularly effective for shorter clips, since it avoids the overhead of zero-padding them out to a full 30-second window.
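
To see why fixed windows are wasteful for short clips, the illustrative sketch below (plain Python, not part of the Moonshine codebase) pads clips of different lengths to a 30-second window at 16 kHz and reports how much of each window is zero-padding:

```python
import numpy as np

SAMPLE_RATE = 16_000   # both models operate on 16 kHz audio
WINDOW_SECONDS = 30    # Whisper's fixed input window

def padding_overhead(clip_seconds: float) -> float:
    """Fraction of a fixed 30-second window that is zero-padding
    when the actual clip is shorter (illustrative only)."""
    clip = np.zeros(int(clip_seconds * SAMPLE_RATE), dtype=np.float32)
    window = WINDOW_SECONDS * SAMPLE_RATE
    padded = np.pad(clip, (0, max(0, window - len(clip))))
    return 1.0 - len(clip) / len(padded)

for seconds in (2, 5, 10, 30):
    print(f"{seconds:>2}s clip -> {padding_overhead(seconds):.0%} of the window is padding")
```

A 2-second voice command would occupy only about 7% of a fixed window, which is the overhead Moonshine's variable-length processing avoids.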

Moonshine comes in two versions: Tiny, with 27.1 million parameters, and Base, with 61.5 million parameters. OpenAI's comparable models are larger: Whisper tiny.en has 37.8 million parameters and base.en has 72.6 million.

Testing results indicate that Moonshine's Tiny model matches Whisper's accuracy while consuming fewer computational resources. Both Moonshine versions achieve lower Word Error Rates (WER) than Whisper across a range of audio levels and background-noise conditions, demonstrating robust performance.
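
For reference, Word Error Rate is the word-level edit distance between a model's transcript and the reference transcript, divided by the number of reference words. A minimal implementation (not tied to the project's own evaluation code) looks like this:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein edit distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("turn the living room lights off",
                      "turn the living room light off"))  # 1 error in 6 words ≈ 0.17
```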

The research team notes that Moonshine still has room for improvement on very short audio clips (under one second), which are underrepresented in the training data. Adding more such clips to the training data could improve the model's performance.

Moreover, Moonshine's offline capability opens up new application scenarios, making applications feasible that were previously blocked by hardware constraints. Because it draws less power than Whisper, Moonshine can run on smartphones and small devices such as the Raspberry Pi. Useful Sensors is using Moonshine to build Torre, its English-Spanish translator.

The code for Moonshine has been released on GitHub. Users should keep in mind that AI transcription systems such as Whisper can make mistakes: studies have found hallucinated content in roughly 1.4% of Whisper's transcriptions, with error rates rising further for speakers with speech impairments.

Project link: https://github.com/usefulsensors/moonshine
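
A minimal usage sketch is shown below, assuming the repository's Python package exposes a `transcribe` helper as in its examples; the package name, model identifiers, and function signature are assumptions and may differ from the current release, so check the README before use.

```python
# Hypothetical usage sketch; the import name, model identifiers, and
# transcribe() signature are assumptions based on the project's examples.
import moonshine

# "moonshine/tiny" and "moonshine/base" correspond to the two published model sizes.
text = moonshine.transcribe("speech_sample.wav", "moonshine/tiny")
print(text)
```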

Key Points:

🌟 Moonshine is an open-source speech recognition model that is five times faster than OpenAI's Whisper.

🔍 The model adjusts processing time based on audio length, making it ideal for short audio clips.

🖥️ Moonshine supports offline operation, suitable for use on hardware with limited resources.