Google DeepMind recently launched an updated version of its AI music production tool, MusicFX DJ, designed to serve users with or without professional music knowledge. The company first showcased this software at this year's Google I/O conference.

Unlike conventional DJ software that mixes pre-recorded tracks, MusicFX DJ can generate brand new music in real-time. Users simply input what they want—such as specific genres, instruments, or moods—and the AI will create it instantly.

Google DeepMind stated that the system has undergone two major improvements. Firstly, it can now stream music in real time, achieved by adapting models that previously worked only offline. The software generates each new section of music based on what it has already produced and the user's instructions.
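That generate-then-continue loop can be illustrated with a minimal sketch. Everything here is hypothetical: MusicFX DJ's actual architecture is not public, and the function names (`generate_chunk`, `stream`) are placeholders standing in for the model, which is represented by a trivial string-building stub.

```python
# Hypothetical sketch of chunked real-time generation: each new chunk
# is conditioned on everything already produced plus the current user
# prompt. The real model is proprietary; this stub only mimics the
# control flow described in the article.

def generate_chunk(history: list[str], prompt: str) -> str:
    """Stand-in for the model: derive the next chunk from prior output
    and the active prompt."""
    return f"chunk{len(history)}[{prompt}]"

def stream(prompts: list[str]) -> list[str]:
    """Produce one chunk per prompt, feeding earlier output back in as
    context, the way a streaming generator would."""
    history: list[str] = []
    for prompt in prompts:
        history.append(generate_chunk(history, prompt))
    return history

print(stream(["lo-fi beat", "add flute", "bass drop"]))
```

The key point the sketch captures is that later prompts do not restart generation; they steer a continuation of the audio already played.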


Secondly, users can now input multiple prompts at different times, mixing different musical elements, similar to how DJs layer tracks.

MusicFX DJ produces studio-quality audio in 48 kHz stereo, a standard competitor Udio reached in July with its 1.5 release. Users can export clips of up to 60 seconds and share their sessions with others.

More precise control

Google DeepMind stated that the new version of MusicFX DJ gives users more control over their music. Users can adjust the orchestration, insert musical gaps, and trigger bass drops at any time, and the software also lets them change the tempo and musical key while creating.


Meanwhile, Google is testing an AI music feature called "Dream Track" on YouTube. This experiment allows creators in the US to create instrumental tracks simply by describing what they want. Like all of Google's AI-generated content, these tracks are watermarked with SynthID, the same technology Google recently began applying to text.

Currently, these tools are not available to the public. Only selected testers can access the Music AI Sandbox, and Dream Track is limited to US creators. Google said it will roll out successful features to other products over time.

The music industry has supported this project through Google's "Music AI Incubator". Six-time Grammy winner Jacob Collier helped develop the tool, describing it as "real-time sound putty" that can create surprising connections between unlikely musical elements.