On the digital battlefield, the music industry is mounting an unprecedented defense. Faced with the threat of AI-powered deepfakes, record labels, artists, and producers are fighting back on several fronts, but the path to legal protection is fraught with challenges.

Sony Music recently revealed it has requested the removal of as many as 75,000 deepfake tracks, a staggering figure that underscores the scale of the AI infringement problem. Information security firm Pindrop notes that AI-generated music typically carries "telltale characteristics" that in theory make it easy to identify: "even if it sounds realistic, AI-generated songs often have subtle irregularities in frequency changes, rhythm, and digital patterns that wouldn't appear in human performances."
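Pindrop does not disclose how its detection actually works, but the kind of signal-level check the quote alludes to can be illustrated with a minimal, hypothetical sketch: measuring how uniformly a track's spectrum changes from frame to frame. The file name, the threshold-free "score," and the heuristic itself are assumptions for demonstration only; a real detector would combine many such features with trained models.

```python
# Illustrative only: a crude spectral-regularity heuristic, NOT Pindrop's
# (undisclosed) detector. Assumes a local WAV file named "song.wav".
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft


def spectral_flux_variation(path: str) -> float:
    """Coefficient of variation of frame-to-frame spectral flux.

    Assumed intuition: human performances change in bursty, irregular
    ways, while overly smooth, uniform spectral change could be one weak
    hint of synthetic audio.
    """
    rate, samples = wavfile.read(path)      # sample rate, PCM samples
    samples = samples.astype(np.float64)
    if samples.ndim > 1:                    # collapse stereo to mono
        samples = samples.mean(axis=1)
    peak = np.max(np.abs(samples))
    if peak > 0:
        samples /= peak                     # normalize to [-1, 1]

    # Short-time Fourier transform: one spectrum per ~46 ms frame at 44.1 kHz
    _, _, spectrum = stft(samples, fs=rate, nperseg=2048)
    magnitude = np.abs(spectrum)

    # Spectral flux: Euclidean distance between consecutive frame spectra
    flux = np.sqrt(np.sum(np.diff(magnitude, axis=1) ** 2, axis=0))

    # Low variation = suspiciously uniform change; high = bursty, human-like
    return float(np.std(flux) / (np.mean(flux) + 1e-12))


if __name__ == "__main__":
    print(f"flux variation score: {spectral_flux_variation('song.wav'):.3f}")
```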

However, the reality is worrying. Spend a few minutes on YouTube or Spotify, two of the biggest music streaming platforms, and you will easily find fake 2Pac raps about pizza or non-existent Ariana Grande covers of K-pop songs. Responding to the problem, Spotify's head of policy, Sam Duboff, stated: "We take this issue very seriously and are working to develop new tools to improve the situation." YouTube says it is "improving" its ability to identify AI-generated imitations and has promised an announcement in the coming weeks.

Emarketer analyst Jeremy Goldman succinctly points out: "Bad actors are always one step ahead," leaving artists, record labels, and other music industry players "playing catch-up." However, Goldman also notes that YouTube, as a multi-billion dollar tech giant, has a strong incentive to solve the problem: "You don't want your platform to become an AI nightmare."

[Illustration: music, audio, sound waves. AI-generated image provided via Midjourney]

Beyond deepfakes, the music industry is even more concerned about the unauthorized use of its catalog to train generative AI models such as Suno, Udio, and Mubert. Last year, several major record labels sued Udio's parent company in New York federal court, alleging that it used "copyrighted recordings" to develop its technology with the ultimate aim of "stealing the audience, fans, and potential licensees of the recordings it copied." Similar lawsuits against Suno were filed in Massachusetts. More than nine months later, however, these cases have yet to get properly underway.

At the heart of these lawsuits is the doctrine of "fair use," which permits limited use of copyrighted material without prior permission. Vanderbilt University law professor Joseph Fishman calls this "a truly uncertain area": because different courts may reach different conclusions, preliminary rulings won't necessarily be decisive, and the question could ultimately land before the Supreme Court.

Meanwhile, the major players in AI-generated music continue to train their models on copyrighted works, which raises the question: is this battle already lost? Fishman believes it is too early to say. Although many models have already been trained on protected material, new versions are released constantly, and it is unclear whether any eventual court ruling will affect licensing for those models going forward.

In the legislative arena, record labels, artists, and producers have seen limited success. Several bills have been introduced in the US Congress but haven't yielded substantial results. Some states, notably Tennessee with its strong country music industry, have passed protective legislation, particularly targeting deepfakes.

President Trump, who has positioned himself as a proponent of AI deregulation, could prove another obstacle. Several AI giants, notably Meta, have urged the government to "clarify that using publicly available data to train models is undoubtedly fair use." If the Trump administration adopts this recommendation, it could tip the scales against music professionals, even though the courts theoretically have the final say.

The situation in the UK is not much better. The Labour government is considering changing the law to let AI companies train their models on creators' online content unless rights holders opt out. In protest, more than a thousand musicians, including Kate Bush and Annie Lennox, released an album in February titled "Is This What We Want?", made up of the sound of silence recorded in multiple studios.

Analyst Goldman believes that as long as the music industry remains fragmented, the AI problem will continue to plague it: "The music industry is too decentralized, ultimately putting it at a disadvantage when it comes to tackling this issue."