YouTube has announced a new program designed to better manage the use of generative AI in music content, while it’s also looking to evolve its Content ID policies to account for new generative AI use cases, including remixes and replicas built on AI likenesses in the app.

The issue came to a head earlier this year, with the release of a highly convincing AI-generated track that featured a simulated version of Drake’s voice. That served as a call to arms of sorts for the music industry, prompting labels to take stronger action to protect their copyrights, while also adapting to evolving AI use cases, which are only likely to expand.

Which is really where this new push from YouTube is headed, with CEO Neal Mohan outlining three new AI usage principles that will guide the platform’s decisions moving forward.

The three new principles essentially boil down to this: people are going to use AI in ways that infringe on artists’ work, so new rules are needed to also cover simulated use of that work.

Along this line, YouTube’s working with its music partners to develop a new AI framework, beginning with a “Music AI Incubator” initiative that will explore what AI can do in music creation, and how YouTube can track and measure that usage moving forward.

As per YouTube:

“To kick off the program, we’re working with Universal Music Group – a leader in the space – and their incredible roster of talent. This will help gather insights on generative AI experiments and research that are being developed at YouTube. Working together, we will better understand how these technologies can be most valuable for artists and fans, how they can enhance creativity, and where we can seek to solve critical issues for the future.”

The initial program will incorporate a range of works into AI experiments, to see what creators come up with, and how YouTube can then detect and track such usage.

YouTube’s also looking to advance its Content ID system, to ensure that AI usage is covered within its detection remit.

“Content ID, our best-in-class rights management technology, ensures rights holders get paid for use of their content and has generated billions for the industry over the years. A new era of generated content is here, and it gives us an opportunity to reimagine and evolve again. We’re eager to further build on our focus of helping artists and creators make money on YouTube and will continue to do so in collaboration with our partners.”

That could be difficult, given that Content ID works by matching uploads against reference files of existing works. But the idea seems to be that the system could also be trained on specific voices and styles, expanding its capacity to flag replication via AI.
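As a rough illustration of that distinction, here’s a minimal, hypothetical Python sketch contrasting exact-match fingerprinting (matching a re-upload of a known recording) with similarity scoring over learned voice embeddings (one conceivable way a system could flag an AI clone of an artist’s voice). None of the function names, thresholds, or vectors here reflect YouTube’s actual systems; they’re placeholders for the concept only.

# Hypothetical sketch: exact-content matching vs. voice-similarity matching.
# This does not represent YouTube's Content ID implementation.

import hashlib
import numpy as np


def fingerprint(audio_chunk: bytes) -> str:
    """Exact-content fingerprint: identical bytes produce identical hashes."""
    return hashlib.sha256(audio_chunk).hexdigest()


def voice_similarity(embedding_a: np.ndarray, embedding_b: np.ndarray) -> float:
    """Cosine similarity between two voice embeddings.

    In practice the embeddings would come from a trained voice model;
    here they are just placeholder vectors.
    """
    return float(
        np.dot(embedding_a, embedding_b)
        / (np.linalg.norm(embedding_a) * np.linalg.norm(embedding_b))
    )


# Exact matching catches a re-upload of the same recording...
original = b"...original master recording bytes..."
reupload = b"...original master recording bytes..."
print(fingerprint(original) == fingerprint(reupload))  # True

# ...but an AI track in an artist's voice shares no audio with the original,
# so fingerprint matching fails; comparing learned voice embeddings is one
# plausible way to flag it instead.
artist_voice = np.array([0.8, 0.1, 0.6, 0.3])        # placeholder embedding
ai_clone_voice = np.array([0.78, 0.12, 0.61, 0.29])  # placeholder embedding
print(voice_similarity(artist_voice, ai_clone_voice) > 0.95)  # True

The point is simply that fingerprint-style matching can’t see a track that shares no audio with the original recording, which is why any AI-aware evolution of Content ID would need some notion of “sounds like this artist” rather than “contains this recording.”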

Finally, YouTube’s also looking to evolve its tools for detecting AI misuse, including generative AI creations that depict celebrities, musicians among them, doing or saying things that they never actually did.

“We’ll continue to invest in the AI-powered technology that helps us protect our community of viewers, creators, artists and songwriters – from Content ID, to policies and detection and enforcement systems that keep our platform safe behind the scenes. And we commit to scaling this work even further.”

Essentially, YouTube’s saying that it’s taking the threat of AI copyright infringement seriously, and it’s now working with the labels themselves to better detect such misuse, while also facilitating a level of acceptable use, in line with broader trends.

Ideally, that will eventually enable YouTube to ensure that artists get paid, even for AI versions of their work, though there’s still a way to go in establishing copyright and ownership in generative AI use cases.

Through these new initiatives, YouTube’s hoping that it can be at the forefront of this next wave.