Meta is determined to put more artists out of work, with the latest iteration of its generative AI project for music, called ‘AudioCraft’, now available for experimentation.

Similar to Meta’s ‘MusicGen’ generative audio model, AudioCraft enables you to create new music from text prompts, so you can compose original music and sounds without needing instruments or formal training.

As explained by Meta:

Imagine a professional musician being able to explore new compositions without having to play a single note on an instrument. Or a small business owner adding a soundtrack to their latest video ad on Instagram with ease. That’s the promise of AudioCraft – our latest AI tool that generates high-quality, realistic audio and music from text.

Conceptually it’s an interesting idea. You put in a prompt, like ‘Movie scene in a desert with percussion’, and the AudioCraft system will give you a matching audio sample, which you could then, at least theoretically, use in any context.
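
If you want to try that yourself, AudioCraft is available as an open-source Python package. Here’s a minimal sketch of that prompt-to-audio flow using the library’s MusicGen interface – the checkpoint name, duration, and output filename are illustrative assumptions, so check the official repo for the current API:

```python
# Minimal sketch: text prompt in, audio clip out, via the open-source
# 'audiocraft' package (model name and parameters are illustrative).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint (weights download on first run).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=10)  # roughly 10 seconds of audio

# Describe the audio you want in plain text.
descriptions = ['Movie scene in a desert with percussion']
wav = model.generate(descriptions)  # returns a batch of audio tensors

# Save the first result as desert_scene.wav, with loudness normalization.
audio_write('desert_scene', wav[0].cpu(), model.sample_rate, strategy='loudness')
```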

The new system incorporates Meta’s MusicGen model, which it previewed back in June, along with AudioGen, a second generative model focused on sound effects. MusicGen has been trained on Meta-owned music samples, while AudioGen has been trained on public sound effects, which broadens the audio model and facilitates more intricate and interesting clips.

The opportunities here are virtually endless. Much like generative AI visuals, audio creation from text will open up whole new ways for people to create music, and will eventually open the door for AI users to become recording artists, without having to commit years of their lives to, you know, learning how to be an actual artist.

Which also comes with various potential problems.

We’ve already seen some concerns, with a recent viral track imitating Drake and The Weeknd turning out to be fully AI-created, with no involvement from the artists themselves. That points to future disruption in the music industry, with AI tools enabling misuse of musicians’ work and likeness, without definitive legal recourse.

Though you can bet that the notoriously litigious recording industry will be pushing to establish that recourse very quickly, in order to protect its golden geese.

The bottom line, however, as reflected by this, is that in any art form, it does take a level of skill to create truly great works, which incorporate a human element that can’t be replicated by digital systems. Musicians, writers, painters – none of them have succeeded on technical skill alone. It also requires a level of connection with the work, using it as a medium for deeper communication.

AI tools can likely ‘do the things’ and create approximations of what art should be. But it’s unlikely, for the most part, that they’ll be able to tap into what makes the best artists truly successful.

But maybe tools like this will eventually prove that theory wrong, and as more people experiment with AI creation, there’s bound to be at least some great output that comes from that process.

And for marketers, tools like this could make it quick and easy to add unique music to campaigns.

I mean, preferably we’d find a way to keep artists paid as well, but tools like this may also open up new opportunities.

You can read more about Meta’s ‘AudioCraft’ project here.