
As generative AI tools continue to be integrated into various ad creation platforms, and see expanded use in more general contexts, the question of copyright over generative content looms over everything, as various organizations try to formulate a new way forward on this front.

As it stands right now, brands and individuals can use generative AI content in any way that they choose once they’ve created it via these evolving systems. Technically, that content did not exist before the user typed in their prompt, so the ‘creator’, in a legal context, would be the person who entered the query.

Though even that’s in question. The US Copyright Office says that AI-generated images can’t be copyrighted at all, as an element of ‘human authorship’ is required for such protection. So there may actually be no ‘creator’ in this sense, which seems like a legal minefield in itself.

That’s how the legal provisions stand right now, but a range of artists are seeking changes to protect their copyrighted works, and the highly litigious music industry has now also entered the fray, after an AI-generated track imitating Drake gained major notoriety online.

https://www.youtube.com/watch?v=81Kafnm0eKQ

Indeed, the National Music Publishers Association has already issued an open letter imploring Congress to review the legality of allowing AI models to train on human-created musical works. As it should – the track does sound like Drake, and it does, by all accounts, trade on Drake’s likeness, replicating his distinctive voice and style, and it wouldn’t have gained its popularity without that resemblance.

There does seem to be some legal basis here, as there is in many of these cases, but essentially, the law has simply not caught up with the usage of generative AI tools, and there’s no definitive legal instrument to stop people from creating, and profiting from, AI-generated works, no matter how derivative they might be.

And this is aside from the misinformation and misunderstanding that’s also being sparked by these increasingly convincing AI-generated images.

There have already been several major cases where AI-generated visuals have been so convincing that they’ve sparked confusion, and have even moved stock prices as a result.

The AI-generated ‘Pope in a puffer jacket’, for example, had many questioning its authenticity.

Pope in a Puffer Jacket

More recently, an AI-generated image of an explosion outside the Pentagon sparked a brief panic, before clarification that it wasn’t a real event.

In all of these cases, the concern, aside from copyright infringement, is that we soon won’t be able to tell what’s real and authentic and what’s not, as these tools get better and better at replicating human creation, blurring the lines of creative capacity.

Microsoft is looking to address this with the addition of cryptographic watermarks on all of the images generated by its AI tools – which is a lot, now that Microsoft has partnered with OpenAI, and is looking to integrate OpenAI’s systems into all of its apps.

Working with the Coalition for Content Provenance and Authenticity (C2PA), Microsoft is looking to add an extra level of transparency to AI-generated images by ensuring that all of its generated elements have these watermarks built into their metadata, so that viewers have a means to confirm whether any image is real or AI-created.

Though that can likely be negated by taking screenshots, or by other means that strip the underlying metadata. It’s another measure, for sure, and potentially an important one, but again, we simply don’t have the systems in place to ensure absolute detection and identification of generative AI images, nor the legal basis to enforce against infringement, even with those markers present.
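To illustrate how fragile metadata-based markers can be, here’s a minimal Python sketch using the Pillow imaging library. The `c2pa.provenance` key and its value are purely illustrative assumptions – the real C2PA standard embeds a cryptographically signed manifest, not a plain text tag – but the sketch shows why a screenshot, which copies only the pixels, loses the marker entirely.

```python
import io

from PIL import Image  # third-party: pip install Pillow
from PIL.PngImagePlugin import PngInfo

# 1. Create an image and attach a provenance note in its PNG metadata.
#    ("c2pa.provenance" is a made-up key, used here for illustration only.)
meta = PngInfo()
meta.add_text("c2pa.provenance", "generated-by:example-ai-model")

tagged_bytes = io.BytesIO()
Image.new("RGB", (64, 64), "gray").save(tagged_bytes, format="PNG", pnginfo=meta)

# 2. Reading the file back, the marker survives in the metadata.
tagged_bytes.seek(0)
tagged = Image.open(tagged_bytes)
print(tagged.text.get("c2pa.provenance"))  # generated-by:example-ai-model

# 3. A "screenshot" copies only the pixels, so the marker is gone.
screenshot = Image.new("RGB", tagged.size)
screenshot.paste(tagged)

screenshot_bytes = io.BytesIO()
screenshot.save(screenshot_bytes, format="PNG")  # no metadata passed along
screenshot_bytes.seek(0)
stripped = Image.open(screenshot_bytes)
print(stripped.text.get("c2pa.provenance"))  # None
```

Any tool that re-encodes the pixels without deliberately copying the provenance data has the same effect, which is why metadata-only marking is considered a helpful signal rather than a guarantee.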

What does that mean in a usage context? Well, right now, you are indeed free to use generative AI content, for personal or business reasons, though I would tread carefully if you wanted to, say, use a celebrity likeness.

It’s impossible to know how this will change in future, but AI-generated endorsements like the recent fake Ryan Reynolds ad for Tesla (which is not an official Tesla promotion) seem like a prime target for legal reproach.

That video has been pulled from its original source online, which suggests that while you can create AI content, and you can replicate the likeness of a celebrity, with no definitive legal recourse in place as yet, there are lines that are being drawn, and provisions that are being set in place.

And with the music industry now paying attention, I suspect that new rules will be drawn up sometime soon to restrict what can be done with generative AI tools in this respect.

But for backgrounds, minor elements, and content that’s not clearly derivative of an artist’s work, you can indeed use generative AI, legally, within your business content. That also goes for text – though make sure you double- and triple-check the output, because ChatGPT, in particular, has a propensity to make things up.