I totally cannot see this being used for ill purpose, or for misrepresentation in all new ways.
As you’re likely aware, AI-generated art has become a major trend in recent months, with tools like DALL·E and Midjourney enabling anybody to create strange, unique, and sometimes beautiful digital artworks from simple text prompts.
These tools source a range of images from across the web – the same way that, say, Google Images shows you visual examples of any term you enter – and then essentially ‘sample’ and merge them into new artworks, based on how the system interprets your text prompt.
Which is an interesting use of AI and machine learning. But what if you could take it further? What if you could also create videos in the same way?
Evidently, Meta had the capacity to find out.
We’re pleased to introduce Make-A-Video, our latest in #GenerativeAI research! With just a few words, this state-of-the-art AI system generates high-quality videos from text prompts.
Have an idea you want to see? Reply w/ your prompt using #MetaAI and we’ll share more results. pic.twitter.com/q8zjiwLBjb
— Meta AI (@MetaAI) September 29, 2022
The above clips are examples of Meta’s new Make-A-Video AI system, which enables people to turn text prompts into ‘brief, high-quality video clips’.
As explained by Meta:
“[Make-A-Video] uses images with descriptions to learn what the world looks like and how it is often described. It also uses unlabeled videos to learn how the world moves. With this data, Make-A-Video lets you bring your imagination to life by generating whimsical, one-of-a-kind videos with just a few words or lines of text.”
The process also enables users to create variations of video clips, or add motion to a static image, opening up a range of new options for video creation.
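To make that a little more concrete, here’s a minimal, purely illustrative sketch of how a text-to-video request along these lines might look in code. Meta hasn’t released a public API for Make-A-Video, so every module, class, and parameter name below is a hypothetical assumption, not the actual interface.

```python
# Purely hypothetical sketch – Meta has not released a public Make-A-Video API,
# so the class, function, and parameter names below are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ClipRequest:
    prompt: str                          # text description of the desired clip
    source_image: Optional[str] = None   # optional static image to animate
    num_variations: int = 1              # how many alternate clips to generate


def generate_clips(request: ClipRequest) -> List[str]:
    """Placeholder for a text-to-video call; would return paths to short clips."""
    # A real system would pass the prompt (and optional image) to a model that,
    # per Meta's description above, has learned appearance from captioned images
    # and motion from unlabeled video.
    raise NotImplementedError("Illustrative placeholder only")


# Example request, reusing the sloth prompt from the tweet below:
request = ClipRequest(
    prompt="A fluffy baby sloth with an orange knitted hat trying to figure out a laptop",
    num_variations=3,
)
```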
But the results that Meta’s currently touting are a little freaky.
Maybe my favorite AI generated video so far. Prompt: “A fluffy baby sloth with an orange knitted hat trying to figure out a laptop, close up, highly detailed, studio lighting, screen reflecting in its eye.mp4”
killer work @MetaAI !! pic.twitter.com/Lvlrl3rWdG
— Mike Schroepfer (@schrep) September 29, 2022
Yeah, that’s a totally normal-looking thing that won’t haunt my dreams.
Yep, absolutely cool – and absolutely does not look like a grainy shot from a found footage horror movie.
Of course, it’s early days, and the system is still evolving. But as noted, you can already see how this process could end up being used for ill purpose – to defame, dehumanize, or otherwise create frightening, offensive content based on text prompts.
Meta is working to mitigate this.
In its research notes, Meta states that:
“As a way to reduce the risk of harmful content being generated, we examined, applied, and iterated on filters to reduce the potential for harmful content to surface in videos.”
You would imagine that this would also include removing public figures from the training database – depictions of real people could rapidly become an issue – as well as restrictions on potentially offensive terms as source prompts.
But then again, given the scope of source content, there are probably many ways that this could all go wrong, with somebody, somewhere, already cooking up some previously non-existent concept that’ll break the system entirely.
That’s what’s happened with many other AI tools. For example, back in 2016, Microsoft launched a Twitter account to show off its conversational AI chatbot ‘Tay’, which could reply to people’s questions via tweet. Within 24 hours, Tay was tweeting out misogynistic, racist remarks.
The internet is a representation of humankind, good and bad, and more often than not, it’s the bad that shows up, especially when people are given a challenge, and a means to embarrass smart folk working on clever concepts, like AI research.
Which is also likely why Meta’s not releasing Make-A-Video to the public just yet.
“Our goal is to eventually make this technology available to the public, but for now we will continue to analyze, test, and trial Make-A-Video to ensure that each step of release is safe and intentional.”
That’s good, especially for Meta, which has a history of moving fast and breaking things. But it still feels questionable – it feels like this tech could be broken, and could end up being used for ill purpose.
Or maybe it’ll just be used to create freaky videos that you can share with your friends.
‘Here, Martha, is a bald sloth that looks like it’s broken out of the Dark Crystal world in order to hack into your PC and steal your shameful secrets – cool right?’
I don’t know, the technological advancement on display is amazing, but are tools like this actually beneficial overall?
There are also questions around copyright, and the artists who lose out, despite their source material powering these creations.
For brands, it could end up being a valuable option, as a means to create video content (freaky as it may be) for use in campaigns and social posts – and potentially, that usage would be legally permissible, as is currently the case with the existing crop of AI art tools.
But still, it feels like, on the whole, the net impact is maybe not positive?
Viewed from another perspective, it’s interesting to note how Meta’s looking to give creators and artists more ways to make money from their work in its apps, through new monetization processes and options, yet at the same time, these AI tools devalue artists’ content.
It’s a conflict that’s become part of Meta’s broader approach, inadvertently or not, where the overall benefit of its tools ends up blurred by competing motivations, and by its efforts to keep pushing things forward without full consideration of the impacts.
Essentially, generative AI tools like this look set to spark a new battleground over ‘fair use’, which could eventually see all of them shut down – so if you’re interested in creating your own weirdo videos, you’d want to hope that Meta makes it available soon.
For now, however, the application is only available to researchers, who can sign up to get access.
You can read more about the ‘Make-A-Video’ system here.