Meta’s implementing new restrictions on the use of its generative AI features for ads, with political advertisers now banned from using the new tools, including background generation and image expansion, due to concerns that they could contribute to the spread of misinformation.
Various analysts have raised concerns about a coming onslaught of AI-generated misinformation, with several political campaigns already using AI-created images to sway voters. A recent campaign by U.S. presidential candidate Ron DeSantis, for example, used an AI-generated image of Donald Trump hugging Anthony Fauci, while another push featured a voice simulation of Trump.
With things trending in a concerning direction, Meta has now added the following explainer to all of its Help Center articles relating to its new AI ad tools:
“As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features. We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries.”
As reported by Reuters, Meta’s implementing the new restrictions to ensure that it has adequate measures in place to address potential misuse of AI tools directly linked to paid promotions. X owner Elon Musk has issued similar warnings: that the coming wave of generative AI will also usher in a new era of bot peddlers and misinformation, driven by increasingly convincing video and image fakes that are easier than ever to produce.
Meta also recently implemented restrictions to stop users from creating images of public figures via its new AI assistant tools.
It’s a key concern, especially in political campaigns, where a timely push, whether real or not, can significantly sway voters heading to the polls. Various elections have been decided in the final days of a campaign, and AI-generated fakes look set to become a potent weapon in the next stage of political manipulation, whether we’re ready for them or not.
The challenge, then, is that many of these pushes are only likely to be uncovered in retrospect, which could mean that they’ve already had their desired impact by the time the truth is revealed.
Ideally, we can get ahead of that, which is why Meta’s ban on the use of its generative AI tools in political ads makes a lot of sense.