After developing the system in partnership with the Department of Justice for more than a year, Meta has now launched the first stage of live deployment of its Variance Reduction System (VRS) for housing ads, which is designed to reduce bias and increase the equitable distribution of ads across Meta’s apps.

As explained in this overview, Meta’s VRS measures the actual audience reach of each ad and works to ensure a broader spread of exposure across demographic segments.

As explained by Meta:

“The VRS uses new machine learning technology in ad delivery so that the actual audience that sees an ad more closely reflects the eligible target audience for that ad. After the ad has been shown to a large enough group of people, the VRS measures aggregate demographic distribution of those who have seen the ad to understand how that audience compares with the demographic distribution of the eligible target audience selected by the advertiser.”

In essence, the system ensures that Meta’s ad targeting AI isn’t limiting housing ads to certain ethnic or socioeconomic groups, by measuring overall ad exposure and matching it against audience data based on US Census statistics on race and ethnicity.
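To make that concrete, here’s a minimal Python sketch of the kind of aggregate comparison Meta describes: compute each demographic group’s share of the eligible audience and of the delivered audience, then check the largest gap between them. The group names, counts, and 5% tolerance below are all hypothetical – Meta hasn’t published VRS’s actual metrics or thresholds.

```python
# Hypothetical sketch of a VRS-style aggregate measurement; all names,
# counts, and the 5% tolerance are illustrative, not Meta's real values.

def demographic_shares(counts: dict[str, int]) -> dict[str, float]:
    """Convert raw per-group counts into proportions of the whole."""
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def max_share_gap(eligible: dict[str, int], delivered: dict[str, int]) -> float:
    """Largest absolute gap between a group's share of the eligible
    audience and its share of the delivered audience (0.0 = perfect match)."""
    e, d = demographic_shares(eligible), demographic_shares(delivered)
    return max(abs(e[g] - d.get(g, 0.0)) for g in e)

# Illustrative aggregate counts (not real data).
eligible_audience = {"group_a": 5_000, "group_b": 3_000, "group_c": 2_000}
delivered_audience = {"group_a": 700, "group_b": 200, "group_c": 100}

gap = max_share_gap(eligible_audience, delivered_audience)
if gap > 0.05:  # example tolerance only
    print(f"Delivery skewed by {gap:.1%}; shift delivery toward under-served groups")
```

In this toy example, group_a makes up 50% of the eligible audience but 70% of those who actually saw the ad, so the system would nudge delivery back toward the under-served groups.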

“This method is built with added privacy enhancements including differential privacy, a technique that can help protect against re-identification of individuals within aggregated datasets.”
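The differential privacy technique Meta references can be sketched in a few lines: calibrated random noise is added to aggregate counts before they’re used, so no individual’s presence can be inferred from the released figures. The snippet below shows the standard Laplace mechanism for counting queries – a textbook illustration, not Meta’s actual implementation, and the epsilon values are made up.

```python
# Textbook Laplace mechanism for a counting query (sensitivity 1);
# not Meta's implementation, and the epsilon values are illustrative.
import math
import random

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace(0, 1/epsilon) noise added,
    drawn via the standard inverse-transform formula."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Aggregate demographic counts are noised before any comparison is made.
raw_counts = {"group_a": 700, "group_b": 200, "group_c": 100}
private_counts = {g: noisy_count(n, epsilon=0.5) for g, n in raw_counts.items()}
print(private_counts)
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee, at the cost of accuracy in the released counts – which is why the approach suits large aggregates like ad exposure statistics, where the noise washes out.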

The system is the latest of Meta’s efforts to address regulatory concerns about its ad targeting tools and the potential for exclusion based on the thousands of targeting factors available in its apps.

Meta has come under heavy scrutiny over its granular ad targeting options in the past.

Back in 2017, an investigation by ProPublica found that advertisers were able to create Facebook ads that excluded people based on sensitive attributes – exclusions that are prohibited by federal law in housing and employment advertising.

Meta subsequently removed various ad targeting options for housing, employment, and credit ads in 2019, and the VRS process is the next stage in its ongoing work on this front. And while its ad targeting systems overall have also been impacted by changes to data collection and tracking, the new measures will provide more assurance that its ads are not facilitating indirect profiling of certain groups, by broadening exposure to all audience segments.

Ensuring fairness in AI is a complex challenge, especially when you consider that the inputs used to train AI models often already carry some degree of implicit bias. Every platform is working to mitigate this, building new weightings to filter out unconscious and unintended impacts. But it will take time to measure the cause and effect of each update, and Meta is now monitoring the first stage of this roll-out, gathering performance feedback to refine its VRS models.
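As a rough illustration of the reweighting idea (a common approach in the fairness literature, not Meta’s specific method), training examples can be weighted by the inverse frequency of their group, so that over-represented groups don’t dominate a model’s loss:

```python
# Illustrative inverse-frequency reweighting; group labels and data are
# made up, and this is a generic fairness technique, not Meta's method.
from collections import Counter

samples = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
counts = Counter(samples)
n_groups, total = len(counts), len(samples)

# Each group contributes equally to the loss in aggregate.
weights = {g: total / (n_groups * n) for g, n in counts.items()}
print(weights)  # group_c samples end up weighted 16x more than group_a's
```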

There’s a lot to it – you can read more about the technical considerations at play in this whitepaper, which outlines how VRS works and the various elements it’s working to balance.

“The field of fairness in machine learning is a dynamic and evolving one, and the changes described in this paper represent several years of progress in consultation with a broad array of stakeholders. Much of this work is unprecedented in the advertising industry and represents a significant technological advancement for how machine learning is responsibly used to deliver personalized ads. We are excited to pioneer this effort, and we hope that by sharing key context and details about how we are tackling this multidimensional challenge that other AI and digital advertising practitioners can more easily adopt and take similar steps to help prevent discrimination and avoid amplifying societal biases whose impact extends far beyond any one platform.”

Meta says that the initial stage of the roll-out of VRS will focus on housing ads in the US, with credit and employment ads to follow.