This will no doubt provoke the many Facebook critics who remain dead set against fact-checks.

Today, Meta has announced the launch of a new AI model that can automatically scan hundreds of thousands of website citations at once, in order to check whether they truly support the corresponding claims on a page.

Meta’s new ‘Sphere’ system can scan a Wikipedia page for the citation links within its text, then check whether the linked pages actually do reinforce the claims made in the original article.

As explained by Meta:

“[The process] calls attention to questionable citations, allowing human editors to evaluate the cases most likely to be flawed without having to sift through thousands of properly cited statements. If a citation seems irrelevant, our model will suggest a more applicable source, even pointing to the specific passage that supports the claim.”

So it’s essentially a double-checking measure for reference links, which for now is focused solely on Wikipedia pages. But it could eventually be expanded to other websites and reference links, helping to ensure more accurate information sharing with less manual work.
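To make that workflow a little more concrete, here’s a rough sketch of how such a citation-triage loop could be structured. Everything in it is hypothetical: the helper functions, the threshold, and the data shapes are placeholders for illustration, not part of Meta’s released system.

```python
# Hypothetical sketch of a citation-triage loop like the one Meta describes.
# The helpers below are naive stubs; a real system would back them with
# retrieval over a web-scale index and a trained entailment model.

FLAG_THRESHOLD = 0.5  # assumed cut-off below which a citation looks questionable

def entailment_score(claim: str, source_text: str) -> float:
    """Stub: estimate how strongly source_text supports claim."""
    return 1.0 if claim.lower() in source_text.lower() else 0.0  # naive placeholder

def suggest_alternative(claim: str) -> tuple[str, str]:
    """Stub: search an index for a better source and the supporting passage."""
    return "https://example.org/better-source", "a passage that supports the claim"

def triage_citations(citations):
    """citations: iterable of (claim, cited_url, cited_page_text) triples.

    Returns only the questionable cases, so human editors don't have to
    sift through thousands of properly cited statements.
    """
    flagged = []
    for claim, cited_url, cited_text in citations:
        score = entailment_score(claim, cited_text)
        if score >= FLAG_THRESHOLD:
            continue  # citation looks fine, no review needed
        better_url, passage = suggest_alternative(claim)
        flagged.append({
            "claim": claim,
            "current_source": cited_url,
            "support_score": score,
            "suggested_source": better_url,
            "supporting_passage": passage,
        })
    return flagged
```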

Meta says that it’s focusing on Wikipedia to begin with, because it’s one of the most referenced knowledge sources on the web.

“As the most popular encyclopedia of all time – with some 6.5 million articles – Wikipedia is the default first stop in the hunt for research information, background material, or an answer to that nagging question about pop culture. But sometimes that quick search for information comes with a nagging doubt: How do we know whether what we’re reading is accurate?”

Because Wikipedia is crowd-sourced, and always expanding, it’s becoming ever more difficult for the platform’s team of volunteers to keep up with public edits, which can lead to misinformation and confusion. You’ve likely seen this in high-profile news stories, where people will edit a Wikipedia page as a joke.

Which may be good for the memes, but it can also lessen the accuracy of Wikipedia’s info, and with so many people now relying on the site for information, that can be problematic.

Which is where this new system comes in – though it could also hold even more value as an SEO tool, by detecting broken or erroneous links and alerting web page owners to them, which could help search engines more accurately match pages to relevant queries.

An automated system that alerts site managers to problematic links could be hugely valuable in this respect, improving broader web information flow and accuracy, and facilitating a better data ecosystem.
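As a very rough illustration of that idea, the snippet below checks whether each outbound link on a page still resolves, using only Python’s standard library. It’s nowhere near Sphere’s semantic verification, and the target URL is just a placeholder, but it shows the kind of automated link audit being described.

```python
# A bare-bones outbound-link audit using only the standard library. This only
# checks that each link still resolves; it makes no attempt at anything like
# Sphere's semantic check of whether a source supports a claim.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_broken_links(page_url):
    """Return (url, error) pairs for outbound links that fail to load."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", errors="ignore")
    collector = LinkCollector()
    collector.feed(html)

    broken = []
    for link in collector.links:
        target = urljoin(page_url, link)
        if not target.startswith("http"):
            continue  # skip mailto:, javascript: and similar non-HTTP targets
        try:
            # urlopen raises HTTPError for 4xx/5xx responses and URLError for
            # DNS failures or timeouts, so any exception marks the link.
            urllib.request.urlopen(target, timeout=10)
        except Exception as err:
            broken.append((target, str(err)))
    return broken

if __name__ == "__main__":
    # example.com is a placeholder; point this at any page you manage.
    for url, problem in find_broken_links("https://example.com"):
        print(f"Broken link: {url} ({problem})")
```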

It also marks yet another significant advance in AI understanding and human-like processing of information.

“To succeed at this task, an AI model must understand the claim in question, find the corresponding passage on the cited website, and predict whether the source truly verifies the statement […] Where a person would use reasoning and common sense to evaluate a citation, our system applies natural language understanding (NLU) techniques to estimate the likelihood that a claim can be inferred from a source. In NLU, a model translates human sentences (or words, phrases, or paragraphs) into complex mathematical representations. We’ve designed our tools to compare these representations in order to determine whether one statement supports or contradicts another.”
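For a sense of how that kind of comparison works in practice, the snippet below runs an invented claim and source passage through an off-the-shelf natural language inference model (roberta-large-mnli). It’s a generic stand-in used purely for illustration, not Meta’s actual Sphere verifier.

```python
# Claim-vs-source entailment check using a publicly available NLI model as a
# stand-in for Sphere's purpose-built verifier. Example texts are invented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

source_passage = ("The bridge opened to traffic in May 1937, "
                  "six months ahead of its original schedule.")
claim = "The bridge opened in 1937."

inputs = tokenizer(source_passage, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Label order for this model: contradiction, neutral, entailment.
probs = logits.softmax(dim=-1).squeeze().tolist()
for label, prob in zip(["contradiction", "neutral", "entailment"], probs):
    print(f"{label}: {prob:.3f}")
```

A high ‘entailment’ probability suggests the passage supports the claim, while a high ‘contradiction’ score would flag the citation for human review.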

Meta says that the model, along with its dataset of 134 million web pages, has now been open-sourced to support related research projects.

It’s an interesting project, with a range of potential use cases – and as people become ever more reliant on what they read on the web, any measure that can improve the accuracy of what’s displayed can only be a positive.

You can read more about Meta’s Sphere project here.