
Defunding the disinformation money machine

The advertising industry is one of the driving funders of disinformation platforms, often unwittingly. (Eleni Afiontzi, Unsplash)

Disinformation is a profitable business, and one of the most effective ways to slow its spread is to take away the advertising money unwittingly funding it.

By Daniel J. Rogers, New York University

Disinformation is peddled for a variety of motives. The Russian government employs disinformation as part of its expansionist geopolitical aims. The Chinese Communist Party uses it to further entrench its political authority and grow its global influence. Professional influence operators act as guns for hire on behalf of powerful industries, from oil to tobacco companies. Trolls on 4chan often do it in service of pure nihilism.

But by far the most common and compelling motivation to spread online disinformation is profit.

This profit motive has led the world to a veritable disinformation crisis. The biggest global companies are those that provide the machinery to capture and monetise audience attention at scale. Today’s internet is powered by businesses that capture and profit from “clicks and eyeballs”.

Who provides the money for this machine? Sometimes, audiences are monetised through merchandise sales or solicitation of direct donations. Most often, the cash comes from advertising. Advertisers subsidise the web to the tune of over US$400 billion a year in digital ad spend. They pay into a complex ecosystem dominated by two outsized ad tech platforms — Google and Facebook (which each take a sizable commission) — and their money ultimately makes its way to content creators and publishers on the open web.


How much those publishers and content creators make is controlled mainly by the quantity and spending power of the audience they capture. And so our modern information ecosystem has become a race for eyeballs: a race won by the most salacious, infuriating, divisive, and most importantly, engaging content.

On the moneymaking side of this transaction, everyone wins. The publishers who capture an audience’s attention make money, as do the platforms that take a commission on every ad that gets placed. Nearly a quarter billion US dollars a year is estimated to go into subsidising online disinformation.

Those on the other side of this transaction lose out. Advertisers that pay money into this system end up with their brands appearing alongside unsuitable content, harming their reputation and costing them money. It impacts what people choose to buy: 51 percent of the 1,500 millennials and Gen Xers surveyed in 2020 said they were less likely to purchase from a company with an “unsafe” brand placement, and three times less likely to recommend that brand to others.

The Global Disinformation Index (GDI) is a not-for-profit seeking to balance out that equation. Advertisers were missing data on where on the web disinformation was occurring. With that information, they could avoid those platforms in their automated ad campaigns, safeguarding their brands and redirecting funds away from disinformation peddlers. A transparent, independent, neutral index of so-called “disinformation risk” on the open web was needed.

The aim presented a technical challenge: traffic on the internet is approximately distributed according to a power law, meaning a small number of high-profile websites receive a sizable fraction of the traffic. But the distribution also has a “long tail”. This means there are a large number of websites that — when taken together — also capture a large amount of traffic, even if each individual one doesn’t on its own.
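The head-versus-tail arithmetic can be made concrete with a small sketch. Assuming a Zipf-like power law (the exponent and site counts below are illustrative assumptions, not actual traffic data), the top-ranked sites capture a large share of traffic, yet the aggregate of the long tail is comparably large:

```python
# Illustrative sketch of a power-law ("Zipf-like") traffic distribution.
# The exponent and site count are assumptions for illustration only.

def zipf_shares(n_sites: int, exponent: float = 1.0) -> list[float]:
    """Traffic share of each site, ranked 1..n, under a Zipf law."""
    weights = [1.0 / (rank ** exponent) for rank in range(1, n_sites + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(100_000)
head = sum(shares[:100])    # the 100 highest-traffic sites
tail = sum(shares[1000:])   # everything past rank 1,000
print(f"Top 100 sites: {head:.0%} of traffic")          # roughly 43%
print(f"Long tail (rank > 1,000): {tail:.0%} of traffic")  # roughly 38%
```

Under these assumed parameters, the 100 biggest sites and the tens of thousands of small sites each account for a large slice of total traffic, which is why neither manual review alone nor automation alone suffices.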

For that reason, it was imperative that the disinformation risk ratings were built up using a hybrid of both human-powered assessments (in order to capture high-profile media with the requisite levels of nuance and fidelity) and large-scale automation (to maintain parity with the large number of “long tail” sites).

The human-powered portion of the methodology seeks to assess the journalistic integrity, and thus the disinformation risk, of publishers across over 20 different media markets to date. This methodology comports with the Journalism Trust Initiative, an international standards effort put forth by Reporters Without Borders. GDI assesses content and operational policies, looking for conflicts of interest, prior evidence of disinformation risk, and lapses in journalistic standards as part of its assessments.

Meanwhile, GDI’s automated systems crawl hundreds of thousands of sites, assessing millions of pieces of new content each week, identifying ones that peddle the various adversarial narratives du jour. When a particular site meets a minimum threshold, it gets flagged for additional human review.
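The flag-for-review step can be sketched as a simple thresholding rule. The function name, scoring scheme, and thresholds below are hypothetical illustrations; GDI has not published its classifiers in this detail:

```python
# Hypothetical sketch of threshold-based flagging for human review.
# All names and numeric thresholds here are illustrative assumptions.

REVIEW_THRESHOLD = 0.2  # assumed: flag a site if 20% of sampled pages match

def site_needs_review(page_scores: list[float], cutoff: float = 0.5) -> bool:
    """Flag a site for human review when the fraction of crawled pages
    whose narrative-match score is at least `cutoff` reaches
    REVIEW_THRESHOLD."""
    if not page_scores:
        return False
    flagged = sum(1 for score in page_scores if score >= cutoff)
    return flagged / len(page_scores) >= REVIEW_THRESHOLD

# Two of five sampled pages score above the cutoff, so the site is flagged.
print(site_needs_review([0.1, 0.9, 0.8, 0.2, 0.05]))  # True
```

The design point is the hybrid pipeline: cheap automated scoring narrows hundreds of thousands of sites down to a short queue where human judgment is worth its cost.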

Ultimately, all of these processes feed into data sets that GDI then provides to advertisers and digital ad platforms to prevent them from inadvertently buying or selling ads on sites trafficking in disinformation. This not only keeps advertisers’ brands safe, but also helps to funnel ad revenue away from disinformation and toward higher quality news.

The GDI was founded on three principles: transparency, independence, and neutrality. It is a global organisation that works in partnership with NGOs, governments, and commercial organisations around the world. It operates in more than 10 different languages and over 20 countries, and yet is only part of a larger ecosystem of organisations working in media literacy education, tech reform policy, counter-messaging, and platform trust and safety that are all contributing to the conversation, and to the ultimate goal of disrupting the scourge of online disinformation.

By some estimates, GDI has already cut ad revenue to disinformation purveyors by roughly half through partnerships with over a dozen major ad platforms. But there is still a long way to go to protect democracy and cut the funding to disinformation.

Dr. Daniel J. Rogers is Adjunct Assistant Professor at New York University’s Center for Global Affairs, and Co-Founder and Executive Director at the Global Disinformation Index. Dr. Rogers declared no conflicts of interest in relation to this article.

The Global Disinformation Index is funded by a range of governmental, philanthropic and commercial sources. All current funders are displayed on its website under the About section.

Originally published under Creative Commons by 360info™.


Authors
Daniel J. Rogers, New York University
Editor
Andrew Jaspan and Reece Hooker, 360info
Monash University has established and is proud to host the global headquarters for 360info. Monash University is also the host of the Asia-Pacific Hub.