
The spread of false information is jeopardising global health and security, prompting the need to find quick, cost-effective interventions.

Image: The spread of misinformation and disinformation on social media has become increasingly pervasive. (Unsplash: Robin Worrall, Unsplash Licence)


The spread of misinformation and disinformation is the most severe short-term risk facing the world, according to the World Economic Forum’s Global Risks Report 2024.

Disinformation has allegedly been weaponised by Israel, which has been accused of using fake social media accounts to lobby US lawmakers for more military funding. Misinformation and disinformation also fuelled the start of the Russo-Ukrainian war and remain potent tools used by Moscow.

During the COVID-19 pandemic, misinformation about vaccines contributed to widespread refusal to be vaccinated.

Between May 30, 2021, and September 3, 2022, the United States recorded over 230,000 avoidable deaths due to non-vaccination — more than the combined US military combat fatalities in World War One, the Korean War, the Vietnam War, the Gulf War and the “war on terror”.

Misinformation and disinformation are increasingly pervasive, but the problem can be tackled cost-effectively without compromising freedom of speech.

The difference between misinformation and disinformation is intent.

The term ‘disinformation’ refers to information that was deliberately designed to deceive, whereas ‘misinformation’ is agnostic as to the intent of the sender.

A recent poll revealed that 71 per cent of the 3,000 US adults surveyed support limiting false information on social media, particularly regarding elections.

However, the most popular method for doing this, fact-checking, is too slow and costly to be effective. It requires certainty before action can be taken, such as removing a post deemed false or reducing its visibility.

But there is another way this problem can be addressed without infringing on freedom of speech or turning social media companies into the arbiters of truth.

Responsibility can be shifted to social media users through a process called ‘self-certification’.

When a user attempts to share a post, its likely veracity is assessed — either by volunteers, or more realistically, by AI agents such as large language models.

If the post is flagged as potentially false, the user is asked to certify that they believe it to be true before being allowed to share it.

If they choose to self-certify the post, it is shared immediately.

This system allows users to share any post they believe to be true and only requires certification for posts with questionable accuracy. Obviously true posts do not need to be certified.
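As a rough illustration only, the sharing gate described above could be sketched in code as follows. Every name here is hypothetical, and the veracity check is a trivial stand-in for the volunteer raters or large language models the article mentions.

```python
# Minimal sketch of the self-certification gate, assuming a hypothetical
# platform API. The veracity check is a toy placeholder: a real system
# would query an LLM or a pool of volunteer raters.

def flagged_as_potentially_false(post: str) -> bool:
    # Placeholder check standing in for an AI or volunteer assessment.
    suspect_phrases = ("miracle cure", "they don't want you to know")
    return any(phrase in post.lower() for phrase in suspect_phrases)

def user_certifies_as_true(post: str) -> bool:
    # Placeholder for the certification prompt, shown only for flagged posts.
    answer = input(f'Do you certify that you believe this post is true?\n"{post}"\n[y/N] ')
    return answer.strip().lower() == "y"

def try_share(post: str) -> bool:
    """Share unless the post is flagged and the user declines to certify."""
    if not flagged_as_potentially_false(post):
        return True  # Not flagged: shared immediately, no prompt.
    return user_certifies_as_true(post)  # Flagged: shared only if certified.

if __name__ == "__main__":
    shared = try_share("This miracle cure works overnight!")
    print("Post shared." if shared else "Post not shared.")
```

The key design point is that the gate never blocks anyone outright: a flagged post is always shareable, provided the user is willing to vouch for it.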

Most people are fundamentally honest and do not want to spread false information, or at least they are not willing to lie in order to do so.

This simple nudge — asking users to certify the truthfulness of potentially false posts — has proven highly effective.

A recent study found that self-certification reduced the sharing of false information by about half.

Without the intervention, 49 per cent of the false posts were shared; with it, only 25 per cent were.

Further analysis showed that users were willing to share posts they had indicated were false, but most were not willing to lie by certifying a post they believed to be false as true just so they could share it.

This is why self-certification was so effective.

Since users are never prevented from sharing content they believe to be true, the occasional unnecessary prompt on an accurate post is acceptable. This tolerance for error means the flagging can be automated, making the intervention faster and cheaper.

Harmful content isn’t just limited to outright lies: it includes exaggerated information, content taken out of context and hate speech.

Self-certification could be adapted to address these issues.

Content that is potentially exaggerated, taken out of context or otherwise inappropriate can be flagged, with the user required to certify that none of these concerns apply before being allowed to share it.
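A hypothetical sketch of that extension, building on the earlier one: the check now returns the categories of concern, and the certification prompt is assembled from them. The categories and trigger conditions here are illustrative, not drawn from the study.

```python
# Hypothetical extension: the check returns a list of concern categories
# rather than a single flag, and the user certifies that none apply.

def flagged_concerns(post: str) -> list[str]:
    # Toy stand-in: a real classifier would score each category separately.
    concerns = []
    if "miracle cure" in post.lower():
        concerns.append("false")
    if post.isupper():
        concerns.append("exaggerated")
    return concerns

def try_share(post: str) -> bool:
    """Share unless concerns are flagged and the user declines to certify."""
    concerns = flagged_concerns(post)
    if not concerns:
        return True  # Nothing flagged: shared immediately.
    prompt = (f'Do you certify that this post is not {" or ".join(concerns)}?\n'
              f'"{post}"\n[y/N] ')
    return input(prompt).strip().lower() == "y"
```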

Despite growing concerns over misinformation and disinformation, self-certification offers a straightforward and cost-effective way to curb their spread.

It leverages existing technology, empowers users, and preserves freedom of speech by ensuring that people can share any information they genuinely believe to be true.

Dr Piers Howe is an associate professor in the Melbourne School of Psychological Sciences at the University of Melbourne, Australia. His research focuses on reducing the spread and mitigating the harms of misinformation and disinformation, so as to protect democracy.

Originally published under Creative Commons by 360info™.
