
Fake images and social media posts created by bots could erode trust in democratic systems and communities, as the US is finding out.

An AI-generated picture of Donald Trump hugging a duck and a cat. Image: AI-generated, public domain.


With less than a week until the most divisive and hotly contested US presidential election in recent memory, a new threat has emerged.

Artificial intelligence is contributing to the flood of misinformation around the candidates and the issues.

While research in this area is limited, fabricated images that “expose” scandals or laud candidates for heroic, if fictional, acts can raise public fears of information manipulation just when the public needs reliable information the most.

The speed at which false information about the recent hurricanes spread in the United States shows just how quickly these misinformation frenzies can take hold, with Donald Trump repeating false claims that hurricane relief funds had been spent on “housing for illegal migrants.”

Repeating false claims generated online is just one avenue. These tools are being used in many other ways to flood our information spaces without our knowledge.

Seemingly authentic posts expressing apparent concern or consensus about specific candidates are filling social media. These posts can be generated and published at incredible speed, either directly or through bots.

At the same time, many of us are taking these tools into our own hands to create funny memes that lampoon the other side, or to help us quickly write persuasive texts.

Increasingly, people are also using generative AI as a more accessible kind of search engine, one that delivers answers rather than a list of results, regardless of their accuracy.

If there is a risk to AI, it appears to lead back to the humans that use it.

In our hands or others’, the tools are pressing into our election conversations — and potentially our election decisions — in ways that invite risks for how our societies function.

The threat of others’ trust

Talk of AI interference in elections usually points to problems like deepfakes or bots, and each of these certainly carries worrying implications.

There is a widespread concern that the recent election in Slovakia — which saw the elevation of a Putin-friendly politician — was tainted by a deepfake falsely depicting his opposition tampering with voting processes.

Bots have also been detected automatically generating political content across X for several years, potentially contributing further to why the news we get from social media platforms is ranked lowest for trustworthiness.

This presents a contradiction. Despite the low levels of trust, our attention to social media news has grown, and it is now the most common source of news for millions worldwide.

Whatever value we invest in the trustworthiness of news media, untrustworthiness demonstrably isn’t a deal breaker.

This is reflected in our approaches to misinformation and AI.

Research has indicated that many people will knowingly spread misinformation if it aligns with their views or paints them in a particular light.

While claims of “fake news” might conjure images of deeply invested conspiracy theorists, its spread and reception are often down to simple inattention or its presence on a preferred news source.

Our views on AI — and its related potential for misinformation — appear to follow similar paths.

Would-be voters are often interested in the opportunities it affords their own political side while raising concerns about how it empowers their opponents. Memes made with image generators lionise heroes or diminish the opposition.

Part of this inconsistency between embracing and decrying AI and misinformation could be traced back to what some researchers call the third person effect: “I would never fall for it, but I am worried other people will.”

However, when engaged directly, people self-report some difficulty in identifying the authenticity and accuracy of content, with some platforms presenting a more significant challenge than others.

The challenge this brings to elections is that we increasingly live in a ‘mediatised’ society — more and more of our understanding of the world is grounded in the media we see rather than in our interpersonal conversations or lived experiences.

This is a kind of superpower of the modern age: we can now ‘know’ the status of countries we have never seen, take sides and even take part in arguments between people we’ve never met, even imagine the world from space.

The cost of this is that the public is increasingly isolated from communities and families. Even the internet is fragmenting.

These forces collide in dangerous ways with AI and its potential for misinformation.

Part of the reason for the polarising divide in countries such as the US is misperception — a misunderstanding of what the other side does or supports.

This misperception engenders dislike and even disgust — and ultimately a societally concerning distrust, regardless of how similar policy positions actually are.

Deepfake and other AI content can erode our trust in society and each other.

Shining a light on AI and misinformation

What is absolutely crucial is that we rebuild our systems of trust so we can work together as a society. The internet brims with limitless information and misinformation, and we need a way to navigate it to keep our society functioning.

The challenge is in how this reality reaches us and whether we can tell information from misinformation when it does.

New ways to combat this include AI literacy: knowing that AI is out there, what it does, how to use it and what its ethical risks are.

This can help inform our information gathering and spreading practices — and who we choose to let into our information orbit. It does, however, place an imposition on a time-starved public, who never asked to be misled in the first place.

Some are proposing that the social media platforms or the AI creators themselves be held responsible for ensuring their products mitigate harms, and these proposals could well be having some effect.

Already, AI companies are promoting accuracy as a crucial selling point for their products, and bills are being proposed to govern their development — with mixed success.

The bigger and more crucial task is to collectively consider how we can empower and invest trust in the public institutions we rely on to provide us with salient and valuable information.

This could include re-investing in sustainable and diverse media, or building structures that raise our trust in our systems of democratic governance while also ensuring they are accountable for being trustworthy.

With greater awareness of AI and more vigorous approaches to information integrity, we could well find that AI has become a boon — but it is unlikely to happen by accident.

Dr Timothy Koskie is a post-doctoral associate for the Mediated Trust project at the Media and Communications School at the University of Sydney.

Originally published under Creative Commons by 360info™.
