
After legislative stumbles on fake news, the Malaysian government needs to take on an even more complex challenge.

The emergence of generative AI brings a new level of complexity to the challenge of fighting false information in the media, which Malaysia has grappled with for years. Image: Mojahid Mottakin (Unsplash), Unsplash License

The rapid advancement of generative AI has sparked concerns about its potential to fuel the spread of misinformation.

Since OpenAI launched ChatGPT in November 2022, a slew of generative AI platforms has emerged, including GPT-4 and similar tools such as Google's Bard and, most recently, Elon Musk's Grok.

Generative AI's ability to create convincing content blurs the line between human-made and machine-generated material, highlighting the importance of restoring trust in news production.

Current global discussions about AI governance and standards for trustworthy and responsible AI intersect with concerns about preserving journalistic integrity and ensuring the reliability of information accessible to the public.

These powerful technologies bring a new level of complexity to the challenge of fighting false information in the media, which Malaysia has grappled with for years.

Malaysia is considering regulating AI applications and platforms, covering crucial aspects such as data privacy and public awareness of AI use. Such legislation is not meant to hinder the progress of AI technology; it is about balancing risk management with fostering innovation to ensure AI's continued positive impact on the economy and society.

The Malaysian government enacted the Anti-Fake News Act 2018 to combat the rise of fake news, but critics said the law was designed to stifle dissent ahead of that year's general election. It was repealed after a year.

Then, at the height of the COVID-19 pandemic, a new law to tackle fake news was introduced. The Emergency (Essential Powers) (No. 2) Ordinance 2021 came into force in March 2021, with the stated aim of countering misinformation about COVID-19 and emergency lockdown orders, but it was revoked later that year.

Malaysia introduced the National Artificial Intelligence Roadmap 2021-2025, emphasising AI governance. The first iteration of its principles for responsible AI comprises seven points: fairness; reliability, safety and control; privacy and security; inclusiveness; the pursuit of human benefit and happiness; accountability; and transparency. The document suggests continuous updates, aligning with the Federal Constitution and Rukun Negara (National Principles).

The aim is for developers and deployers of AI tools to apply these principles when, for example, training systems on large datasets, to avoid bias and to ensure that the predictive outcomes of AI systems do not clash with the values of the constitution and the Rukun Negara.

Work on AI governance mechanisms is planned to begin in 2024, particularly on a responsible AI ethical framework addressing generative AI tools in news production. The guidelines will extend to sectors such as government agencies, traditional media, online portals and social media, emphasising education and awareness of AI's ethical implications in journalism.

OpenAI's DALL·E and Midjourney use AI to generate art and visuals, and generative models more broadly can produce the text and images used in developing news content. This is not to say that news organisations did not use AI tools before generative models and chatbots arrived: the Associated Press, for instance, has used AI for news gathering, news production and news distribution.

The JournalismAI report, published in 2019, highlighted ethical issues around using AI tools in journalism. Since then, several news outlets, such as Wired and The Guardian, have published their own guidelines on using AI tools to produce content.

They cite a variety of risks that may have harmful consequences for readers — including the generation of inaccurate, fabricated, outdated or offensive content. These risks can be exacerbated when generative AI tools are used by non-media entities. There have been concerns in Europe and the US about the use of deepfakes to threaten democratic processes.

The risk of misinformation and its consequences for society also raises the question of accountability. The need to put up guardrails takes centre stage in the debate, notably as generative AI models become more human-like, such as Grok, which Musk claims can respond with humour.

At this early stage of developing AI governance, developers, governments and regulators, media entities and civil society all need to be involved. Developers such as OpenAI are aware of the potential for disinformation and are investigating how large language models could be misused for disinformation purposes, as well as steps to mitigate the risks.

Unfortunately, there have been cases of abuse, such as an anonymous user mass-producing AI-generated disinformation.

Governments and industry leaders have a role to play in addressing the risks of generative AI. The AI Safety Summit hosted by the UK government in November 2023 recognised the need to test the safety of AI tools, but participants also cautioned that over-regulation may stifle AI's growth.

Significantly, 28 nations, including the UK, China and the US, agreed to the Bletchley Declaration on AI Safety, with both the UK and the US announcing the establishment of AI Safety Institutes. The signatories shared common views of the transformative potential of AI while noting its ability to amplify threats, such as disinformation, hence the need for AI to be designed, developed and deployed based on a set of standards.

Generative AI tools that produce misinformation leading to harm, such as discrimination resulting from bias, or speech that harms groups on the basis of race, ethnicity, gender or disability, could fall foul of the impending EU AI Act, which would be the first legislation of its kind.

As policymakers and legislators consider the next step, there is some disagreement on whether generative AI will lead to an onslaught of misinformation. Researchers argue that “current concerns about the effects of generative AI on the misinformation landscape are overblown”.

While contrary views emerge on this subject, there is a growing body of guidance for journalists using AI in all aspects of their work, including minimising the risk of misinformation. Examples include the World Association of News Publishers' Global Principles for Artificial Intelligence and, most recently, Reporters Without Borders' Paris Charter on AI and Journalism.

As journalists train themselves in using AI tools responsibly, one value stands firm — the respect for freedom of speech and expression and the right to information.

The trustworthiness of news is the cornerstone of these rights. To keep it that way, mainstream media outlets and journalists in Malaysia must consider new guardrails as this new era of AI-assisted news production takes shape.

Jaspal Kaur Sadhu Singh is a senior lecturer in law at Canterbury Christ Church University, UK. She specialises in technology law, with a focus on freedom of expression, and in AI law and ethics.

Originally published under Creative Commons by 360info™.
