
A partnership with Microsoft will trial the use of AI within the Australian Public Service, even while safety policies are still being developed.

Australia has a set of voluntary AI ethics principles, but no AI-specific laws. (Image: Caroline Jones via Flickr, https://flic.kr/p/kQTyC8, CC-BY-2.0)


The Australian Public Service will become one of the first public services in the world to adopt generative AI when it trials Microsoft’s 365 Copilot software next year.

When Prime Minister Anthony Albanese announced the six-month trial in November, he said the move would “enhance” the work of public servants without compromising on safety.

With the public service about to get retooled with advanced AI, assuring Australians that the “enhancement” is worthwhile will require identifying and managing the safety risks.

Australia might claim to be at the forefront of integrating AI tools in the public sector, but it has no binding legal or policy framework around their use. Other governments do, which could help Australian policymakers navigate this new territory.

The launch of OpenAI’s ChatGPT just over a year ago – the first such tool released for free to the public – caused alarm in many circles. ChatGPT uses a type of machine learning known as a large language model (LLM). Large language models are ‘trained’ with massive data inputs, including books, articles and websites, to respond to natural language queries. What goes in to train the machine determines what will come out in response to those queries.

One reason for the alarm over ChatGPT is that user prompts are also used to further train the large language model. This information lives in OpenAI’s servers, where others – including malicious actors – could potentially retrieve the data.

This design problem has made businesses worry that employees might unintentionally leak sensitive company data or confidential information by interacting with ChatGPT. Such a leak has already happened at Samsung.

The 365 Copilot software is based on machine-learning architecture similar to ChatGPT’s. Although Microsoft promises the software “is not trained on your organizational data” – meaning it won’t eat up and spit out users’ search queries – other AI safety concerns remain.

Perhaps most importantly, AI based on large language models has a ‘hallucination problem’ – a tendency to present fictional statements confidently as fact. It means the AI tool might inject inaccuracies or misinformation when used for research. Some AI scientists say this is not a bug, but the central feature of this type of AI.

Observers have also argued that widespread reliance on generative AI, including large language models, might lead to further erosion of public trust in official or scientific information. There is also concern over the potential loss of critical thinking, where flawed information is accepted as ‘good enough’, or even taken as gospel.

Concerns about the implications for academic integrity have already led to bans on ChatGPT in most Australian public schools. Teachers worry that students might be relying on AI to complete take-home exams despite the ban, but it is hard to tell AI-generated work from a student’s own.

Addressing these issues at a policy level – enforcing responsible AI – partly depends on having adequate and effective rules governing relevant actors like corporations.

In Australia, this means that AI ethics and – increasingly – soft and hard law are seen as necessary to address the risks AI could pose. Many organisations have devised AI ethics principles to guide engineers and scientists developing AI tools and systems, as well as the companies and organisations deploying them.

Before the arrival of ChatGPT, the Australian government established a set of voluntary AI ethics principles aiming to ensure AI is safe, secure and reliable. It is not known how widely these principles have been applied or how effective they have been.

Only New South Wales has a government review committee, established under its AI assurance framework, with the power to require compliance by companies selling AI products to government or contractors using AI applications.

But beyond the ethics principles, Australia has no AI-specific law comparable to the EU’s AI Act or China’s detailed regulations. Instead, regulatory authorities draw on laws that apply to all existing technologies, such as the Privacy Act 1988, to govern AI on a case-by-case basis.

The appearance of highly capable and multi-purpose generative AI prompted a government review earlier this year to determine if existing measures, largely based on self-regulation by AI developers and deployers, needed to be beefed up.

In June 2023, the Albanese government said it was considering a ban on “high-risk” AI and automated decision-making in its AI governance toolkit. Europe’s AI Act, for example, provides for prohibition in the case of certain “unacceptable” uses of AI.

As the Microsoft 365 Copilot roll-out confirmed, there is no ban in Australia on the use of AI to support government functions. Instead, through its interim guidelines, the federal government has encouraged government employees to exercise caution using generative AI.

These guidelines are not mandatory and merely supplement any guidance developed by individual agencies, which remain free to adopt their own. The guidelines are being reviewed by a task force assigned to develop a whole-of-government approach to AI by March 2024.

Recent international developments might help guide Australia’s approach to AI integration. Among these are the US President’s Executive Order on AI and the AI Safety Summit held in November in the UK, which the US, the EU, China and Australia all attended.

While President Joe Biden’s executive order is mainly addressed to US government agencies, it effectively creates new reporting duties for companies developing advanced AI systems. These include large language models and generative AI systems that exceed the capabilities of GPT-4, the model powering Bing. The order also imposes the same reporting duties on companies or individuals that acquire computing power beyond a certain threshold.

Such a policy is an example of a government using its market power as a buyer and user of AI products, as well as its administrative power as a regulator of infrastructure, to set rules and requirements for companies.

The recognition of a variety of risks – including risks whose trajectories are unknown, such as the displacement of workers, and risks that are potentially catastrophic – is a notable aspect of international policy developments around AI. The approach towards the deployment of AI systems has, by necessity, become more cautious.

Biden’s executive order acknowledges the known risk of discrimination in automated decision-making caused by embedded bias in historical data used for training AI models. Corrective measures to promote fairness in AI models have been devised.

It also acknowledges the risks of fraud, disinformation and displacement of workers.

And perhaps most critically, it recognises potential threats to national security – such as the chance that advanced AI systems could help produce or access biological or chemical weapons.

Like Biden’s executive order, the AI Safety Summit recognised the risk of catastrophic harm from advanced AI and stressed the need for governments to develop the capacity for safety testing against these risks.

Dr. Jayson Lamchek is a Research Fellow at Deakin University Law School and the Cyber Security Cooperative Research Centre. He researches human rights, artificial intelligence and cybersecurity.

The work has been supported by the Cyber Security Research Centre Limited, whose activities are partially funded by the Australian Government’s Cooperative Research Centres Programme.

Originally published under Creative Commons by 360info™.
