There are high expectations that AI can help solve complex issues around sustainability and climate, but we must beware the pitfalls.
The prospect of enlisting artificial intelligence to help solve complex issues surrounding sustainability and climate change is generating plenty of hype.
With the Australian Government AI Expert Group due to wrap up its activities on June 30, there is a need to understand whether AI in practice is all it’s cracked up to be.
Governments and organisations are creating AI strategies, pushing for increased AI adoption, and setting high expectations for what AI can do. But recent research shows that most of these expectations are not yet being met.
There is a need to better understand the realistic potential of AI in addressing sustainability challenges.
Our research suggests four ways to bridge these gaps between expectations and reality at the AI-policy interface.
While there is immense hope in the promise of AI for addressing complex sustainability challenges, this research shows there’s a significant gap between expectations and real-world applications.
AI has not yet reached the technological tipping point of being able to solve our sustainable development policy challenges, and the multitude of human factors at play needs to be identified and addressed as much as the technology’s potential.
A key issue is the gap between perceptions of AI’s potential and where the technology actually stands today.
These inflated perceptions have fuelled a wide-ranging discussion about AI’s possible applications, creating a disconnect between what AI can do now and what social and environmental needs demand.
Academic researchers in AI are highly optimistic about its promise for addressing sustainability policy challenges, but they are often not across the intricacies of policy development.
This means they may not fully understand if and how AI technologies could support decision-making.
Many also recognise the myriad ways AI could exacerbate sustainability issues if not handled with due consideration.
Governments worldwide are creating national AI strategies and policy frameworks. Part of the motivation is that they see AI as beneficial for decision-making and for addressing complex public policy issues, such as sustainability.
But it also stems from a fear of being left behind in the next “space race”.
Despite high expectations for government use of AI, there is very limited evidence of actual implementation. The use cases that have made it into the public sphere are often mired in scandal.
Consider the (mis)use of algorithms to detect and (falsely) accuse people of welfare fraud in Australia (Robodebt) and the Netherlands (SyRI), or the widespread use of facial recognition technologies (FRT) in policing and security contexts across the United States and the UK, which has drawn accusations of bias and racialisation, as well as other rights abuses.
Consultants and think tanks play a major role in shaping the narrative around the promise of AI, pushing for accelerated adoption to avoid getting stuck in “pilot purgatory”, where small-scale use never delivers impact.
However, while “scaling up” could bring greater benefits and positive outcomes, real-world policy problems are complicated, and ensuring a tool “works” across multiple contexts takes time.
Our research offers four practical recommendations for translating AI’s promise into practice:
- Document and evaluate: Rigorously document and evaluate AI applications in real-world settings to understand their true impact and effectiveness.
- Focus on mature technologies: Prioritise AI tools that are proven and reliable over speculative, experimental technologies.
- Start with the problem: Clearly define the policy problem first, then select the most suitable AI technology to address it, rather than assuming AI is always the answer.
- Adapt to complexity: Keep AI solutions flexible and continuously evaluate them so they can handle the dynamic and multifaceted nature of sustainability issues.
The journey from AI’s promise to practical implementation is still unfolding. Bridging the gap between AI’s potential and its practical use in policymaking is crucial for realising its benefits while mitigating risks.
By focusing on real-world evidence and thoughtful application, we can harness that potential without falling for the hype.
Dr Mitzi Bolton is a Senior Research Fellow at the Monash Sustainable Development Institute and Academic Advisor to the ANZSOG National Regulator Community of Practice (CoP). Her research is driven by her 12 years in the public sector, where she held an array of leadership and policy roles. She is particularly interested in how to connect science and policy to assist the transition to more sustainable futures. Her research explores whether and how AI might be applied in that transition.
Ruby O’Connor is a PhD Candidate at Monash University, based in the Department of Politics. Ruby also has co-supervision from the Monash Sustainable Development Institute and the Emerging Technologies Lab, and funding through the Monash Data Futures Institute. Her interdisciplinary research examines government use of Artificial Intelligence and the effects of policy and technology narratives on society.
Tom Chan is a PhD candidate in the School of Cybernetics, College of Engineering, Computing & Cybernetics at the Australian National University, and a former consultant and senior public servant in the economic analysis of law and public policy. Tom is exploring issues around the emergence of greener, smarter, cyber-physical buildings and the role of transition intermediaries in sensing, shaping, steering and scaling these critical systems for people, planet and profit.
Dr Alexander Saeri applies social and behavioural science principles to address some of the world’s most pressing problems, including the governance of artificial intelligence. Alexander is currently collaborating with MIT FutureTech and The University of Queensland to synthesise the research evidence on risks associated with artificial intelligence and to assess leading AI developers’ and other global organisations’ responses and actions to address these risks. Alex holds an adjunct role at BehaviourWorks Australia.
Originally published under Creative Commons by 360info™.