
When governments use AI to predict what the people want

Using artificial intelligence to predict the behaviour of communities can veer close to surveillance. : Dorieo, Wikimedia CC 4.0

Governments can and do use artificial intelligence to direct their citizens and their policy. But are we prepared for how far it could go?

Governments have access to large amounts of data which they can — and often do — use to analyse and predict their citizens’ behaviours using artificial intelligence (AI) strategies.

However, while AI can help policy-makers by delivering highly accurate predictions, identifying trends and patterns, modelling complex associations and improving efficiency, it may also introduce risks to citizens’ privacy and security and threaten free decision-making in society.

Researchers from three universities in Spain explored these risks in a study which surveyed government officials about their institutions’ use of artificial intelligence. One councillor said that AI had helped his town predict outcomes to assist in making better decisions during the COVID-19 pandemic. “The use of artificial intelligence to predict possible infections and deaths has been used with statistical models. These models have helped us to both improve health care and the movement of people in cities when a lockdown has been necessary,” the councillor said. However, the same official also noted: “the use of applications to track the location of user devices, although always anonymously, has highlighted the need to regulate the use of both artificial intelligence technology and other similar technologies.”

Another Spanish politician who was interviewed said: “We use artificial intelligence to predict possible criminal acts in the city. When artificial intelligence and our analyses tell us that there is a neighbourhood where serious crimes, such as murder, can be committed, we increase the number of police patrols in those neighbourhoods.”
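The workflow the politician describes amounts to ranking neighbourhoods by recent incident data and directing patrols to the highest-risk areas. A minimal sketch of that idea, with all neighbourhood names and figures invented for illustration (real systems use far richer models than a simple count):

```python
# Hypothetical sketch of the patrol-allocation idea described above:
# score each neighbourhood by its recent count of serious incidents and
# flag the highest-risk areas for extra patrols. All data are invented.
from collections import Counter

def patrol_priorities(incidents, top_n=2):
    """Rank neighbourhoods by serious-incident count, highest first."""
    counts = Counter(incidents)
    return [area for area, _ in counts.most_common(top_n)]

# Invented example data: one entry per recorded serious incident.
recent_incidents = ["Centro", "Norte", "Centro", "Sur", "Centro", "Norte"]
print(patrol_priorities(recent_incidents))  # → ['Centro', 'Norte']
```

Even this toy example shows why such systems raise concerns: the ranking simply reproduces whatever patterns, and biases, exist in the historical incident data.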

The recent exponential growth in the use of AI has seen the new field of behavioural data science emerge. It combines techniques from the behavioural sciences, psychology, sociology, economics and business with processes from computer science, data-centric engineering, statistical modelling, information science and mathematics to understand and predict human behaviour using AI.

While this predictive power can be deployed to better design and implement policy, as the first councillor noted, privacy concerns are growing. As more data is obtained from citizens, predictions may soon become as effective as direct observation, raising concerns about state surveillance. Governments with this kind of intelligence risk breaching privacy and impeding free decision-making in society.

Illicit use of such technology can be applied to modify citizens’ behaviour, including influencing election outcomes. For example, US Facebook users’ behavioural data was analysed using behavioural prediction algorithms developed by Cambridge Analytica, and used to target voters in the 2016 US presidential campaign between Donald Trump and Hillary Clinton.

Many questions remain around the risks to citizens’ privacy posed by government use of AI and behavioural data science. These include: the ethics of collecting and analysing data generated non-intentionally by citizens; how the outputs obtained by government from such data analysis should be explained to citizens; and whether (and in what ways) such analysis may violate people’s privacy.

Governments can better meet the UN’s Sustainable Development Goal of effective, accountable and responsive institutions if they use AI to improve services to citizens and society, and adopt ethical principles and values to ensure the privacy of citizens. Solutions could include developing legislation on AI and behavioural data science to limit potential unethical uses and prevent illegitimate or unlawful applications of this technology. Effective government practice and policy will help citizens place more trust in the use of AI, behavioural data science and the mass analysis of collective behaviour and intelligence.

In today’s global culture, where the internet is the main tool of communication, data and decisions based on behavioural analysis have become essential for public actors. However, with legislation often one step behind technology, many societies are currently under-prepared for this inevitable future.

Jose Ramon Saura is associate professor of Digital Marketing at Rey Juan Carlos University in Spain. His research explores theoretical and practical insights within digital marketing and user-generated content (UGC), focusing on data mining, knowledge discovery and information sciences. He has worked with a wide range of companies, including Google, Deloitte, L’Oréal, Telefónica and MRM//McCann. He declares no conflict of interest.

Originally published under Creative Commons by 360info™.

Editor’s Note: Jose Saura, Rey Juan Carlos University

Authors
José Ramón Saura
Jose Ramon Saura’s research has focused on theoretical and practical insights into various aspects of digital marketing and user-generated content (UGC), with a specific focus on three major research approaches applied to digital business and marketing: data mining, knowledge discovery and information sciences. He has held positions at, and consulted for, a number of companies, including Google, Deloitte, L’Oréal, Telefónica and MRM//McCann, among others.

Editor
Sara Phillips
Sara Phillips, Senior Commissioning Editor, 360info Asia-Pacific

Monash University has established and is proud to host the global headquarters for 360info. Monash University is also the host of the Asia-Pacific Hub.