AI could mean early intervention for dementia but raises ethical issues about patient privacy.
A new case of dementia is diagnosed somewhere in the world every three seconds. Research shows as many as 40 percent of cases could be prevented or delayed, but doing so requires collecting and analysing vast amounts of data. Artificial Intelligence (AI) can support clinicians in diagnosing and predicting dementia with up to 90 percent accuracy, though care is needed to manage the ethical issues surrounding patient privacy, data security and the introduction of human bias.
Although the cause and risk factors for 60 percent of dementia cases remain unknown, a recent study identified 12 modifiable risk factors that together account for about 40 percent of dementia cases worldwide. Earlier identification of these recognised risks, and efforts to uncover those still unknown, could pave the way for the prevention and improved care of dementia.
The potential benefits of AI, including reducing the risks of dementia by analysing vast amounts of health data and offering patient-tailored recommendations, are substantial. Machine learning, a branch of AI that learns patterns from data to automate analysis and computation, can sift through large sets of patient information to detect patterns of dementia warning signs with minimal or no human involvement. These include subtle signs such as difficulties in thinking and performing daily tasks, emotional lability and memory loss, which could otherwise be missed by clinicians.
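To make the idea concrete, the kind of pattern detection described above can be sketched as a simple classifier trained on patient features. The sketch below is purely illustrative and is not the method of any study mentioned here: the feature names, data and thresholds are all hypothetical, and a real clinical model would require validated data and far more rigour.

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit a logistic-regression model (weights w, bias b) by gradient descent."""
    n_features = len(rows[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - y                       # gradient of the log-loss
            for i in range(n_features):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 if the model flags a warning-sign pattern, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical features, each scaled to 0..1:
# [memory-test deficit, daily-task slowdown, mood-variability index]
# Label 1 = warning-sign pattern present, 0 = absent. Data is synthetic.
data = [
    ([0.9, 0.8, 0.7], 1), ([0.8, 0.9, 0.6], 1), ([0.7, 0.7, 0.9], 1),
    ([0.1, 0.2, 0.1], 0), ([0.2, 0.1, 0.3], 0), ([0.15, 0.25, 0.2], 0),
]
rows = [x for x, _ in data]
labels = [y for _, y in data]
w, b = train_logistic(rows, labels)
print(predict(w, b, [0.85, 0.8, 0.75]))  # high-deficit profile -> 1
print(predict(w, b, [0.1, 0.15, 0.2]))   # low-deficit profile -> 0
```

On this toy, linearly separable data the model learns to flag the high-deficit profile; in practice, clinical models of this kind are trained on thousands of patients and evaluated against diagnoses made by clinicians.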
AI could also assess potential complications, such as delirium and the psychological and behavioural symptoms of dementia, accurately predict health outcomes and support clinical decision-making. For example, one study found that machine learning algorithms, working through a home automation system, could analyse how a person performs different daily tasks and detect dementia warning signs with 95 percent accuracy.
To complicate things, symptoms of depression are sometimes mistaken for symptoms of dementia. Machine-learning models could also help differentiate between the two conditions, support tailored treatment and offer reliable answers in a timely manner, leading to better prospects for patients, less suffering and fewer deaths.
But dementia-related data for training machine-learning models remains limited. Researchers have drawn repeatedly on the same datasets, and studies have been conducted in narrow populations. Big datasets representative of local populations are still needed to train these models so they can perform at their best and provide trustworthy results. The medical principle of "first, do no harm" makes for tight regulation: before the technology can gain regulatory approval and be permitted in clinical practice, clinicians, who will be its ultimate users, need to understand how the machine reaches its decisions.
Applying digital technologies in the mental-health field also requires caution and careful selection of analysis tools to avoid ethical pitfalls. These include securing patients' data and providing legal assurance of data ownership, a serious concern given that the records of more than 29 million patients have been compromised in breaches in the United States since 2009. Mental-health data is particularly difficult to collect for many reasons, including stigma and privacy concerns. Compromised patient data, particularly for those with mental illnesses, can significantly harm patients' well-being.
The use of AI in medical decision-making is still new, and many barriers must be overcome before it is widely adopted in clinical practice. For it to reach its full potential, wider research and more rigorous approaches are needed to grapple with the ethical issues it raises. This is an ideal time for medical professionals, stakeholders, governments, and individuals and their families to work together to strike a balance between the benefits and risks of these new technologies.
Dr Alexander Merkin is a psychiatrist by training. He is a lecturer and researcher at the National Institute for Stroke and Applied Neuroscience at Auckland University of Technology, New Zealand, and at the University of Konstanz, Germany.
The author declares no conflict of interest.