Brain-machine interfaces can decipher thoughts and help give some coma survivors a voice. But first we must ensure they’re responsive.
Consciousness is not binary; it’s gradual, with many dimensions. While that might seem like a consideration to be pondered by philosophers, it can have major real-world implications.
In 2009, a 21-year-old woman was assessed by Dr Laureys’ team at the University Hospital of Liège in Belgium. She was considered comatose on arrival. But a recording of her brain’s electrical activity showed she could respond to her name in a list of random names. Her brain activity responded to simple questions: she was hearing, understanding and doing what she was asked to do. Judged on her outward behaviour alone, she would have been deemed comatose or vegetative, but her brain showed she was conscious, suffering from total locked-in syndrome with no means of speaking or moving. The finding was significant because clinicians were discussing end-of-life decisions with her family at the time. She subsequently recovered and was eventually able to control a wheelchair.
There has been a historical misconception that consciousness is ‘all or nothing’: that comatose or vegetative patients are completely unaware. Clinicians sometimes fail to recognise minimal signs of consciousness after a coma. And often it’s the family who first realises there’s more going on in a patient’s brain. Consciousness can be present even when it shows no visible sign. Patients who exist in these shades of consciousness receive little attention: a literally silent epidemic.
Much as medicine has progressed from using leeches, the understanding of consciousness has developed since the early days of ‘squeeze my hand’. Clinicians can now look directly at patients’ brain responses instead of their motor responses, limiting the chances of a misdiagnosis. Coma survivors, and people who are paralysed, can still have an active brain, and machines can now help decipher their brain waves to open a channel for communication.
An EEG (electroencephalogram) — a net of wires and electrodes — placed on the head of a person can measure the electrical activity of their brain and show what happens when you ask them questions. A machine that decodes that information is called a brain-machine interface.
It’s non-invasive, meaning no surgery is involved, and it’s portable. By decoding electrical activity into a yes or no response, brain-machine interfaces can give a voice to patients who have no other way of communicating. That opens the possibility for patients to express their thoughts and wishes, improve their quality of life and gain some control over their environment.
Once doctors know a patient is aware and hearing, they can ask questions or give commands through images, touch and audio. Algorithms, artificial intelligence and classifiers used by the brain-machine interface can turn the electrical activity from the brain into functional information.
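To make that concrete, here is a deliberately simplified sketch of what such decoding can look like in code. It is a hypothetical pipeline, not the software used in any particular clinic: it assumes short EEG ‘epochs’ recorded while a patient is asked to signal yes or no, summarises each epoch as band-power features, and trains a standard linear classifier to separate the two intentions. The sampling rate, channel count and labels below are placeholders, and the data are random, so the reported accuracy will hover around chance.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 250  # assumed sampling rate in Hz (placeholder)


def band_power_features(epochs, fs=FS):
    """Average spectral power per channel in a few canonical EEG bands."""
    bands = [(4, 8), (8, 13), (13, 30)]  # theta, alpha, beta (Hz)
    feats = []
    for epoch in epochs:  # epoch shape: (channels, samples)
        freqs, psd = welch(epoch, fs=fs, nperseg=fs)
        feats.append([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                      for lo, hi in bands])
    return np.asarray(feats).reshape(len(epochs), -1)


# Placeholder data standing in for recorded trials: 80 epochs of 8-channel EEG,
# 2 seconds each, labelled 1 for an intended "yes" and 0 for an intended "no".
rng = np.random.default_rng(0)
epochs = rng.standard_normal((80, 8, 2 * FS))
labels = rng.integers(0, 2, size=80)

# A linear classifier on standardised band-power features; cross-validation
# estimates how reliably the intended answer can be decoded from the signal.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, band_power_features(epochs), labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```

In practice the stimuli, features and validation are far more involved, but the principle is the same: the decoder is first calibrated on a patient’s own data, and only when its accuracy is reliably above chance can the interface be trusted to relay an answer.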
The challenge is enabling brain-machine interfaces to be used more widely. There is enough scientific evidence to show the concept works, but it’s now a matter of making it accessible and affordable around the world. It currently depends on what industry partners and engineering companies can offer to create the technology at scale.
Ethical challenges will arise as the technology becomes more widely used. Studies show we should be very careful when addressing, assessing or protecting the quality of life of those with severe motor deficits or who are minimally conscious. Imagine a patient who expresses the need for rehabilitation, says they are in pain, or asks to die because they feel there is no quality of life.
There are currently no agreed parameters for defining an informed request from someone who cannot respond verbally or physically. We still need to define what level of competency people with brain damage have.
Messages from the brain-machine interface will need to be representative of the patient’s true wants and needs. Patients’ families, physicians and engineers will need to trust that the machine can decode information accurately, and they will need to make sound judgments based on it. Families and medical staff will also still need to be there as caregivers. These are big challenges as medicine becomes hyper-specialised and technological: we are still dealing with human beings and their emotional needs, after all.
Brain-machine interface technology can add a lot of value to modern neuroimaging and to our understanding of consciousness. But we are still very far from understanding thoughts, perceptions and emotions. In the meantime, the research and scientific community can benefit from remaining humble. False hope for patients’ families is just as bad as false despair.
Dr Steven Laureys is an award-winning neurologist and neuroscientist, recognised worldwide as a leading clinician and researcher in the field of the neurology of consciousness, recovery after severe brain injury and concussion. He is head of the Centre Cerveau (Brain Clinic) at the Liège University Hospital in Belgium, founder of the ‘Coma Science Group’, director of the GIGA Consciousness Research Unit at Liège University & Invited Professor at CERVO Brain Centre, Laval University, Canada. www.drstevenlaureys.com
The research featured in this article is supported by grants from the Belgian National Fund for Scientific Research (FRS-FNRS), the Human Brain Project, the National Natural Science Foundation of China, the Mind Science Foundation, the European Foundation of Biomedical Research (FERB Onlus), the BIAL Foundation, the European Space Agency, the Generet Fund of the King Baudouin Foundation and the Mind Care International Foundation.
The author declares no conflict of interest.
Originally published under Creative Commons by 360info™.