“The applications for neural interfaces are as unimaginable today as the smartphone was a few decades ago.” – Chris Toumazou FREng FMedSci FRS, co-chair, Royal Society Steering Group on Neural Interface Technologies
Brain-computer interfaces (BCIs) are the ultimate example of convergence. They sit at the intersection of nearly everything discussed on this blog: biotechnology, nanotechnology, and materials science; quantum computing, which may one day let us model environments as complex as the human brain; artificial intelligence, which helps us interpret what we’ve modeled; and high-bandwidth networks that could carry neurological signals to the cloud.
A new UK Royal Society report, “iHuman: blurring lines between mind and machine,” is the first systematic exploration of whether it is “right” to use neural interfaces – machines implanted in or worn on the body that pick up or stimulate nervous activity in the brain or elsewhere in the nervous system. The report also sets out recommendations to ensure the ethical risks are understood and to establish a transparent, publicly driven, yet flexible regulatory framework that would allow the UK to lead innovation in this field.
Many people worldwide already benefit from medical neural interface technologies. In many cases, their conditions have proved drug-resistant, and ‘electroceuticals’ have achieved what pharmaceuticals could not. Cochlear implants, which substitute for damaged parts of the ear, provide hearing for around 400,000 people. Thousands of people with conditions such as Parkinson’s disease, dystonia, and essential tremor have been treated with deep brain stimulation. External, wearable interfaces include a range of devices that assist people who have had a stroke in their rehabilitation. People otherwise unable to communicate have been able to spell out words using brain signals alone, providing them with an invaluable means of interaction.
Other treatments are still being explored in the laboratory, such as transcranial direct current stimulation (tDCS) for depression. Still others are in the early stages of medical use, such as deep brain stimulation (DBS) for epilepsy or the ‘Mollii Suit’, a body garment that delivers electrical stimulation to people with muscle spasticity caused by conditions like stroke or cerebral palsy. By 2040, according to the Royal Society report, conditions like Alzheimer’s disease will probably be treated using a BCI.
“In 10 years’ time this is probably going to touch millions of people.” – Tim Constandinou, co-chair, Royal Society Steering Group on Neural Interface Technologies
What’s the best way to segment the brain-computer interface space? – As with all emerging technologies, there are multiple ways to segment the neural interface market. The method I’ve found works best for discussing this topic is to segment by whether the technology is invasive or non-invasive, with each of those sub-segmented into recording and stimulating categories. In the lists that follow, I’ll name some of the technologies in each category. (For a detailed description of each technology, see pages 30-33 of the full Royal Society report, using the link above.)
Invasive, recording:
- ECoG – electrocorticography
- Cortical implants
- Neural dust
- Neural lace

Non-invasive, recording:
- EEG – electroencephalography
- MEG – magnetoencephalography
- fMRI – functional magnetic resonance imaging
- fNIRS – functional near-infrared spectroscopy
- MMG – mechanomyography

Invasive, stimulating:
- Cochlear implants
- DBS – deep brain stimulation
- VNS – vagus nerve stimulation
- Retinal implants
- Vestibular implants
- FES – functional electrical stimulation

Non-invasive, stimulating:
- tDCS – transcranial direct current stimulation
- TENS – transcutaneous electrical nerve stimulation
- TMS – transcranial magnetic stimulation
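To give a concrete sense of what the recording technologies above actually produce, here is a minimal, illustrative Python sketch – not tied to any particular device – that isolates the alpha band (8-12 Hz) from a synthetic EEG-like trace. This band-pass step is typical of the first stage of processing in an EEG-based interface; the sampling rate and signal composition below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_band(signal, fs, low, high, order=4):
    """Band-pass filter a 1-D signal, e.g. to isolate the EEG alpha band."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # zero-phase filtering, no time shift

# Synthetic "EEG": a 10 Hz alpha rhythm buried in 50 Hz mains hum and noise.
fs = 250  # Hz; a sampling rate typical of consumer EEG headsets (assumption)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.8 * np.sin(2 * np.pi * 50 * t)
       + 0.3 * rng.standard_normal(t.size))

alpha = extract_band(eeg, fs, 8, 12)

# After filtering, the dominant frequency should be the 10 Hz alpha rhythm.
freqs = np.fft.rfftfreq(alpha.size, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(alpha)))]
print(peak)
```

Real pipelines add artifact rejection, feature extraction, and a classifier on top, but band isolation of this kind is where most non-invasive recording interfaces begin.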
Where we are today – Scientists in this pioneering field make it clear that we have barely scratched the surface of the potential applications of brain-computer interfaces, which could not only help address medical issues like dementia, epilepsy, untreatable depression, and obesity, but could also help people communicate without sound and even without words. By sharing the brain’s neural activity, we could send “neural postcards” that let others far away visually experience a trip or “taste” the food we are eating.
In its 2018 report, The Market for Neurotechnology: 2018-2022 (recently updated to 2020-2024 – purchase required), Neurotech Reports projected that the overall worldwide market for neurotechnology products – defined as “the application of electronics and engineering to the human nervous system” – would be $8.4 billion in 2018, rising to $13.3 billion in 2022. The 2018 figure represents less than 1% of the estimated $2 trillion in total 2018 global research and development spending, and less than 5% of all estimated life-science R&D spending.
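The projection above implies a compound annual growth rate that is easy to verify with a quick back-of-envelope check in Python, using only the figures cited:

```python
# Figures from the Neurotech Reports projection cited above ($ billions).
start, end, years = 8.4, 13.3, 4          # 2018 -> 2022
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                       # ~12.2% compound annual growth

# Share of estimated 2018 global R&D spending (~$2 trillion).
share = 8.4 / 2000
print(f"{share:.2%}")                      # ~0.42%, consistent with "less than 1%"
```

A roughly 12% annual growth rate is healthy but hardly explosive – consistent with a field still dominated by a handful of approved medical indications.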
Probably the most publicized effort in the BCI space is Neuralink, Elon Musk’s initiative to allow paralyzed people to communicate with computers using their thoughts alone. This could improve the quality of life for people with locked-in syndrome, for instance, in which the brain functions normally but is cut off from the rest of the body. Musk, however, has plans that go far beyond helping people replace something they have lost. He foresees that artificial intelligence (AI) could advance so rapidly and so far that humans become subsidiary to it, something like house pets. Installing an AI layer in the brain, he argues, would be a way to keep pace with AI instead, and the “neural lace” interface his company is developing is designed to do just that.
Neuralink plans a two-gigabit-per-second wireless connection from the brain to the cloud and wants to begin human trials by the end of 2021.
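Is two gigabits per second a plausible figure for raw neural data? A rough sanity check suggests it is. The channel count, sample rate, and ADC resolution below are illustrative assumptions based on early published prototypes, not confirmed Neuralink specifications:

```python
# Hypothetical recording parameters -- illustrative assumptions, not
# confirmed device specifications.
channels = 3072          # electrode count (assumption)
sample_rate = 20_000     # samples per second per channel (assumption)
bits_per_sample = 16     # ADC resolution (assumption)

raw_bps = channels * sample_rate * bits_per_sample
print(raw_bps / 1e9)     # ~0.98 Gbit/s of raw data, within a 2 Gbit/s link
```

Under these assumptions, the raw stream fits comfortably inside a 2 Gbit/s link, with headroom for framing overhead or higher channel counts; in practice, on-device spike detection and compression can reduce the required bandwidth by orders of magnitude.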
The other startup receiving a ton of press coverage is Kernel, founded and funded by Bryan Johnson, who sold his payments company, Braintree, to PayPal for $800 million in 2013. Johnson then started a venture fund called the OS Fund, which aims to “rewrite the operating systems of life” for the benefit of humanity. Johnson has courted some big names in neuroscience from the MIT community. Ed Boyden, a professor of biological engineering and brain and cognitive sciences at MIT, has signed on as a chief scientific advisor. And Adam Marblestone, a neuroscientist who focuses on improving data collection from the brain and who previously worked with Boyden’s Synthetic Neurobiology Group, is now Kernel’s chief strategy officer.
“Brain science is the new rocket science.” – Bryan Johnson, Kernel founder
In addition to these two big-budget, high-profile companies, other researchers and companies are exploring the development of neural interfaces. BrainGate is a long-running multi-institution research effort in the US to develop and test novel neurotechnology to restore communication, mobility, and independence in people whose cognition is intact but who have lost bodily control due to paralysis, limb loss, or neurodegenerative disease. Paradromics aims for far more, and smaller, electrodes – an even higher density of probes across the face of its neural implant. Synchron, based in Australia and Silicon Valley, takes a different approach: its device avoids open-brain surgery and the resulting scarring because it is inserted via a stent through a vein in the neck. Once in position next to the motor cortex, the stent splays out to embed 16 metal electrodes into the blood vessel’s walls, from which neuronal activity can be recorded.
And MIT recently announced a newly formed research center explicitly devoted to studying the fusion of the human body with advanced technology like robotic exoskeletons and brain-computer interfaces, with the ambitious goal of developing systems that restore function for people with physical and neurological disabilities. The center will be led by MIT Media Lab professor Hugh Herr, himself a double amputee and a recognized leader in the field of robotic prosthetics. In the MIT announcement, Herr said he sees the new initiative as an essential step toward eliminating physical disabilities altogether.
“We must continually strive towards a technological future in which disability is no longer a common life experience.” – Hugh Herr, MIT Media Lab
The MIT faculty working within the new research center will have three primary goals, according to the announcement. The first is to develop what MIT is calling a “digital nervous system,” or tools that circumvent spinal cord injuries by stimulating muscles that have been cut off from the central nervous system — which is remarkably similar to an unrelated neural implant currently being tested in human volunteers. On top of that, the center aims to improve exoskeleton technology to help people with weakened muscles move around naturally, as well as to develop new bionic limbs that can restore a complete, natural sense of touch.
What are the ethical issues raised by the development of brain-computer interfaces? – Some of the most prominent are:
- how, if at all, use of the technologies should be limited;
- what ‘normality’ means;
- how privacy can be protected, and which specific concerns – for example, around surveillance – might be felt most strongly by particular social groups;
- whether neural interfaces may contribute to widening inequalities;
- and what it means to be human.
“As our experience with social media has shown, we do need to think ahead to guard against possible harmful uses. If recent experience has shown us anything, it’s that individual consent and opting in or out is not enough to protect either individuals or society more widely.” – Sarah Chan, co-author, iHuman: blurring lines between mind and machine
My take – Brain-computer interfaces hold enormous promise as life-changing technologies for people with a variety of conditions. However, the field is still in its infancy, and designing sensors that can effectively and safely monitor brain activity remains a work in progress. Part of the challenge is the brain’s sheer complexity: no single sensor can capture it, and affixing enough sensors in place is extremely difficult. The future impact of BCIs on patient care is only slowly coming into focus, and, as with most emerging technologies, regulatory, privacy, and reimbursement frameworks lag the technology itself. For these and other reasons, clinical use of BCI technologies will, for the foreseeable future, be limited to universities and health systems with comprehensive neuroscience service-line programs.