What happened in healthcare technology this week – and why it’s important.
Amazon gets the green light from the FCC to use radar for monitoring sleep
As Mallory Hackett reported in Mobihealthnews, the retail giant filed for a waiver last month that would allow it to develop radar sensors that operate at higher power levels than currently allowed. In its request, Amazon described two possible use cases for the heightened radar capabilities: touchless device control through basic gestures and movements, and contactless sleep tracking. The radar features would only be available in “non-mobile” devices connected to a power source, which could relate to reports that Amazon is quietly building sleep apnea-detecting technology into its Alexa devices.
“Granting the waiver will provide substantial public benefit by, among other things, permitting the deployment of applications that can provide assistance to persons with disabilities and improve personal health and wellness.” – FCC decision letter
Why it’s important – Amazon received its clearance based on the precedent of Google’s earlier waiver for its Soli radar sensor. In the case of Google’s second-generation Nest Hub, for example, a Soli radar sensor is used to track sleep patterns and metrics like a person’s breathing rate in bed. Designed to sit on a nightstand, the smart display can monitor different sleep phases and then calculate an overall rating of how well the individual rested. The radar sensor can detect movement at a granular level but no distinguishing features. That’s key if you want to encourage people to bring your device into their bedroom, where the idea of having an internet-connected camera switched on might be unpalatable.
Amazon now joins the list of companies offering touchless devices, including Google and Apple. Other digital health companies developing sleep-tracking technology include Itamar Medical, which recently acquired Spry Health to build out its wearable sleep apnea treatment, and Withings, which last year unveiled an under-the-mattress sleep tracker.
Space-enabled drones deliver rapid coronavirus response
On July 13th, Tech Xplore carried a report from the European Space Agency on a project with the NHS in Scotland that used satellite-enabled drone technology to deliver medical supplies to remote areas during the pandemic. The project—which took place between June 2020 and May this year—was supported by ESA and the UK Space Agency as part of an initiative to accelerate the development of space-based solutions to COVID-19 and other pandemics.
The fleet took to the skies to ferry coronavirus tests and samples, medicines, and other much-needed equipment between medical practices in Argyll and Bute, a region of western Scotland that encompasses thousands of kilometers of coastline and several islands.
Drone delivery cut the average transport time in these sparsely populated remote communities from 21 hours using the existing road-based system to 60 minutes, enabling healthcare teams to provide COVID-19 diagnoses more speedily, which helped ease pressure on overstretched NHS services.
“Removing distance as a barrier to obtain faster results improved the quality and speed of service to patients; it also supported our doctors and nurses by providing faster results to aid and inform their decisions on care and treatment of their patients in our hospitals.” – Stephen Whiston, head of strategic planning, performance and technology at Argyll and Bute HSCP
Why it’s important – While there have been several projects looking at utilizing drone delivery in Singapore, Berlin, and India, this public/private partnership is an excellent example of how a clearly defined set of outcomes can expand the scope of activity in healthcare.
Researchers use machine learning to translate brain signals from a paralyzed patient into text
In a fascinating article in STAT, news intern Claudia López Lloreda reports on a study published Wednesday in the New England Journal of Medicine in which researchers from the University of California, San Francisco, describe an approach combining a brain-computer interface with machine-learning models to generate text from the electrical brain activity of a patient paralyzed by a stroke.
Assistive technologies such as handheld tablets and eye-tracking devices are increasingly helping give voice to individuals with paralysis and speech impediments who otherwise would not be able to communicate. Now, researchers are directly harnessing electrical brain activity to help these individuals. In a departure from previous work, the new study taps into the speech production areas of the brain to generate entire words and sentences that show up on a screen.
The researchers implanted an array of electrodes in the patient’s brain, in the area that controls the vocal tract, known as the sensorimotor cortex. They measured the electrical activity in the patient’s brain while he was trying to say a word and then used a machine-learning algorithm to match brain signals with specific words. Using this mapping, the scientists prompted the patient with sentences and asked him to read them as though he were trying to say them aloud. The algorithm interpreted what the patient was trying to communicate with 75% accuracy.
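At its core, the decoding step described above—mapping a window of multichannel neural activity to a word in a known vocabulary—is a supervised classification problem. The sketch below is purely illustrative: it substitutes synthetic Gaussian “electrode” features, a made-up five-word vocabulary, and a simple logistic-regression classifier for the study’s real cortical recordings and far more sophisticated neural-network models.

```python
# Illustrative only: a toy stand-in for decoding attempted words from
# multichannel neural features. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
vocab = ["hello", "thirsty", "family", "yes", "no"]  # hypothetical mini-vocabulary

# Each attempted word produces a noisy 128-channel feature vector clustered
# around a word-specific neural "signature" (200 trials per word).
signatures = rng.normal(size=(len(vocab), 128))
X = np.vstack([sig + 0.5 * rng.normal(size=(200, 128)) for sig in signatures])
y = np.repeat(np.arange(len(vocab)), 200)

# Train a classifier on some trials, then score it on held-out trials.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out decoding accuracy: {accuracy:.0%}")
```

Real neural data are vastly noisier and less separable than this toy example, which is why the actual study combined deep learning with language models to reach 75% accuracy on natural sentences.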
“The critical neural signals [for speech production] exist and that they can be leveraged for this application.” – Vikash Gilja, an associate professor at the University of California, San Diego
Why it’s important – Tapping brain signals to work around a disability is a hot field. In recent years, experiments with mind-controlled prosthetics have allowed paralyzed people to shake hands or take a drink using a robotic arm — they imagine moving, and those brain signals are relayed through a computer to the artificial limb. If the technology pans out, it eventually could help people with injuries, strokes, or illnesses like Lou Gehrig’s disease, whose “brains prepare messages for delivery, but those messages are trapped.”
Healing wounds and regrowing bones: Duke faculty develop futuristic biomaterial implants
An article in The Duke University Chronicle highlighted research into developing a metal, scaffold-shaped implant that could support the regrowth of a shattered bone. All that would be needed would be an initial CT scan, a virtual construction of the implant, and a metal printer to produce the final product.
Several Duke professors have made such futuristic biomaterial implants a reality, including Ken Gall, professor in the department of mechanical engineering and materials science; Shyni Varghese, professor of orthopedic surgery; and Matthew Becker, Hugo L. Blomquist Distinguished Professor of Chemistry.
“It’s my hope that before I’m done with this … that one of the polymers developed in my lab actually makes it to a patient. And it looks like we’re getting close.” – Matthew Becker, Hugo L. Blomquist Distinguished Professor of Chemistry
Gall’s research focuses on the use of 3D-printed metals and polymers, including the metal scaffold mentioned earlier, along with synthetic hydrogels for cartilage replacement and other related explorations. He has also initiated a new project investigating the types of structures that can be printed and is looking into using machine learning and other algorithms to predict how these structures will behave.
Varghese’s research focuses more specifically on the biomolecule adenosine. She has proposed a biomaterial implant that sequesters the adenosine released during cell stress and keeps it in the body longer to promote healing.
Why it’s important – The research reported here holds great promise in avoiding devastating outcomes like amputation or loss of the ability to walk. Implants that utilize adenosine can promote bone formation and prevent bone degeneration. Nanocarrier drug implants that can be administered orally can potentially treat bone loss caused by osteoporosis.
Swift Medical creates an AI-powered app for the remote monitoring and management of wound care
In an article published in Business Insider (subscription required), Megan Hernbroth reported on Swift Medical’s latest Series B funding round, which raised $35 million. The company makes an app primarily designed for nurses and clinicians caring for patients with chronic wounds, such as those in diabetic patients. It uses 3D image modeling and artificial intelligence to digitally reconstruct the wound for remote caregivers, including measurements like width and depth that are hard to standardize in traditional care practices.
Chronic wounds are common in patients with diabetes and other conditions. These patients are often in assisted-living facilities or home-bound, requiring extra care from nurses or other caregivers. Home-care nurses and nurses in skilled nursing facilities don’t always get the training they need to properly assess and treat chronic wounds.
Why it’s important – Because chronic-wound patients typically make repeat visits to emergency rooms, Swift contends that its app helps reduce the costs associated with those visits by capturing preliminary images in the home or facility where the patient already is.