What happened in health care technology this week, and why it’s important.
Pear Therapeutics files for bankruptcy
Pear Therapeutics, a maker of prescription digital therapeutics, announced today that it has filed for Chapter 11 bankruptcy and is seeking a sale of its business or assets. Jessica Hagen reports the story in her article in MobiHealthNews. Pear will continue scaled-down operations during Chapter 11 while it pursues a sale, funding its post-petition operations and costs with available cash.
Why it’s important – The publicly traded company wasn’t immune to the layoffs sweeping the digital health sector. In November, Pear said it would cut 59 employees, roughly 22% of the company, after letting go of about 25 employees, around 9% of its workforce, in July.
Infographic of the week – According to Gartner experts, these four key themes will prove critical for product leaders to evaluate as part of their competitive strategy:
➡ Smart world
➡ Productivity revolution
➡ Transparency and privacy
➡ New critical technology enabler
AI-equipped eyeglasses can read silent speech
Louis DiPietro from the Cornell Bowers CIS College of Computing and Information Science highlights this development in the Cornell Chronicle. Developed by Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab, the low-power wearable interface requires just a few minutes of user training data before it recognizes commands, and it can run on a smartphone, researchers said. Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm, also developed by SciFi Lab researchers, analyzes these echo profiles in real time with about 95% accuracy.
Why it’s important – Acoustic-sensing technology like EchoSpeech removes the need for wearable video cameras. And because audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone via Bluetooth in real time. In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or a quiet library. The silent speech interface can also be paired with a stylus and used with design software like CAD, all but eliminating the need for a keyboard and mouse.
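The recognition step described above — matching an incoming echo profile against patterns learned during a short training session — can be sketched in miniature. To be clear, this is purely illustrative: the real system trains a deep learning model on acoustic data, whereas this toy nearest-centroid classifier, with made-up command names and "echo profile" vectors, only shows the template-matching idea.

```python
import math

# Hypothetical per-command echo-profile templates, standing in for what
# a few minutes of user training data would produce (the real system
# learns a deep model instead of storing raw templates).
TEMPLATES = {
    "play":  [0.9, 0.1, 0.4, 0.2],
    "pause": [0.2, 0.8, 0.3, 0.7],
    "next":  [0.5, 0.5, 0.9, 0.1],
}

def distance(a, b):
    """Euclidean distance between two echo-profile vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(profile):
    """Return the command whose template is closest to the observed profile."""
    return min(TEMPLATES, key=lambda cmd: distance(profile, TEMPLATES[cmd]))

print(recognize([0.85, 0.15, 0.35, 0.25]))  # → play
```

Because each comparison is just a distance over a short vector, this kind of matching is cheap enough to run on a phone — which echoes (so to speak) why small acoustic features are so much lighter to process than video frames.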
Podcast of the week – Vital Sounds – The Nocturnists’ Emily Silverman talks with medical student Melanie Ambler, who is also a cellist and shares a story about the most memorable cello concert she has ever performed, which happened to be over Zoom with an audience of only one person. In addition to her medical studies at Stanford, Melanie works as a musician “on call” for Project: Music Heals Us, which you’ll hear more about in the conversation that follows her story. She graduated from Brown in 2019 and was awarded a Fulbright Fellowship to study music and dementia in France. Melanie told her story live at The Nocturnists show in San Francisco in June 2022 and again in Chicago in September 2022, at a special show at the Women in Medicine Summit; it’s that second performance that appears in this episode. The conversation covers Melanie’s love of her instrument, her experience working with The Nocturnists’ story coaches, and a bit more about her work at the intersection of music and medicine. You can listen to the podcast here.
Mind-Controlled Robots: New Graphene Sensors Are Turning Science Fiction Into Reality
It sounds like something from science fiction: don a specialized electronic headband and control a robot using your mind. But recent research published in ACS Applied Nano Materials has taken a step toward making this a reality. By designing a special 3D-patterned structure that doesn’t rely on sticky conductive gels, the team created “dry” sensors that can measure the brain’s electrical activity, even amidst hair and the bumps and curves of the head.
The team created several 3D graphene-coated structures with different shapes and patterns, each around 10 µm thick. Of the shapes tested, a hexagonal pattern worked best on the curvy, hairy surface of the occipital region — the spot at the base of the head where the brain’s visual cortex is located. The team incorporated eight of these sensors into an elastic headband, which held them against the back of the head. When combined with an augmented reality headset displaying visual cues, the electrodes could detect which cue was being viewed, then work with a computer to translate the signals into commands that controlled the motion of a four-legged robot — completely hands-free.
Why it’s important – Most non-invasive brain-machine interfaces use “wet” sensors, which are stuck onto the head with a gloopy gel that can irritate the scalp and sometimes trigger allergic reactions. Although the new electrodes don’t yet work quite as well as wet sensors, the researchers say this work represents a first step toward developing robust, easily implemented dry sensors that expand the applications of brain-machine interfaces.
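Cue-based decoding of this kind is commonly built on steady-state visual evoked potentials: each on-screen cue flickers at a distinct frequency, and the EEG recorded over the visual cortex carries the frequency of whichever cue the user is watching. The paper's actual decoding pipeline isn't described here, so the snippet below is only a generic sketch of that idea — the sampling rate, the candidate flicker frequencies, and the synthetic signal are all assumptions for illustration.

```python
import numpy as np

FS = 250                              # assumed EEG sampling rate in Hz
CUE_FREQS = [8.0, 10.0, 12.0, 15.0]   # hypothetical flicker rates, one per command

def decode_cue(signal, fs=FS, cues=CUE_FREQS):
    """Return the cue frequency with the most spectral power in the signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Spectral magnitude at the bin nearest each candidate cue frequency.
    power = [spectrum[np.argmin(np.abs(freqs - f))] for f in cues]
    return cues[int(np.argmax(power))]

# Simulate 2 s of noisy occipital EEG while the user watches the 12 Hz cue.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

print(decode_cue(eeg))  # → 12.0
```

Once the cue frequency is identified, mapping it to a robot command ("forward", "turn left", and so on) is a simple lookup — the hard part, and the contribution of this work, is getting a clean enough signal through hair without conductive gel.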
BioGPT: A game-changer for pharma and healthcare
ChatGPT has already made waves, having been deployed to write code, poems, songs, recipes, and more. Language models built on transformer architectures, such as GPT (Generative Pre-trained Transformer), can analyze large, complex datasets and generate human-like responses to questions. Amandeep Singh, a senior consultant at MP Advisors, reports on the benefits and potential pitfalls in his article in Pharmaphorum online.
Microsoft recently released a new AI language model, BioGPT, specifically designed for the life sciences industry. The model has been trained on a diverse set of biomedical text data, including scientific publications, clinical notes, and drug labels, making it an invaluable tool for scientists across various life science domains.
Why it’s important – Compared to GPT models that are trained on more general text data, BioGPT has a deeper understanding of the language used in biomedical research and can generate more accurate and relevant outputs for biomedical tasks, such as drug discovery, disease classification, and clinical decision support. BioGPT is also able to capture the nuances, subtleties, and syntax of the biomedical language, such as differentiating between drug names, gene names, and protein names, which is essential for many biomedical applications.
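For readers who want to experiment, Microsoft has published the BioGPT checkpoint on Hugging Face as `microsoft/biogpt`, loadable through the `transformers` library. The snippet below is a minimal sketch, assuming `transformers` and `torch` are installed; the prompt and generation settings are illustrative choices, not a recommended configuration, and the first run downloads the model weights.

```python
from transformers import pipeline

# Load the public BioGPT checkpoint as a causal text-generation pipeline.
generator = pipeline("text-generation", model="microsoft/biogpt")

# An illustrative biomedical prompt; the model continues it in the
# register of the scientific literature it was trained on.
prompt = "Metformin is a first-line therapy for"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```

As with any generative model, outputs can be fluent but wrong, so completions like this would need expert review before informing any drug-discovery or clinical work — one of the pitfalls the article discusses.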
Current Health CEO: At-Home Care Models Could Lead to Better Staff Retention
Many healthcare workers prefer providing at-home care because it allows them to provide more personal, less hectic care than they can on the hospital floor, said Current Health CEO Chris McGhee. He thinks health systems should realize that switching more healthcare workers to at-home care could help alleviate the burnout crisis and improve staff retention levels. Katie Adams brings us the interview in her article in MedCity News.
Why it’s important – As McGhee describes it: if you’re on a floor in the hospital, you may get 30 seconds with a patient before you’re paged away, and you have a job list that’s two pages long. If you’re in hospital-at-home, you’re going in for an hour. You’re spending an hour with that patient in their own home — seeing how they live, spending time with them and their caregivers, and really getting to understand them and deliver care in a much more personal and holistic way than you can inside the hospital. Because of this, the work nurses do under at-home care models feels far more aligned with the reason they joined the healthcare sector in the first place — to help people and feel like they’re making a difference.