Microsoft’s Investment In OpenAI – What It Could Mean in Healthcare

“We formed our partnership with OpenAI around a shared ambition to responsibly advance cutting-edge AI research and democratize AI as a new technology platform.”

Satya Nadella, CEO, Microsoft
Image Credit: Shutterstock.com

Microsoft last week said that it’s extending its partnership with OpenAI, the startup behind art- and text-generating AI systems like ChatGPT, DALL-E 2, and GPT-3, with a “multi-year, multi-billion-dollar” investment. OpenAI says that the infusion of new capital — the exact amount of which wasn’t disclosed — will be used to continue its independent research and develop AI that’s “safe, useful, and powerful.”

Image Credit: CB Insights

The tech giant’s Azure cloud platform will continue to be OpenAI’s exclusive cloud provider, powering the startup’s workloads across research, products, and API services. Sources previously reported that Microsoft was looking to net a 49% stake in OpenAI, valuing the company at around $29 billion. Under the terms of one proposal detailed by Semafor, Microsoft would receive three-quarters of OpenAI’s profits until it recovers an investment as high as $10 billion; after that, Microsoft would hold a 49% stake, additional investors, including Khosla Ventures, would take 49%, and OpenAI would retain the remaining 2% of equity.

Microsoft itself is currently using the developer’s language AI to add automation to its Copilot programming tool and wants to add such technology to its Bing search engine, Office productivity applications, Teams chat program, and security software. The Redmond, Washington-based company is putting DALL-E into design software and offering it to Azure cloud customers.

Image Credit: CB Insights, Big Tech in Healthcare report, 2022, page 62

So what might this significant investment in OpenAI mean for healthcare applications? Here are ten potential use cases that could emerge from tighter integration with Microsoft products and services.

Patient Triage: By analyzing a patient’s symptoms and presenting a list of potential diagnoses and suggested next steps, ChatGPT can be used to triage patients. This can ease the pressure on medical staff and help ensure that patients get appropriate care when they need it.

Virtual Health Assistant: ChatGPT can power virtual health assistants that give patients information and guidance about their conditions, treatment options, and self-care.

Medical Research: ChatGPT can be used to analyze large volumes of medical data to find patterns and trends that can guide the development of novel treatments and diagnostic tools. Consider a model like ChatGPT trained on clinicaltrials.gov. A researcher could quickly get insights by asking a simple question, such as “What’s the most popular endpoint for all clinical trials on Crohn’s disease?” instead of scraping and evaluating all the data by hand.
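The kind of question a researcher might ask such a model reduces, under the hood, to a simple aggregation over structured trial records. A minimal sketch, using a small hypothetical sample of records (a real analysis would pull structured data from clinicaltrials.gov), illustrates the tally:

```python
from collections import Counter

# Hypothetical sample of trial records; field names are illustrative,
# not the real clinicaltrials.gov schema.
trials = [
    {"id": "T1", "condition": "Crohn's disease", "primary_endpoint": "clinical remission"},
    {"id": "T2", "condition": "Crohn's disease", "primary_endpoint": "endoscopic response"},
    {"id": "T3", "condition": "Crohn's disease", "primary_endpoint": "clinical remission"},
    {"id": "T4", "condition": "ulcerative colitis", "primary_endpoint": "clinical remission"},
]

def most_common_endpoint(trials, condition):
    """Return the (endpoint, count) pair that appears most often
    among trials matching the given condition."""
    counts = Counter(
        t["primary_endpoint"] for t in trials if t["condition"] == condition
    )
    return counts.most_common(1)[0]

endpoint, count = most_common_endpoint(trials, "Crohn's disease")
print(endpoint, count)  # clinical remission 2
```

A language model adds value on top of this by parsing the natural-language question and the free-text trial descriptions; the aggregation itself remains conventional code.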

Medical Transcription: ChatGPT can be used to transcribe medical reports and notes, reducing reliance on human transcriptionists and lowering the possibility of mistakes.

Clinical Decision Support: ChatGPT can help healthcare professionals make better-informed and more precise diagnoses and treatment decisions by offering clinical decision support.

Healthcare navigation: The goal here is to help patients navigate the complex healthcare landscape; for example, find a doctor in their network, compare prices, or learn about their benefits.

Health coaching and wellness: These chatbots aim to engage patients in a healthier lifestyle; for example, they may send daily reminders to exercise and eat well. Examples of such chatbots include Doppel and 1-million-strong-to-prevent-diabetes. They may also assess a patient’s progress using existing clinical scales, such as the PHQ-9, a widely used nine-question scale for assessing depression.
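The PHQ-9 scoring that such a chatbot would apply is simple arithmetic: nine items, each answered 0–3, summed into a total that maps to a severity band. A minimal sketch of that scoring step:

```python
def phq9_score(answers):
    """Sum nine PHQ-9 item scores (each 0-3) and map the total
    to the standard severity band."""
    if len(answers) != 9 or not all(a in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
    total = sum(answers)
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity

print(phq9_score([1] * 9))  # (9, 'mild')
```

A chatbot would gather the nine answers conversationally; the clinical judgment about what to do with a given score remains with the care team, not the model.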

Clinical trial recruitment: ChatGPT could be used to create chatbots that can help identify eligible patients for clinical trials and provide them with information on the trial process.

Medical Education: ChatGPT could be used to create virtual tutors to help students learn and retain medical information and to assist in creating interactive and engaging content for medical education.

Medical translation: ChatGPT could be used to translate medical documents and patient information from one language to another, making it easier for healthcare providers to communicate with patients who speak different languages.


While there are potential use cases in healthcare, it’s essential to understand the limitations and risks of using this technology. By far, experts named “hallucination,” the AI term for confidently making something up, as ChatGPT’s biggest problem. The model is prone to inventing facts, so text that reads as plausible may still contain factual errors, and in the medical domain that can be extremely dangerous.

Another problem for the fast-moving world of medicine: because ChatGPT was trained on a dataset from late 2021, its knowledge is effectively frozen there. Even if OpenAI regularly refreshes the vast swaths of internet data the tool is trained on, the system would need near-real-time updates to remain accurate enough for healthcare uses. Experts said a ChatGPT-like AI that is updated in real time is a long way off.

“I don’t believe in artificial intelligence. I believe in augmented intelligence — that is to say, as a human, giving me a statistical analysis of data of the past to help me make decisions in the future is wonderful. But turning my decision making over to a statistical model is not a very reasonable thing to do.”

John Halamka, president of Mayo Clinic Platform and co-founder of the Coalition for Health AI

Another weakness of the AI, for now, is that it is limited to text only. Even though ChatGPT put forth a decent performance on the USMLE, it couldn’t answer any questions that relied on images. Though there are already other AI platforms that process images — or, for that matter, audio and video — there’s not yet an integrated platform for all of these functions. Since several medical disciplines rely on images for interpretation, diagnosis, and follow-up care (e.g., radiology, pathology, dermatology, and ophthalmology), this is a significant hurdle to overcome.

Just as new drugs demand evidence of benefits and risks, AI also needs rigorous scrutiny. But experts are worried that because AI isn’t perceived the same way as a new therapy, it won’t undergo the same strict review that a new drug would. They’re also worried that constant changes in clinical practice or medical knowledge will impact the accuracy of such models. There are also many ways bias could unintentionally get built into machine learning models — not just at the dataset level but at the many levels of human involvement. That includes which patient populations the model gets used on, as well as the work of individuals who write model responses for training or rate AI-generated responses.

Experts said that, ideally, regulation would play a part in the ethics of AI, but they acknowledged the difficulty in regulating such a fast-changing technology. The Coalition for Health AI counts Google, Microsoft, the FDA, and the National Academies of Medicine among its over 140 members interested in figuring out the guidelines and guardrails that Halamka of Mayo Clinic Platform hopes will become a code of conduct for the industry.


Closing thoughts – Microsoft plans to deploy OpenAI’s models across a variety of consumer and enterprise products. The company is rumored to be preparing to challenge Google by integrating ChatGPT into Bing search results, and is reportedly considering adding some of the language AI technology to its Word, PowerPoint, and Outlook apps. Since many healthcare systems use Microsoft Office, collaboration apps like Teams, and the Azure cloud platform, a properly implemented ChatGPT integration could be readily adopted into daily care delivery.

It will be interesting to watch the developments in large language models at Google too. Google Research and DeepMind have introduced MedPaLM, a large language model tuned to generate safe and helpful answers to medical questions, for medical professionals and non-professionals alike. It is evaluated on MultiMedQA, a benchmark that combines six existing open question-answering datasets covering professional medical exams, research, and consumer queries (MedQA, MedMCQA, PubMedQA, LiveQA, MedicationQA, and MMLU) with HealthSearchQA, a newly introduced dataset of consumer health search questions. Answering more complex medical questions may still be beyond MedPaLM’s capability; the platform is a work in progress, and researchers are continuing to look for improvements.

Image Credit: Google MedPALM research paper

As long as the appropriate safeguards are in place, there are opportunities to adapt the technology to good use.
