Take A Deep Breath – ChatGPT Is Not The AI Apocalypse

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

Sam Altman, co-founder of OpenAI
Image Credit: Shutterstock.com

If you’ve been following the tech reporting over the last several weeks, there’s been a new AI panic brewing: this time, it’s about whether ChatGPT, a chatbot created by OpenAI, is poised to render vast swaths of our society obsolete. Even though OpenAI’s co-founder has said, “slow down, it’s too early,” that doesn’t seem to have stopped anyone from predicting the imminent demise of college, professors, journalism, and much more.

One Guardian piece opens with the warning that “professors, programmers, and journalists could all be out of a job in just a few years” and points to examples of ChatGPT and other chatbots mimicking the prose of Guardian opinion pieces and generating essays for assignments set by a journalism professor at Arizona State University. Similar concerns were echoed in a Nature essay, which raises the prospect of students submitting ChatGPT-generated essays. Or, how about this Axios story, titled “AI chatbot could spell doomsday for truth.”

“I think we can basically re-invent the concept of education at scale. College as we know it will cease to exist.”

Peter Wang, Chief Executive, Anaconda

The only problem, however, is that none of this seems accurate or even possible. ChatGPT is a large language model that effectively mimics a middle ground of typical speech online, but it has no sense of meaning; it merely predicts the statistically most probable next word in a sentence based on patterns in its training data, and that prediction can easily be wrong.

Image Credit: OpenAI
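
To make that "next most probable word" point concrete, here is a minimal sketch using the openly available GPT-2 model (an earlier, much smaller relative of ChatGPT) through the Hugging Face transformers library, since ChatGPT's own model is not publicly downloadable. The prompt is just an illustration:

```python
# Minimal sketch: what "predict the next word" literally means.
# Uses the public GPT-2 model as a stand-in for ChatGPT's model.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire output is a probability distribution over the next
# token. There is no notion of truth here, only statistical likelihood.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p:.3f}")
```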

When considering ChatGPT or any other generative model for work-related tasks, it's essential to understand a fundamental limitation: generative AI models generate responses; they do not read sources or cite their work. Their output therefore carries no guarantee of reliability. If you ask a critical business question in an M&A transaction, such as "What are the biggest risks associated with this investment?", you will receive a plausible-sounding answer, but not one grounded in a source of truth. To use a generative model in the workplace, it is critical to feed the model facts and cited information so that its answers are based on research rather than inference.
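
As a hedged sketch of what "feeding the model facts" could look like in practice, the snippet below retrieves nothing itself; it assumes you already have vetted, cited excerpts and simply instructs the model to answer only from them. It uses OpenAI's Completion API (text-davinci-003); the prompt wording and the `sources` structure are my own illustrative assumptions, not an established recipe:

```python
# Sketch: ground the model's answer in cited source material instead of
# letting it free-associate. Assumes the openai package is installed and
# the OPENAI_API_KEY environment variable is set.
import openai

def answer_from_sources(question: str, sources: list[dict]) -> str:
    # sources is a hypothetical list like:
    # [{"cite": "Q3 diligence memo, p.4", "text": "..."}]
    context = "\n\n".join(f"[{s['cite']}]\n{s['text']}" for s in sources)
    prompt = (
        "Answer the question using ONLY the sources below, and cite the "
        "source label for every claim. If the sources do not contain the "
        "answer, say so.\n\n"
        f"SOURCES:\n{context}\n\nQUESTION: {question}\nANSWER:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
        temperature=0,  # minimize creative drift for factual tasks
    )
    return response["choices"][0]["text"].strip()
```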

As expected, the press reports ranged from wildly optimistic to predictions of Armageddon. An article in the Harvard Business Review stated:

“This is a very big deal. The businesses that understand the significance of this change — and act on it first — will be at a considerable advantage. Especially as ChatGPT is just the first of many similar chatbots that will soon be available, and they are increasing in capacity exponentially every year.”

Ethan Mollick, Associate Professor of Management, The Wharton School of the University of Pennsylvania

At the other end of the spectrum was this article in The Times of London, which went total “dark side” with this statement:

“An artificial intelligence programme which has startled users by writing essays, poems and computer code on demand can also be tricked into giving tips on how to build bombs and steal cars, it has been claimed.”

Tom Kington, The Times, Rome

The problem with the hype around ChatGPT is that it crowds out the real issues these chatbots raise. We should instead be asking questions like: Are large language models a justifiable or desirable way to train artificial systems? What would we design these artificial systems to do? And should a chatbot be unleashed on a public whose imagination is constantly under siege by propagandistic and deceptive depictions of artificial intelligence, with that same public then used to train the chatbot further?

This week there was an interesting discussion of the topic on Office Hours Global. While the discussion focused primarily on video production, the questions raised throughout illustrate how far we have to go in formulating a strategy for deploying these generative AI platforms. A link to that discussion follows (the debate begins at 1:01:59 in the video):

https://www.youtube.com/live/kae6hYLAVng?feature=share

OpenAI’s DALL-E platform is fun to play with, too. As discussed in the Office Hours Global conversation, you can get some interesting results. Here’s what the AI generated when I input “Princess Leia in the style of Rembrandt”:

Image generated using DALL-E AI platform

What about ChatGPT in healthcare?

Dr. Bertalan Mesko, the Medical Futurist, wrote a post this week with his initial thoughts on potential use cases in healthcare. He is clear that he would not use ChatGPT in any way that could harm patients, such as making a diagnosis, where the slightest error could have dire consequences. When he first started playing with it, it was prone to produce convincing-looking fabrications: in one case it cited three nonexistent articles supporting the benefits of echocardiography for systemic sclerosis. The references looked legitimate, but as many commenters pointed out, you will find none of them in the scientific literature; the algorithm simply made them up. Still, he does see potential use cases that are simpler and less risky. Here are his examples:

  • Summarizing medical records based on a patient’s family history, symptoms, and lab results, among other inputs
  • Summarizing and analyzing research papers: listing keywords in an abstract, or summarizing a long, detailed research paper for physicians outside that field (see the sketch after this list)
  • Writing general texts, like emails or a book blurb, or anything else that saves time and can be further customized to your needs and personal style
  • Answering broad questions
  • Working as a chatbot that answers FAQ-style questions for a doctor’s office, or handles appointments (see the case example below)
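
As a rough illustration of the research-paper item above, here is a minimal sketch against OpenAI's Completion API. The prompt wording and the audience parameter are my own assumptions, and any real clinical use would still require a human to verify the summary against the paper:

```python
# Sketch: summarize a paper's abstract for a physician outside the field.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
import openai

def summarize_abstract(abstract: str, audience: str) -> str:
    prompt = (
        f"Summarize the following abstract in three sentences for a "
        f"{audience} who does not work in this field, then list five "
        f"keywords.\n\nABSTRACT:\n{abstract}\n\nSUMMARY:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=250,
        temperature=0.2,  # keep the summary close to the source text
    )
    return response["choices"][0]["text"].strip()
```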

I’m not sure I’m on board with all of his examples, but they make for an interesting discussion.

There is also an article in the Journal of Medical Internet Research that outlines three major operational factors that could drive the adoption of GPT-3 in the US health care system: (1) ensuring Health Insurance Portability and Accountability Act (HIPAA) compliance, (2) building trust with health care providers, and (3) establishing broader access to GPT-3 tools. The viewpoint can help health care practitioners, developers, clinicians, and decision makers understand how such powerful artificial intelligence tools might be integrated into hospital systems and health care more broadly.

In the case illustrated below, the hospital provides a chatbot triage mechanism for incoming patients to reduce overhead at clinics and increase the safety and quality of care during the COVID-19 pandemic. The chatbot is connected to the hospital network and paired with a service that produces a triage text summary, which is reviewed by a clinician and then stored in the electronic health record. In this example, triage could be initiated by either a patient or the hospital to conduct a health screening.

Image Credit: Sezgin E, Sirrianni J, Linwood S
Operationalizing and Implementing Pretrained, Large Artificial Intelligence Linguistic Models in the US Health Care System: Outlook of Generative Pretrained Transformer 3 (GPT-3) as a Service Model
JMIR Med Inform 2022;10(2):e32875
URL: https://medinform.jmir.org/2022/2/e32875
DOI: 10.2196/32875
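
To make the shape of that workflow concrete, here is a rough sketch in plain Python. Every function below is a hypothetical stub standing in for hospital systems the paper describes only at the architecture level; the one firm point, reflected in the flow, is that a clinician reviews the summary before it reaches the record:

```python
# Sketch of the paper's triage flow. All functions are hypothetical
# stubs; only the ordering of the steps comes from the article.
from dataclasses import dataclass

@dataclass
class TriageRecord:
    patient_id: str
    transcript: str
    summary: str
    approved: bool = False

def chat_with_patient(patient_id: str) -> str:
    # Stub: a GPT-3-backed chatbot would gather symptoms conversationally.
    return "Patient reports fever and a persistent dry cough for 3 days."

def summarize_for_triage(transcript: str) -> str:
    # Stub: the model condenses the dialogue into a triage note.
    return f"Triage note (unreviewed): {transcript}"

def clinician_review(record: TriageRecord) -> bool:
    # Stub: a human clinician must approve every model-generated note.
    return True

def store_in_ehr(record: TriageRecord) -> None:
    # Stub: write the approved note to the electronic health record.
    print(f"EHR <- {record.patient_id}: {record.summary}")

def run_triage(patient_id: str) -> TriageRecord:
    transcript = chat_with_patient(patient_id)
    record = TriageRecord(patient_id, transcript,
                          summarize_for_triage(transcript))
    record.approved = clinician_review(record)
    if record.approved:  # nothing enters the record without review
        store_in_ehr(record)
    return record

run_triage("patient-001")
```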

For me, the fundamental question is what roles algorithmic and automated systems should have in our education, healthcare, and business systems. Then there are the thorny issues around ethics, law, and copyright protection. We are currently unprepared to answer questions like: Who owns the rights to the generated output of an AI? The legal complications around intellectual property come to the fore in text-based generative AI as well: if the text a model was trained on is copyrighted, does the generated text violate those rights? Our regulatory bodies are nowhere near ready to address such questions, and until they are, caution should be the watchword when experimenting with these AI platforms.

Recent weeks have offered a glimpse into the current state of AI capabilities, which are both fascinating and worrying. Regulators must adjust their thinking to address coming developments, and the industry must collaborate with policymakers as we navigate the next AI platform evolution. For now, platforms like ChatGPT are enjoyable to play around with but not ready for primetime adoption in any critical business applications.

4 thoughts on “Take A Deep Breath – ChatGPT Is Not The AI Apocalypse”

  1. Nice overview of this important topic, Henry. As with any new innovation there seems to be hysteria initially over its possible “taking over the world” and then things simmer down and take a more sane approach (see crypto-currency for example). Thanks for keeping us all up to date with all this new technology and especially your take on it.

    • Thanks for reading and commenting on the post Tom. If we use the Gartner Hype Cycle framework I guess you could say that ChatGPT is moving up the slope to the “peak of inflated expectations” stage. It will be interesting to see how this evolves.
