- Doctors could soon be getting an extra hand in the office in the form of AI assistants.
- One telehealth startup is using an AI chatbot to analyze symptoms ahead of a doctor's visit.
- But AI isn't ready to diagnose you or be held responsible for its mistakes.
- This article is part of "Big Trends in Healthcare," a series exploring the top trends shaping the future of the industry.
In a 2022 study published by the Mayo Clinic, more than half of the physicians surveyed said they were experiencing burnout. While the distress of the pandemic undoubtedly drove many of these doctors to the brink, the authors of the report, from the American Medical Association, the Mayo Clinic, the Stanford University School of Medicine, and the University of Colorado School of Medicine, concluded that the problem is not that doctors aren't resilient in the face of challenges — it's that they are drowning in hours of administrative work.
But doctors could soon be getting an extra hand in the office in the form of AI assistants.
Note-taking, chart updates, and calls to insurers can cut into the time doctors have to spend with their patients. Before the pandemic, physicians who had to spend "excessive" time catching up on record-keeping at home were nearly twice as likely to report burnout, according to a survey published in the Journal of the American Medical Informatics Association. In recent years, doctors spent twice as much time updating medical charts as they did with patients, according to one Annals of Family Medicine study.
Enter AI. These high-tech assistants can "create time for physicians out of nothing," said Ran Shaul, a cofounder and the chief product officer at the telehealth startup K Health. And time is what doctors are sorely lacking.
The American Medical Association has warned that AI isn't ready to take on diagnoses yet because it can miss signs of illness that doctors would likely catch, or even fabricate symptoms — but AI could offer the administrative support many doctors need.
Some AI companies are creating tools that could ease physician burnout
AI is already helping to balance doctors' busy schedules in several ways.
DeepScribe, an AI medical-scribe startup, can automate note-taking through a smartphone or tablet app. While transcription software is nothing new, DeepScribe's AI can recognize medical terminology and anticipate the flow of a doctor's conversation, drawing on a company-owned database of notes from more than 2 million doctor-patient interactions on which it was trained. The company has not disclosed how it obtained the initial training data but says it's HIPAA-compliant. By agreeing to DeepScribe's terms and conditions, doctors permit anonymized versions of their appointment transcripts to be used to improve the AI.
Microsoft's Nuance DAX Express administrative AI aims to do the same for virtual care at Teladoc, a popular telehealth platform. Microsoft's AI is trained to note critical information from a virtual patient visit, such as medical history, symptoms, and upcoming follow-up tests or scans. It then logs the details into Teladoc's records and billing system — though doctors will still need to validate the information.
And patients might like to see AI take on more — nearly 80% of responses evaluated in one study published in JAMA Internal Medicine rated ChatGPT's replies to medical concerns as better and more empathetic than doctors' replies.
AI could be a helpful symptom checker, making doctor visits more effective
Rather than having an AI take notes and update a medical chart during a patient appointment, the telehealth startup K Health takes a different approach: its online healthcare and mental-health platform has patients start with a chatbot that analyzes their symptoms before giving them the option to speak to a doctor.
K Health's chatbot asks a series of about 20 questions related to your medical history and current symptoms, then compares your information to anonymized clinical information from the more than 400 million doctors' notes it was trained on. That training data sets it apart from ChatGPT, which is trained on text from all over the internet. (The company says the notes are anonymized, HIPAA-compliant records that K Health licensed from the Israeli health insurer Maccabi and the Mayo Clinic, as well as records of the company's own patient visits.)
"We're careful because it's not a diagnosis," Shaul said. After a patient gets their results, the chatbot offers them the option to connect with a doctor. "Doctors have to validate what the chatbot says," and that contributes to "reinforcement training," meaning that any corrections or adjustments a doctor makes help the AI learn and improve, Shaul explained.
Taking notes about her patients used to take up about a fifth of Dr. Stephanie Foley's time, but since she began practicing at K Health, Foley said she feels like she has "superpowers" thanks to the company's AI symptom checker. By the time she interacts with a patient, their chart is already filled out with everything they discussed with the chatbot. Foley, a primary-care physician and K Health's director of performance, told Insider that "it liberates mental space as well as, obviously, time to be able to focus on our passion": treating patients.
AI isn't ready to diagnose you or be held responsible for its mistakes
While AI is proving useful for administrative tasks, it's not about to replace your doctor.
Craig Spencer, a public-health professor and emergency-medicine physician at Brown University, compares AI like ChatGPT to a jazz musician: "It's incredibly well trained and it can play that note so confidently, but sometimes it is so confidently wrong that it's unbelievable."
These moments of wild improvisation, or "hallucinations" — convincing but fabricated answers an AI produces when it lacks the correct information — are a major bug in generative AIs like ChatGPT.
Plus, AI's role in medicine is new enough that there aren't clear laws or regulations governing who would be held responsible if it misdiagnoses someone. Physicians, by contrast, can lose their licenses or be sued for malpractice for egregious errors.
Spencer suspects AI's reach into the American medical field will remain limited until we have laws on the books to determine where the buck stops when the technology goes off script.
"We live in a country where people sue McDonald's for giving them coffee that's too hot, so I don't think people in this country are going to be too happy that there's this nebulous thing making decisions for them and there's no recourse or any repercussions for when it gets it wrong," Spencer said.
"And wrong they will get it."