Training Healthcare AI for Patient Care, Not Diagnosis, with Munjal Shah

Munjal Shah, the founder of medical AI startup Hippocratic AI, aims to leverage large language models (LLMs) to provide crucial but nondiagnostic healthcare services. With the recent explosion in LLM capabilities shown by models like ChatGPT, researchers and entrepreneurs have envisioned new use cases. Shah sees an opportunity to address staffing shortages in chronic care and patient navigation. However, he stresses that Hippocratic AI won’t make high-risk clinical judgments, sticking to a “first, do no harm” philosophy.

Much of healthcare’s strain comes not from intricate medical judgments but from more essential elements of patient interaction and care coordination. LLMs can absorb massive amounts of information and then communicate it conversationally. As Shah puts it, they have “infinite time” for the patient’s story, unlike the average doctor who cuts patients off within seconds. At scale, language AI could provide round-the-clock personal chronic care or perfectly timed reminders that would be impossible for human staff alone: what Shah calls “bedside manner with a capital B.”

Surprisingly, emotionless AI already shows more empathy than doctors in specific exchanges. A recent JAMA study found that evaluators preferred AI responses over physicians’ for both quality and empathy 79% of the time; AI responses were rated empathetic 45% of the time, versus roughly 5% for physicians. The gap highlights providers’ emotional burnout and time limitations. LLMs lack emotions but can mimic empathy, and properly trained, their tirelessness and consistency could enhance care communication.

For safe real-world use, though, healthcare AI needs particular constraints. General LLMs like ChatGPT compile broad internet knowledge and are prone to mistakes. Hippocratic focuses its training on peer-reviewed medical evidence, not the “common crawl,” and uses feedback from healthcare staff on its responses for reinforcement learning. So far, it has outscored competing models on 114 role-based medical exams and bedside manner benchmarks.

Ongoing scrutiny by human experts provides indispensable guidance, but Shah ultimately sees AI assistance as necessary to manage swelling patient demand. He stresses improving care through additional patient engagement, not replacing human clinicians. With responsible constraints and testing, tools like Hippocratic hint at AI’s potential to ease medicine’s human limitations. The technology remains early, but momentum around safe applications for patient communication and care at scale is expected to grow.
