Words in medicine carry weight—sometimes too much. For patients reading their electronic health records (EHRs), terms like “non-compliant,” “drug-seeking,” or “claims pain” don’t just sting; they can erode trust and, research shows, even change the care they receive.
A new study from Columbia University School of Nursing, published in Nursing Outlook, suggests that AI may be able to help fix the problem. The research team, led by postdoctoral scholar Zhihong Zhang, PhD, explored whether ChatGPT could spot and rewrite stigmatizing language in clinical notes.
Bias in the Chart, Bias in Care
Medical records aren’t just paperwork—they shape how providers view patients. Prior studies have linked stigmatizing terms to less aggressive pain management, diagnostic errors, and lower patient satisfaction. With the 21st Century Cures Act now giving patients full access to their records, the impact of a single word has never been greater.
“Addressing this isn’t just about better care—it’s about preserving the patient-provider relationship,” Zhang said.
What the Study Found
The Columbia team analyzed 140 notes from two urban hospitals and discovered, on average, two biased terms per chart. ChatGPT was tested in two roles: detecting stigmatizing language and rewriting it.
- Rewriting: The AI scored nearly perfectly (2.7–3.0 out of 3), reliably replacing biased words with respectful alternatives while preserving medical accuracy.
- Detection: Less impressive—ChatGPT caught only about half of all problematic language, though it did better with certain categories like “doubt markers” (e.g., “patient claims”).
In short: AI may not always notice the bias, but when it does, it’s adept at rewriting it.
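For readers curious what such a two-step workflow might look like in practice, here is a minimal sketch using the OpenAI Python SDK. The prompts, model name, and example note are illustrative assumptions, not the study's published protocol, which the paper describes at a higher level.

```python
# Minimal sketch (not the study's actual prompts or pipeline): asking a chat
# model to first flag potentially stigmatizing phrases in a note, then rewrite
# the note in neutral language. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; model name and prompt wording are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

note = "Patient claims 10/10 pain; non-compliant with meds."

# Step 1: detection -- ask the model to list possibly stigmatizing phrases.
detect = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the study simply reports using ChatGPT
    messages=[
        {"role": "system",
         "content": "You review clinical notes for stigmatizing language "
                    "(e.g., doubt markers, blame, negative descriptors). "
                    "List any such phrases, one per line."},
        {"role": "user", "content": note},
    ],
)
flagged = detect.choices[0].message.content

# Step 2: rewriting -- ask for a neutral version that preserves clinical facts.
rewrite = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Rewrite the clinical note, replacing stigmatizing phrases "
                    "with neutral, person-first language. Do not change any "
                    "clinical facts."},
        {"role": "user", "content": note},
    ],
)

print("Flagged phrases:\n", flagged)
print("Rewritten note:\n", rewrite.choices[0].message.content)
```

A production tool would of course need clinician review, audit logging, and validation against the EHR record rather than a single free-text prompt; the sketch only illustrates the detect-then-rewrite split the researchers evaluated.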
Toward Equitable Documentation
The study is early-stage, but the implications are timely. As hospitals and health systems double down on health equity initiatives, tools like ChatGPT could eventually be integrated into EHR systems. The vision: real-time flagging of stigmatizing phrases, with AI offering clinicians more neutral, precise alternatives.
That could help reduce disparities in care that disproportionately affect marginalized groups—and avoid the awkwardness of patients stumbling across dismissive language in their records.
Challenges Ahead
Of course, embedding AI into medical documentation isn’t as simple as installing spellcheck. Clinicians will want safeguards to ensure accuracy, explainability, and accountability. And any system that rewrites language must tread carefully to preserve both clinical intent and legal clarity.
Still, the research points to a future where AI doesn’t just help diagnose disease but helps cure the health care system of its own biases.