Johns Hopkins Nursing Influencers | My Nurse Influencers

AI is making nurses second-guess their expertise—that's dangerous

By: Andre Nogueira, PhD, and Kelly Gleason, PhD, RN

Silicon Valley has discovered healthcare—again. This time, the promise is artificial intelligence (AI) that can predict when patients will deteriorate before stressed-out nurses notice the warning signs. Venture capital is pouring billions into health tech startups, major tech companies are launching AI-powered clinical tools, and OpenAI just launched ChatGPT Health, all with the arrogant confidence of those who've never worked a double shift on an understaffed hospital floor.

While tech companies are racing to automate decision-making about patients’ care, nurses are paying the price. Picture this: You’re a nurse with 15 years of experience. An algorithm flags your stable postoperative patient as high-risk for worsening of their condition, citing subtle changes in vital signs. But you know this patient. You know she’s anxious about going home tomorrow, and that her heart rate runs slightly elevated at baseline. You can see she’s actually improving—better color, easier breathing, more alert than yesterday. Everything you’ve learned tells you she’s fine. But the screen insists something’s wrong. What do you trust—your eyes and experience or the machine?

This isn't a hypothetical. It's a common, everyday dilemma, described to me by a medical-surgical nurse who works with one of my doctoral students.

Cases like this are happening in hospitals across the United States. McKinsey surveyed 7,200 nurses and found that 61% don't trust what AI suggests they should do. The nation's largest nurses' union found something even more alarming after surveying 2,300 nurses: Among those working where AI heavily directs care, 69% say the algorithm routinely contradicts what they know is right. Worse still, most can't override the algorithm. The machine gets the final say.

What makes this especially absurd is that nurses are the ones generating the data these algorithms run on. Nurses document between 600 and 800 data points during a single 12-hour shift—roughly one data point every minute. This work consumes 25% to 40% of their time on the job. Nurses constitute the largest workforce group in hospitals, making up about 30% of all hospital employees. They’re both the primary authors of the clinical record and its most experienced interpreters. Yet the AI tools built on their documentation are increasingly positioned as more reliable than the judgment of the people who created that record in the first place.

And those data have limits that algorithms don't understand. Pulse oximeters, a standard source of the vital sign data feeding AI early-warning systems, are known to be less accurate in patients with darker skin pigmentation. Research has found that nurses are 48% more likely to document communication with providers in the 4 hours before an episode of hidden hypoxemia, when blood oxygen drops dangerously low but the readings still appear normal. The failure is invisible in the record precisely when it matters most. When AI treats the flowsheet as ground truth, it inherits not just the data but every one of its blind spots.

This isn’t about technology that fails outright—that’s easy to discard. This is about tech companies building what’s technically possible, not what nurses and patients actually need. When your AI-driven smart thermostat fails, you’re cold for a day. When healthcare AI fails, people die. Lawsuits are already piling up against companies whose algorithms denied care that patients desperately needed.

But here's what should worry us most: Current products are fast becoming sophisticated enough to seem authoritative while being wrong just often enough to erode professional confidence. Nurses—the most trusted profession in America, the people who spend more time with patients than anyone else—are being forced to rely on machines that make them doubt themselves.

When AI gets it wrong, nurses second-guess their expertise

The problem multiplies when nurses work in patients’ homes, where context is everything and algorithms understand nothing. Maria, a colleague and home health nurse, visits her patient three times a week to manage his heart failure. She knows his apartment is on the third floor with no elevator, that his daughter brings groceries on Saturdays, and that he sometimes skips his afternoon medication because it makes him need to use the bathroom during his favorite TV show. An AI-powered remote monitoring system tracks his weight, blood pressure, and oxygen levels, generating alerts that appear on her tablet.

One Tuesday morning, the system flagged the patient as high-risk: weight up 3 pounds, blood pressure elevated. Maria rushed over, expecting a crisis. She found him laughing with his grandson, who’d visited over the weekend. They’d celebrated with his daughter’s homemade dumplings—his first real meal in weeks. His spirits were better than they’d been in months. The weight gain was temporary. His blood pressure was up because he was happy and excited.

“The machine saw numbers,” Maria said. “I saw a person getting better.” But after three false alarms in a month, she admits she’s starting to delay her response to alerts, trying to assess whether they’re “real.” The system designed to help her prioritize has instead trained her to ignore it—exactly the opposite of its intended effect.

Tech companies often dismiss stories like these as "user resistance to innovation." But blaming users makes them feel foolish and does nothing to resolve their legitimate complaints.

There’s a better way

The solution isn’t to abandon AI in nursing—the potential is too significant. Rather, it’s to fundamentally change how we develop and deploy these tools.

This means spending less time in conference rooms with hospital administrators and instead investing in what designers call “behavioral prototypes”—crude, low-fidelity versions co-created with nurses and tested in actual care settings to discover how nurses will really use them. It means watching what happens when an alert goes off during medication rounds, understanding why a home health nurse ignores the tablet in favor of her handwritten notes, and noticing when a nursing home nurse develops workarounds that defeat the technology’s purpose.

It also means accepting an uncomfortable truth: The most valuable AI in healthcare might not be the systems that replace nursing judgment, but the ones that amplify it. Not AI that communicates falls after they happen, but AI that gives nurses the time and information they need to prevent falls themselves. Not systems that generate care plans, but tools that help nurses spend less time doing paperwork and more time delivering the care they’re trained to provide.

At Columbia University, Sarah Rossetti has developed one way to do that. Together with colleagues, she created CONCERN, an early warning system built on studying how nurses actually work. It reduced deaths and shortened hospital stays, amplifying rather than replacing nursing expertise.

There’s a reason nurses remain the most trusted profession in America, year after year. That trust is built not on pattern recognition alone, but on presence, accountability, and deep contextual understanding of patients as people—not data points.

The AI tools that will succeed are those built through genuine partnership with nurses from day one, watching them work in hospitals at 3 AM, in patients’ homes where the WiFi doesn’t reach, and in nursing facilities where one nurse covers 60 residents.

When AI makes nurses feel smarter rather than inadequate, and when it recognizes that the relationship between a nurse and patient contains information no algorithm can capture, that's when we'll know we've moved beyond technological hubris toward genuine innovation that empowers nurses to deliver the best possible care to patients.

 


The authors work at Johns Hopkins School of Nursing. Andre Nogueira is an assistant professor and a systems designer and researcher focused on healthcare infrastructure transformation. Kelly Gleason is an associate professor and a nurse scientist focused on integrating patient-reported information with electronic medical record data to improve diagnostic processes.

*Online Bonus Content: This has not been peer reviewed. The views and opinions expressed by My Nurse Influencer contributors are those of the author and do not necessarily reflect the opinions or recommendations of the American Nurses Association, the Editorial Advisory Board members, or the Publisher, Editors and staff of American Nurse Journal.


