
AI Can Outperform Doctors. So Why Don’t Patients Trust It?


Our recent research indicates that patients are reluctant to use health care provided by medical artificial intelligence even when it outperforms human doctors. Why? Because patients believe that their medical needs are unique and cannot be adequately addressed by algorithms. To realize the many advantages and cost savings that medical AI promises, care providers must find ways to overcome these misgivings.

Medical artificial intelligence (AI) can perform with expert-level accuracy and deliver cost-effective care at scale. IBM’s Watson diagnoses heart disease better than cardiologists do. Chatbots dispense medical advice for the United Kingdom’s National Health Service in lieu of nurses. Smartphone apps now detect skin cancer with expert accuracy. Algorithms identify eye diseases just as well as specialized physicians. Some forecast that medical AI will pervade 90% of hospitals and replace as much as 80% of what doctors currently do. But for that to come about, the health care system will have to overcome patients’ distrust of AI.

We explored patients’ receptivity to medical AI in a series of experiments conducted with our colleague Andrea Bonezzi of New York University. The results, reported in a paper forthcoming in the Journal of Consumer Research, showed a strong reluctance across procedures ranging from a skin cancer screening to pacemaker implant surgery. We found that when health care was provided by AI rather than by a human care provider, patients were less likely to utilize the service and wanted to pay less for it. They also preferred having a human provider perform the service even if that meant there would be a greater risk of an inaccurate diagnosis or a surgical complication.

The reason, we found, is not the belief that AI provides inferior care. Nor is it that patients think AI is more costly, less convenient, or less informative. Rather, resistance to medical AI seems to stem from a belief that AI does not take into account one's idiosyncratic characteristics and circumstances. People view themselves as unique, and we find that this sense of uniqueness extends to their health. Other people experience a cold; "my" cold, however, is a unique illness that afflicts "me" in a distinct way. By contrast, people see medical care delivered by AI as inflexible and standardized: suited to treating an average patient but inadequate to account for the unique circumstances of an individual.

Consider the results of a study we conducted. We offered more than 200 business school students at Boston University and at New York University the opportunity to take a free assessment that would provide them with a diagnosis of their stress level and a recommended course of action to help manage it. The results: 40% signed up when they were told that a doctor was to perform the diagnosis, but only 26% signed up when a computer was to perform the diagnosis. (In both experimental conditions, participants were told that the service was free and the provider made the correct diagnosis and recommendation in 82% to 85% of previous cases.)

In another study, we surveyed over 700 Americans from an online panel to test whether patients would choose AI providers when AI's performance was clearly superior to that of human providers. We asked participants to review performance information for two health care providers (provider X and provider Y): their accuracy in diagnosing skin cancer, their accuracy in making triage decisions for medical emergencies, or the rate of complications in the pacemaker implant surgeries they had previously performed.

