
Medical AI: What shapes patient trust?

March 30, 2026 By Kathryn Wighton 7 min read

Higher artificial intelligence (AI) performance was the strongest factor associated with patients’ trust in and selection of medical encounters using AI, according to a survey experiment of 3,000 US adults. Clinician presence, governance mechanisms, and disclosure of representative training data were also associated with increased respondent trust in and preference for encounters involving medical AI.

The findings come from a preregistered conjoint survey study evaluating how specific characteristics of AI-assisted medical encounters influence patient trust and choice. The study was published in JAMA Network Open.

Researchers presented participants with paired hypothetical visits involving AI-assisted diagnosis of a rash and asked respondents to select their preferred visit and rate trust in the diagnosis on a scale from 1 to 5. Across 6 repeated exercises per respondent, participants evaluated 12 hypothetical encounters, producing 36,000 observations.

The survey randomized 6 attributes of AI-assisted care: AI performance relative to physicians, the presence of a clinician during the visit, information about AI training data quality, US Food and Drug Administration (FDA) approval, certification by the Mayo Clinic, and certification by the local hospital.

AI Performance Had the Largest Influence

AI performance was the factor most strongly associated with visit selection. AI performing better than a specialist increased the probability that patients would choose that encounter by about 33%, while performance at the level of a specialist increased preference by about 25% compared with systems performing worse than a general practitioner.

AI performing at the level of a general practitioner increased the likelihood of visit selection by about 19%, an effect nearly identical to the influence of clinician oversight.

Trust ratings followed a similar pattern. When AI was described as performing better than a specialist, trust scores increased by 0.68 points on the five-point scale compared with systems performing worse than a general practitioner.

Clinician Presence Increased Acceptance

Participants preferred visits that included a clinician overseeing the AI system. The presence of a clinician increased the probability of selecting a visit by 18% compared with encounters involving AI alone.

Researchers evaluated clinician oversight as a "human in the loop," in which clinicians review, modify, or reject recommendations generated by AI systems.

In open-ended survey responses, respondents most frequently cited AI performance (about 26%) and clinician presence (about 23%) as the reasons for their preferences.

Governance and Data Transparency Also Improved Trust

Participants favored all forms of governance compared with the absence of oversight.

  • AI approved by the FDA increased visit preference by about 11%.

  • Certification by the Mayo Clinic produced a similar 11% increase.

  • Certification by a local hospital increased preference by about 8%.

The influence of local hospital certification was significantly smaller than that of federal approval or national certification.

Transparency about training data also affected preferences. AI trained on a dataset representing the US population increased visit preference by about 12% compared with systems providing no information about training data. AI trained on a disproportionately White, male, and wealthy dataset was neither favored nor disfavored compared with AI for which no training data information was provided.

Study Population and Design

Participants were English-speaking US adults recruited through an online panel between December 11, 2024, and January 1, 2025. The mean age was 48 years, and 55% were women. The sample included 13% Black, 17% Hispanic, and 62% White respondents.

The conjoint experiment used linear regression to estimate average marginal component effects, which quantify how each attribute changed the probability that respondents would select a visit.
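To illustrate how average marginal component effects (AMCEs) are estimated, the sketch below simulates a simplified conjoint dataset and recovers the effects by ordinary least squares. It is an illustration only: it uses just two of the study's six attributes, treats the effect sizes reported in this article as assumed true values, and all variable names are hypothetical, not taken from the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 36_000  # observations: 3,000 respondents x 12 hypothetical encounters

# Randomly assigned attributes, as in a conjoint design:
# performance index 0-3 (worse than GP, GP level, specialist level,
# better than specialist) and whether a clinician oversees the AI.
perf = rng.integers(0, 4, n)
clinician = rng.integers(0, 2, n)

# Assumed true effects, taken from the percentages reported above.
true_amce = np.array([0.0, 0.19, 0.25, 0.33])
p_choose = 0.3 + true_amce[perf] + 0.18 * clinician
chosen = (rng.random(n) < p_choose).astype(float)

# Linear regression of the binary choice on attribute dummies:
# because attributes are randomized, the coefficients are the AMCEs,
# i.e., the change in selection probability for each attribute level.
X = np.column_stack([
    np.ones(n),
    perf == 1, perf == 2, perf == 3,
    clinician,
]).astype(float)
coef, *_ = np.linalg.lstsq(X, chosen, rcond=None)

for name, b in zip(["baseline", "GP level", "specialist level",
                    "better than specialist", "clinician present"], coef):
    print(f"{name}: {b:+.3f}")
```

With randomized attributes, a linear probability model like this recovers each level's average effect on the probability of selecting a visit, which is how effects such as "about 33%" are read.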

Limitations

The researchers noted several limitations. The survey assumed patients were aware when AI was involved in their care and understood details such as governance structures and training data. Real-world clinical encounters may not provide this level of transparency.

The hypothetical scenario involved diagnosis of a rash, which represents a relatively moderate-risk clinical situation. Patient preferences may differ for higher-risk medical decisions or other clinical contexts.

In addition, survey responses may not reflect real-world behavior when patients face actual medical decisions.

Implications

Overall, the findings indicate that AI performance, clinician presence, representative training data, and governance mechanisms were associated with higher trust and preference for AI-assisted encounters.

“AI performance, clinician presence, disclosure of representative data, and systemic governance were associated with increased respondent trust in and preference for medical encounters with AI,” wrote lead author Ana Bracic, PhD, of the Department of Political Science at Michigan State University in East Lansing, and colleagues.

The researchers reported no conflicts of interest. The study received partial funding from the Michigan Institute for Data & AI in Society, the Michigan Institute for Clinical and Health Research, the University of Michigan’s Institute for Healthcare Policy and Innovation, the National Center for Advancing Translational Sciences, the Novo Nordisk Foundation, and the Greenwall Foundation Faculty Scholars Program.

Expert Commentary

AACE Endocrine AI invited Ana Bracic, PhD, of the Department of Political Science at Michigan State University in East Lansing, and W. Nicholson Price II, JD, PhD, of the University of Michigan Law School and the Department of Learning Health Sciences at the University of Michigan Medical School in Ann Arbor, and the Centre for Advanced Studies in Bioscience Innovation Law at the University of Copenhagen Faculty of Law in Denmark, to comment on the study. Below are their responses.

What knowledge gap does this study address, and why are its findings important?

Ana Bracic, PhD

AI is transforming health, but we don’t know enough about what matters to patients. We haven’t known before how much patients care about different things that might make sure AI works well—governance versus human clinicians in the loop versus raw performance.  This study helps us compare patient reactions to different interventions. Instead of just asking patients what they thought was important, we measured how different things actually influenced patient choice and trust in the encounter.  Surprisingly, performance mattered most to patients, in terms of choice and trust, then clinician presence, then governance. The findings are important because they help illuminate how we can help patients trust AI: Make sure it performs well, involves clinicians, and involves appropriate governance.

From a clinical perspective, what do these findings mean for physicians?

W. Nicholson Price II, JD, PhD

Patients are more likely to choose and trust an encounter involving AI if it performs well, a clinician is involved, and it’s governed (through things like FDA approval, national certification, or local hospital certification). Clinicians should know which of these things are true about the AI they’re using or considering, and should be prepared to communicate it to patients to help build trust. Clinicians should also recognize that they’ve got a role to play in making sure AI use is appropriate, but that there are limits to what clinicians can do. Nicholson Price II has an entire piece on this, Clinicians in the Loop of Medical AI.

Where should future research focus to build on these findings?

We’ve got to figure out how to strike the balance between providing the best care that's accessible to many patients and making sure it’s care they can trust. Because without trust, even the best AI systems won’t help. That will require determining what the best role for clinicians is in interacting with AI and helping patients understand how it's involved in their care.

Is there anything else you would like practicing physicians to take away from this work?

If they use AI in care—and that’s increasingly the case—it’s crucial to know how well it performs and how (and by whom) it’s been validated. It matters for care quality, and it matters to patients.


AACE Endocrine AI is published by Conexiant under a license arrangement with the American Association of Clinical Endocrinology, Inc. (AACE®). The ideas and opinions expressed in AACE Endocrine AI do not necessarily reflect those of Conexiant or AACE. For more information, see Policies.
