In total, thirteen patients responded to the invitation to participate, of whom one was excluded because she was still undergoing treatment. Twelve patients were interviewed, with ages ranging between 51 and 70 years (mean 61.6 ± 6.7). The time since participants were first diagnosed with breast cancer averaged 6.5 ± 3.3 years, with a median of 6 years. TNM stage ranged from 0 to IV. The majority of interviewed patients (n=11) had undergone surgery, and some had received additional therapies such as hormone therapy. Ten participants had a high education level (college or university), and all were employed in sectors varying from justice to healthcare (see Table 1 for details). From the participants' elaborations on the use of AI in multiple scenarios, three major themes emerged: (i) familiarity with AI, (ii) preferences for utilizing AI, and (iii) comparison of AI and MD. Here, we present the participants' characteristics, followed by a detailed discussion of each theme.
Table 1. Summary of participants’ characteristics (n = 12).
| Characteristics | Descriptives, n (%) | Mean ± SD |
| --- | --- | --- |
| Age (years) | | 61.6 ± 6.7 |
| Time since primary diagnosis (years) | | 6.5 ± 3.3 |
| Tumor (TNM) stage | | |
| 0 | 3 (25) | |
| I | 2 (17) | |
| II | 3 (25) | |
| III | 2 (17) | |
| IV | 1 (8) | |
| Unknown | 1 (8) | |
| Treatment receivedᵃ | | |
| Chemotherapy | 7 (58) | |
| Hormone therapy | 10 (83) | |
| Immunotherapy | 2 (17) | |
| Radiotherapy | 9 (75) | |
| Surgery | 11 (92) | |
| Educational level | | |
| Medium vocational training (MBO) | 2 (17) | |
| Higher vocational education (HBO) | 6 (50) | |
| Academic education (University) | 1 (8) | |
| Ph.D. | 3 (25) | |
| Employment sectorᵃ | | |
| Commerce and service | 3 (25) | |
| Education, culture and science | 2 (17) | |
| Environment and agriculture | 1 (8) | |
| Healthcare and wellbeing | 7 (58) | |
| Justice, security and public administration | 3 (25) | |
| Media and communication | 1 (8) | |

ᵃ The total number exceeds the participant count, as categories are not exclusive.
Domain 1: Familiarity with AI
The first theme, familiarity with AI, covers the participants' knowledge of and prior experience with AI systems in different contexts. Although most participants had some degree of awareness of AI, a few had never heard of it or were unsure what it actually is. Of all participant characteristics, demographics (educational level and age) and former exposure to AI emerged as the main determining factors.
Demographics
Participants who were either highly educated (Ph.D., n=3) or younger (n=5) provided more detailed descriptions of AI. They characterized it as a prediction model that relies on analyzing data from past patients, or as a tool that can identify connections in data that humans cannot easily discern.
“I think that based on a lot of data that you have from surveys or from people, from patients in this case, you might be able to make certain predictions with it in the future.” [A12]
“To me it's the closest thing to how algorithms work. By computer, any kind of computer records human behavior, keeps track of what patterns. And the computer is programmed to develop new patterns from that. The first part just is very good programming. And the second part I would call artificial intelligence. It no longer is inspired by people, but it is the computer which programs patterns.”
Former exposure to AI
Participants’ familiarity with AI in healthcare was influenced by their jobs. For instance, a 70-year-old participant with a background in nursing defined AI as a computer algorithm and provided one use case (in radiology).
“I do read from time to time the radiology, for example, they are already using artificial intelligence to assess photos [radiology report/medical images].” [A1]
Half of the participants (n=6) viewed AI as statistical information that is generalized to fit average populations. On the other hand, only a few patients (n=3) could provide examples of personalized outcomes with AI in areas such as social media, suggesting that personalized AI applications are not yet widely understood or experienced.
“What I found annoying is that everything is based on statistics and averages. And no human being is average. So I was also like yeah. What good is all that information anyway? If you're just your own person. And, how reliable is it …” [A12]
“Yeah. That's exactly the downside of artificial intelligence. That it does give advice, but it does not address your personal needs at that moment and your situation. They are all averages.”
Domain 2: Preferences for utilizing AI
The second theme, preferences for utilizing AI, relates to the participants' attitudes and opinions towards the use of AI in various scenarios. In exploring these desirable scenarios, participants expressed greater enthusiasm for using AI applications in scenarios with low impact like predicting side effects (n=10) over situations that could severely impact their lives, such as predicting survival after treatment (n=5) or recurrence (n=4) (see Figure 1).
Low-risk scenarios
In our study, low-risk scenarios consisted of treatment recommendations and predicting side effects after treatment. The majority of participants (n=10) were willing to use AI for predicting the risk of side effects, with some citing prior experience with unexpected complications during treatment as the reason for their willingness.
“I'd love to. I do hope it contains a lot of information, because if you mention side effects. Unfortunately I've had so many side effects from everything that every doctor says: you've had bad luck every time! … I hope artificial intelligence gives a complete answer.” [A11]
“Yeah, what I mentioned of I have so many side effects, and still suffer from those radiation treatments. And that would be really nice. Then you hope that the app has an answer to that, that it's set right. Yes, then you don't have to worry for so long.” [A3]
Out of 12 patients, seven expressed interest in using AI software for treatment recommendations, four were uncertain, and one showed no interest. Further analysis showed that participants were more inclined to use AI as an additional source of information rather than relying on it alone for decision-making.
“On the one hand, I think it's very positive. Because then you get neutral advice at least. On the other hand, I would still think that I would go for it, but I would also want to talk to a specialist.” [A7]
“What does it add to the oncologist? Look, if he's as good as the oncologist I'd rather have a real human in front of me because it's a pretty intense disease with intense risks. And then when I think back to what you mentioned as examples, just of applications of artificial intelligence, the chat box and ads and stuff. That can be very useful, but you can also end up in a kind of bubble or keep getting the wrong answers because a machine doesn't really understand what you want to ask. And in that case I'd rather have a real person in front of me.” [A12]
High-risk scenarios
Of the 12 interviewees, five (42%) expressed interest in utilizing AI programs for survival prediction, whereas four (33%) showed interest in using AI for recurrence prediction. Those who expressed interest argued that healthcare providers use the same programs and that it is vital for patients to know their odds. However, they also expressed a desire to discuss the results with their doctor.
“What with or without treatment. What if I just this treatment? What if I...? What are the chances of survival? Yes, I find that quite crucial to be able to make a decision.” [A7]
“I would be very excited… Because that's the question that's always on your mind and they can't say anything about that. But just imagine there was a program that could calculate it properly. I would like that.” [A7]
“I myself would find it very pleasant to have thought about the various possibilities in advance. And I don't see many dangers in that.” [A11]
Patients who were negative or doubtful gave a variety of reasons. Because the logic behind the predictions is not clear, some participants were skeptical about the reliability of the AI program (n=2), while others believed it is hard to predict survival or recurrence odds (n=6). Three out of twelve were reluctant to use AI programs because doing so requires digital skills or medical knowledge. In both scenarios, three patients stated that they do not want to know the odds of their treatment outcomes, regardless of whether AI is involved.
“It depends on how such a program functions and on what basis. Yes, and if it's based on those statistics, then I don't trust it. Whether it's favorable or unfavorable doesn't matter that much.” [A12]
“I don't need to know, I don't value it… and it has to do with the unpredictable nature of cancer. That has nothing to do with the program.” [A9]
Figure 2 displays the variations in participants' attitudes toward the use of AI in medical decision-making across the different scenarios. Of the 12 participants, only five maintained a consistent attitude in both scenarios, while the remaining seven shifted their opinions. Notably, these shifts occurred even though participants employed similar reasoning. The motivations behind them remain unclear, but personal experiences may have played a role. For instance, one patient who had previously experienced a recurrence of breast cancer was negative about survival prediction but positive about recurrence prediction. These findings highlight the dynamic nature of patient attitudes toward AI-based decision-making and suggest that individual experiences may influence their perspectives.
“[Survival rate] But when I first got breast cancer, a year and a half ago, the prognosis was 95 percent will get better from this, that all looked very good. And three months after the oncologist had said that, there was suddenly a metastasis. And I have not even asked about the prognosis… So I don't need to know, I guess. You know, I don't value it anymore because the first time was disappointing.” [A9]
“[For recurrence] That's information you really want to know… I think that would give you peace of mind if you could…” [A9]
Domain 3: Comparison of AI and MD
The third theme focuses specifically on participants' preferences for consulting AI, an MD, or a combination of both, and how these preferences are influenced by the (i) perceived performance, (ii) reliability, (iii) accountability, (iv) disagreement between the AI and MD, (v) interaction, and (vi) collaboration of both.
Perceived performance of the AI model
Perceived performance relates to how effective and successful participants believe the AI and MD to be and how this perception can influence their preferences. We found that patient attitudes toward the performance of AI and MDs differed based on their educational level.
Highly educated participants were more likely than those with lower educational levels to have a nuanced understanding of the differences between the two systems. They were aware that AI utilizes data and statistical analyses to generate predictions, while MDs rely on established protocols and previous experience to make treatment recommendations. However, this heightened level of education did not necessarily translate into greater acceptance of AI within this group.
“The program has a lot more information and a lot more data. And the doctor, as far as I understand it, works mainly according to the protocol. So of course he looks at the specific situation, but he just has a protocol next to it, which steps to take. At least I heard the word protocol quite often during that period. And that means that if the protocol changes, the treatment will also change.” [A10]
Reliability of the AI model
Reliability concerns the consistency and dependability of both AI and MD, particularly with respect to mistakes, and how this can influence preferences. The majority of participants (n=10) believed that MDs are more likely to make mistakes than AI programs. Participants frequently attributed MD mistakes to human factors such as fatigue. While AI is often perceived as more accurate and reliable, all participants acknowledged that it is not infallible, citing reasons such as incorrect input data, system development based on erroneous information, and technical errors in the AI algorithm.
“A doctor's a human being. Nothing human is alien to us. And I think doctors are more likely to make mistakes … because of personal circumstances or assumptions or experiences that a doctor has that makes them think” [A7]
“Well, I would say that basically if the computer program works well that it shouldn't make mistakes. But again, it's only as good as the people entering the information. And if there's a malicious person who suddenly enters all kinds of data into that algorithm that's not correct, then of course you do have a very wrong program.” [A1]
Accountability
Accountability refers to how responsible participants perceive the AI and MD to be for their outcomes. Participants generally believed that MDs are more accountable for their decisions and actions than AI, but accountability for AI is complex and multifaceted. Concerns were raised about the lack of transparency in AI decision-making processes and the difficulty in holding AI systems accountable for errors or biases.
“Can't really pinpoint anyone I think. Was just your trust in that program at that time. And who is responsible is so difficult. Yes, it's not the person who introduced it, it's not the program itself. Wherever work is done, mistakes are made. I find it difficult to really point the finger at someone. It is very, very difficult.” [A4]
Disagreement between the AI and MD
While both AI and MD can provide advice or recommendations based on their analysis of medical data and information, there may be instances where their recommendations differ. Participants expressed that decisional conflict arises when faced with conflicting advice from the AI program and their MD. They stated that in such instances, they may seek a second opinion or engage in a discussion with their MD to better comprehend the reasoning behind the conflicting opinions.
“Yeah, but why did you develop that app [AI program]? Yeah. We're not gonna do that. Because then I wouldn't trust the app or the doctor. That sounds strange but then I might still want a second opinion. No, then I do not know in whom I no longer have confidence. That wouldn't be possible. No. Unless the doctor comes up with such good arguments.” [A3]
Interaction between the MD and AI
The interaction between a doctor and patient involves communication and information exchange related to the patient's medical condition, treatment options, and healthcare decisions. Most patients (n=7) would prefer to have MDs involved in their medical decision-making process, even when an AI program is available and may outperform the MD. These patients expressed that they placed a high value on human connection, empathy, and the ability to ask questions and receive personalized explanations from an MD.
“With all the human failings involved. Why? Because at least in my experience I can use my own brain and ask 'yes but' questions. And if he says, on the basis of that conversation, 'Yes, but that would be my advice.' Then it has been an interaction. And with the computer you can't do interaction. At the most you can go back to the beginning of the program and fill in some other data.” [A11]
Collaboration
Collaboration between AI and MDs in healthcare involves using AI systems to assist MDs with various aspects of medical practice, such as treatment planning. This approach draws on the unique strengths of each, combining the precision and speed of AI algorithms with the human expertise and judgment of MDs to improve patient care. All participants agreed that an MD using AI is the optimal approach for patients, as the two can complement each other's strengths and weaknesses. In this scenario, patients believed that even if a mistake were made, everyone had given their best effort and no one would be to blame.
“I think that's going to be the future when I see it like this…. That would put my mind at ease. Then I think, well, all the information that's been put into it brings me this. And the doctor has looked at it and he says well, I can't think of any other ways, this is the right way for you… it's the best of both worlds.” [A8]