A total of 1017 articles were identified through database searching, of which 431 were initially excluded as irrelevant (Fig. 1). Of the remaining 586, duplicates were removed (n = 219) and the full texts of the remaining sample (n = 367) were then reviewed.
Of these 367 articles, 56 were removed because they did not address the use of AI for screening or diagnosis. A further 63 mentioned AI for screening or diagnosis in passing but did not discuss it substantively; typically, these articles discussed AI more broadly and mentioned a screening technology only as an example. Nineteen articles were duplicates that were not identified initially because they had different titles or source names.
Finally, 43 articles were initially coded but later removed from the data after careful discussion since they were word-for-word reports on research abstracts. Typically, they were in sources targeted toward medical audiences, and only discussed study results and rarely their implications for use. Thus, they did not contain anything that could be analysed within a framing typology.
Of the final sample (n = 136), the majority were articles from various news sources (78.7%; n = 107). The remaining 21.3% comprised press releases (n = 18), blog posts (n = 9), and magazine articles (n = 2). Across the days of the week, Wednesday had the highest count of articles (n = 27; 19.9%), although articles were distributed relatively evenly across Monday through Friday, with fewer published on Saturdays (n = 6) and Sundays (n = 12).
Table 2
Health conditions addressed in each article
HEALTH CONDITION | COUNT | % TOTAL |
--- | --- | --- |
Cancers (Multiple) | 16 | 11.8% |
Cardiovascular Disease | 9 | 6.6% |
Colorectal Cancer | 8 | 5.9% |
Breast Cancer | 7 | 5.1% |
Mental Health | 7 | 5.1% |
Alzheimer's Disease | 6 | 4.4% |
Lung Cancer | 6 | 4.4% |
Diabetic Retinopathy | 5 | 3.7% |
Kidney Disease | 5 | 3.7% |
Prostate Cancer | 4 | 2.9% |
Eye Conditions | 3 | 2.2% |
Bowel Cancer | 2 | 1.5% |
COVID-19 | 2 | 1.5% |
Intracranial Haemorrhage | 2 | 1.5% |
Neonatal Conditions | 2 | 1.5% |
Suicide | 2 | 1.5% |
Various | 21 | 15.4% |
Other | 29 | 21.3% |
TOTAL | 136 | 100% |
Whilst some articles addressed multiple health issues or discussed AI in screening and diagnosis more broadly (n = 21), most of the articles addressed one specific health issue (Table 2). Most commonly this was cancer (n = 51; 37.5%), with some articles discussing multiple types of cancer (n = 16) and others addressing one specific type. Most frequently these were colorectal cancer (n = 8), breast cancer (n = 7), and lung cancer (n = 6).
The benefits of AI in screening and diagnosis were mentioned in 135 of the 136 articles (99.3%), whilst the ethical, legal, and social implications (ELSIs) of the technologies were mentioned in only nine articles (6.6%). This generally positive perspective on AI in screening and diagnosis is reflected in mean impression and tone scores of 4.67 and 4.68 out of five, respectively.
Frame Analysis
Table 3
Tally of articles in each frame. Descriptions of frames from Nisbet (19)
Frame | Count (%) | Mean impression/tone | Nisbet Frame | Count (%) |
--- | --- | --- | --- | --- |
Frame 1 – Social Progress | 132 (97.06) | 4.77/4.75 | Social Progress | 132 (97.06) |
Frame 2 – Economic Development/Conflict and Strategy | 59 (43.38) | 4.88/4.88 | Economic Development | 59 (43.38) |
  |   |   | Conflict and Strategy | 1 (0.74) |
Frame 3 – Alternative Perspectives | 9 (6.62) | 2.44/2.55 | Morality and Ethics | 4 (2.94) |
  |   |   | Scientific and Technical Uncertainty | 5 (3.68) |
  |   |   | Pandora's Box/Frankenstein's Monster/Runaway Science | 6 (4.41) |
  |   |   | Public Accountability and Governance | 5 (3.68) |
  |   |   | Middle Way | 3 (2.21) |
This overrepresentation of positively framed articles is also clear in the frame tallies. The Social Progress frame was identified in 97.1% of articles (n = 132) and Economic Development and Competitiveness in 43.4% (n = 59). Each of the remaining frames was found in fewer than 5% of the sample (Table 3). For the purposes of this analysis, we combined some Nisbet frames because they consistently co-occurred, as shown in Table 1 and described below.
Frame 1 – Social Progress
The Social Progress frame was the dominant narrative in the majority of the articles. Broadly, this frame described the necessity of developing strategies for overcoming diseases and ailments, which represent large burdens on the health system and cause preventable death and disease.
In the social progress frame, diseases were problematized, and the authors tended to highlight a disease’s deadliness, its prevalence, or its increasing incidence within a country or worldwide:
“… build a seamless technology that helps providers more accurately detect heart disease, the leading killer in the world” [A82]
“with increasing incidence of cancer cases…” [A105]
Stories in the social progress frame typically implied that problems were caused by inefficient current practices in screening and diagnosis, which were characterised as “slow” [A21], “subjective” [A10, A27, A195, A244], “challenging” [A244] and “manual” [A5]. Some articles reinforced that these inefficient practices were overwhelming doctors, impeding their workflow, or limiting the time they could spend engaging with their patients.
With these issues laid as a foundation, the moral judgement implied in the articles in the social progress frame was that AI in screening and diagnosis was a good and important, or at least an inevitable, solution for addressing disease morbidity and mortality more effectively. In many of these articles, comment was sought from those with a stake in developing, researching, or implementing the technology. Quotes were selected which reinforced the salience of the technology and these stakeholders' protagonist status in the article's narrative.
“We are at a pivotal moment in healthcare history” [A42]
“This is no flash in the pan” [A51]
At surface level, the suggested remedy was the AI screening or diagnosis technology (or in some cases, technologies) that the article was reporting on. This was clear in the rhetoric which, in contrast to the descriptions of current screening practices, characterised AI screening and diagnosis tools with a different vocabulary: whilst current practices were slow, AI was quick; whilst current practices were subjective, AI was objective:
"A key advantage of our technology is that it does not require any additional hardware other than a piece of paper and a software app running on the smartphone." [A252]
“But having objective, AI-based metrics for detecting AP-ROP is a step in the right direction” [A159]
"We want to have some readout of what's going on in the brain that is quantitative, objective, and sensitive to subtle changes," [A45]
More broadly, these technologies were sometimes constructed as being key to a pivotal change in the healthcare system. Sometimes, the importance of quick and easy screening was described in light of a transition within health systems from treatment to prevention [A42], or it was claimed that broader screening would lead to earlier identification of issues and thus better outcomes [A103]. This positioned AI as an important development towards lifting disease burden:
“… informed and strategically directed advanced data mining, supervised machine learning, and robust analytics can be integral, and in fact necessary, for health care providers to detect and anticipate further progression in this disease” [A88; emphasis added]
Frame 2 – Economic Development/Conflict and Strategy
The Economic Development frame was the second most common of Nisbet's frames found in the articles. It overlapped conceptually with the single example of Conflict and Strategy found in the sample and, as such, the two are addressed here as one. All the articles in this frame coincided with instances of the Social Progress frame, so the arguments are not entirely distinct, with this frame tending to borrow from the strength of the Social Progress narrative. However, the Economic Development/Conflict and Strategy (ED/CS) frame tended to focus more on monetary than human costs, and on commercial ventures rather than the diversity of projects reported on in the Social Progress frame. Articles in this frame were also more positive in impression and tone than the sample average (mean impression = 4.88; mean tone = 4.88).
Problem definition and causal attribution were often indistinct from the Social Progress frame, with authors first problematizing the impact of a disease (or, in rare cases, multiple diseases) and attributing the problem to slow, subjective, or inefficient current systems. Sometimes, however, articles in the ED/CS frame additionally discussed the monetary cost that the disease represents:
“In 2019, AD and other dementias will cost the nation $290 billion. By 2050, these costs could rise as high as $1.1 trillion.” [A88]
The moral judgements made in the ED/CS frame were more economically focused than those in the Social Progress frame. These articles generally sought comments from individuals with commercial interests in the technologies being reported on and, as in the Social Progress frame, these individuals were afforded protagonist status. In the ED/CS frame, however, the worth and value of these commercial endeavours was often associated with their contribution to economic progress:
“Two Hyderabad-based start-ups … have come out with promising technological innovations in devising new platforms for delivering effective healthcare services for the public.” [A148]
[NAME REMOVED], Chief Executive Officer at [COMPANY NAME REMOVED], [says] ‘Our improved methylation-based technology has the potential to address gaps that exist with today’s screening options … Based on these positive data, we plan to advance development of our test toward commercialization.’
The vernacular used in these articles was also often very commercial. Algorithms were frequently described as products developed to “disrupt” [A74] a “market” [A83, A137]:
“We believe that [COMPANY NAME REMOVED]’s ability to apply cutting-edge principles of data science and patient-centered design holds great potential in disrupting the way these cardiovascular patients are monitored, diagnosed, and treated” [A74; emphasis added]
The single instance of the Conflict and Strategy perspective was an extension of these values into venture capitalism, with the article describing the company responsible for developing the algorithm as aiming to become “one of the top radiogenomics networks in the United States” [A68].
Implicit in this moral assessment was the argument that capitalist ventures such as these were important for social progress. As such, the suggested remedy in these articles was again very homogeneous, with articles tending to present a single company's technology as the key to reducing the economic costs associated with a disease. Technologies thus tended to be represented as economic solutions to largely economic problems.
“By offering a method to track progression using only a mobile phone or tablet … the company aims to stem the cost of monitoring and screening for Alzheimer’s and related dementias in an aging population.” [A18; emphasis added]
Frame 3 – Alternative Perspectives
Each of the Morality, Pandora's Box, Scientific Uncertainty, Middle Way, and Governance frames from the Nisbet typology was present in some articles. However, they were indistinct from one another, as they tended to appear together in articles that adopted a more neutral stance than the sample average (mean impression 2.44; tone 2.45). As such, we have dubbed the conglomeration of these frames ‘Alternative Perspectives’. Nine articles in total fit the Alternative Perspectives frame, and generally more than one of the five Nisbet frames comprising it was represented in each article (median 2; max 5; mean 2.27). The Alternative Perspectives frame overlapped entirely with the articles that discussed ELSIs; that is, the nine articles coded into this frame are the same nine that discuss the ELSIs of healthcare AI. Despite being relatively heterogeneous among themselves, the articles which fell into Frame 3 were distinct in content and tone from the rest of the sample.
Five of the nine articles also contained the Social Progress frame, so in many of these articles the Social Progress narrative was present and, in some cases, dominant. As such, problem definition, as in the other articles in this sample, often involved the problematization of diseases and their impact. However, AI tools were also problematized in these articles as potentially risky. Many articles in this frame began by emphasising AI's benefits and then went on to offer a caveat:
“Of course, AI applications in sectors like healthcare can yield major social benefits. However, the potential for the mishandling or manipulation of data collected by governments and companies to enable these applications creates risks far greater than those associated with past data-privacy scandals” [A3]
As reflected in their sentiment scores, these articles did not consist entirely of negative perspectives on AI in screening and diagnosis. The authors instead presented the narrative as a balanced appraisal of AI's harms and benefits. Thus, stories implied that the issues related to AI were caused not by the AI technologies themselves but by the harmful capitalistic values of those developing AI tools (Morality), the AI field's lack of engagement with traditional medical research (Scientific Uncertainty), or the poor legislation surrounding AI that has let it develop unbridled (Governance):
“But the reason [RESEARCHER’S NAME REMOVED] hadn't heard of it is because the company hasn't shared information about the tool with researchers such as him, or with the broader medical and scientific community. Without that information, [RESEARCHER’S NAME REMOVED] said, big questions about [TECH COMPANY’S NAME REMOVED] suicide-monitoring tool are impossible to answer.” [A91]
“the values of AI designers or the purchasing administrators are not necessarily the values of the bedside clinician or patient. Those value collisions and tensions are going to be sites of significant ethical conflict” [A22]
“it’s important to remain cautious of these kind of claims, as AI can contain faults based on how it’s trained and designed” [A143]
The moral judgement made in these articles was that a more careful approach was needed, to harness the important social developments associated with AI but to simultaneously implement more controls so the issues and value conflicts were better managed. Often, in contrast to the other articles in this sample, these authors would seek out field experts who were not involved with the development of the AI tool(s) in question, giving their argument greater credence through impartiality.
“However, [DOCTOR’S NAME REMOVED] from Hanoi Medical University Hospital expressed concern that ultrasound is not an accurate method to diagnose liver cancer” [A260]
“’We as the public are partaking in this grand experiment, but we don't know if it's useful or not,’ [RESEARCHER’S NAME REMOVED] told Business Insider … It is the latest example of a trend in Silicon Valley, where the barriers that separate tech from healthcare are crumbling” [A91]
Typically, the solution presented by these articles was for a more regulated and cautious approach to AI in screening and diagnosis. Doctors and those in AI development were implored to be ‘ethical’ [A93] and it was proposed that only ‘explainable’ [A143; A22] or ‘auditable’ [A22] algorithms should be implemented.