This study provides valuable insight into the confidence doctors place in traditional radiology diagnoses compared with those supported by artificial intelligence. In addition, it investigates the challenges and perceptions surrounding the integration of AI into the clinical environment. Key themes emerging from the findings include the relationship between diagnostic confidence and professional experience, reliance on radiologists, and mixed attitudes toward AI-assisted tools.
The most salient finding of this study is a clear relationship between years of professional practice and confidence in interpreting radiological images. Experienced practitioners, consultants, and registrars with more than ten years of experience expressed significantly higher confidence in their ability to interpret radiographs independently than less experienced medical doctors. This agrees with the generally accepted notion that clinical expertise develops gradually, through exposure to complex cases, continued learning, and the stepwise development of skills in differential diagnosis9. Experienced clinicians often possess more fully developed schemata, allowing them to recognize patterns in radiological images more readily. Such expertise is particularly important in radiology, where small changes in image appearance may greatly alter the diagnosis.
A related finding is the negative correlation between reliance on radiologists and confidence in interpreting radiographs. For example, practitioners who reported low confidence (mean confidence score of 4.5) were more reliant on radiologists (mean reliance score of 8.5). This trend suggests a compensatory effect, whereby doctors refer cases to specialists when they are uncertain, since complete self-reliance without the relevant skills can result in diagnostic failure10. By contrast, more experienced practitioners demonstrated less reliance on radiologists, which can be interpreted as reflecting their ability to make independent judgments in routine cases11. These results suggest that confidence in medical image interpretation rises with clinical experience, but they also underline the important role of the radiologist, especially for less experienced practitioners.
The challenges identified in this study regarding dependency on radiologists, including time constraints and limited availability of expertise, are not new; they are well documented in the academic literature, particularly in resource-limited settings.
35% of respondents reported delays in receiving radiological reports, indicating a critical gap between clinical need and the delivery of diagnostic output, especially in emergency situations12. According to 21% of participants, radiologists' high workload also contributes to these delays, prolonging patient waiting times and impeding quality care. Furthermore, 26% of respondents cited lack of access to radiological expertise as a serious and growing problem, particularly in underserved regions of Africa and parts of Asia. In these regions, the shortage of radiologists places serious strain on health services, and clinicians are at times forced to make diagnostic decisions without the requisite specialist support.
This challenge has been highlighted in other studies, which have called for AI tools that could narrow this gap by providing automated preliminary reads of images13. However, such tools must be weighed against the need for sufficient training and trust in AI outputs. Despite the increasing infusion of AI into clinical practice, this study found an average confidence rating of only 5.35 out of 10 for AI-assisted radiology interpretation. This is consistent with the literature reviewed during the conceptualization of this research, which indicates that AI, although promising, remains a tool approached with caution by medical professionals14. The interquartile range of confidence levels (2.5 to 7.5) indicates substantial variability in how comfortable practitioners feel using AI in their diagnostic process. This hesitancy can be attributed to several factors, including insufficient training on AI tools (cited by 16.9% of participants) and a lack of trust in AI-generated outputs (13%).
A major barrier to the integration of artificial intelligence in healthcare is prevailing skepticism about its diagnostic competence. Although AI has demonstrated the ability to match or even outperform human performance in specific radiological applications, significant concerns persist over its reliability in complex settings requiring fine-grained evaluative judgments15. Moreover, technical challenges and integration difficulties, cited by 8.9% and 6.8% of participants respectively, continue to hinder the seamless incorporation of AI into current clinical workflows. This emphasizes the need to improve the robustness of AI systems and to validate their compatibility with existing healthcare infrastructure.
The strong preference reported in this study for conventional radiologist-led diagnoses over AI-assisted methods (66.7% vs. 13.3%) underlines the continuing trust gap between AI and human practitioners. This finding is consistent with previous studies, which have shown that doctors generally place higher confidence in human radiologists for interpreting complex cases16. Many practitioners feel that the interpretative skills of experienced radiologists cannot yet be fully replicated by AI. Moreover, 74% of participants preferred conventional techniques for managing complex cases, indicating that although artificial intelligence may be of great value in routine diagnostic work, it has not yet matched human expertise in difficult diagnostic situations.
This is further reflected in respondents' views on the future of AI in care provision. Only 36.7% of participants believed that artificial intelligence will eventually surpass traditional radiological diagnostics in accuracy and reliability. This cautious view reflects broad concerns about the limitations of current AI technologies, including problems of explainability, algorithmic bias, and the generalizability of AI models across heterogeneous patient groups17.
The fact that a substantial proportion of participants (38.5%) took a neutral stance on recommending AI tools to their colleagues points to persistent barriers to wide AI adoption in healthcare. Despite the challenges and skepticism outlined, this study is optimistic about the future of AI in radiology. Participants suggested several ways AI tools could be improved: increasing their accuracy, integrating them better into existing systems, and improving user training. These suggestions are supported by other research, which proposes continuous education on artificial intelligence systems as a way to build trust among health professionals18. Increasing the transparency and explainability of AI algorithms would further ease concerns about their use in clinical decision-making.

The results indicate that while AI-assisted tools have the potential to improve diagnostic precision and ease time pressures, their wide acceptance requires overcoming both technical and cultural barriers. Confidence in AI outputs, ease of integration with existing systems, and adequate training programs for healthcare professionals are basic prerequisites for realizing the potential of AI to improve patient care.