In this pilot study, the authors acquired body navigation-loaded ultrasound images that included information regarding the inspection site and transducer location. In the ultrasound image analysis, the mean interpretation score of the body navigation-loaded ultrasound images increased significantly for all raters compared with conventional ultrasound images without a body marker. In addition, inter-rater agreement improved in the second analysis, which used the navigation-loaded ultrasound image set. These results were obtained not by general physicians but by experienced gastrointestinal radiologists who are familiar with ultrasound imaging. Thus, body navigation-loaded ultrasound imaging is expected to allow radiologists to interpret ultrasound images more accurately and objectively, and the authors believe it will also be helpful to physicians who may be unfamiliar with ultrasound images. However, based on our results, the presence or absence of navigation-loaded images did not significantly affect the recognition of easily recognizable target organs, such as the left lateral section of the liver, portal vein bifurcation, spleen, or urinary bladder; these organs were recognized nearly 100% of the time in all the primary and secondary analyses.
In the present study, there was a statistically significant difference in the interpretation scores among the reviewers (Table 2). Reviewer A (a senior radiologist) had the highest interpretation score. Apart from the low scores of the less experienced resident, the difference in the secondary evaluation scores, in which the two experienced radiologists interpreted the navigation-loaded ultrasound images, arose from the interpretation of the transducer location. When evaluating the boundaries of the nine abdominal regions, the senior radiologist's interpretation of the transducer location was more consistent with the operator's intention. Because subjective interpretation of the boundary areas was possible, the inter-rater agreement for the transducer location was also lower than that for the other evaluation categories (Table 3). If the four abdominal quadrants, with the umbilicus as a clear anatomical landmark, had been used as the standard for transducer location interpretation, all raters would likely have shown higher interpretation scores and a higher degree of agreement.
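The inter-rater agreement discussed above is conventionally quantified with Cohen's kappa, which corrects the observed agreement between two raters for the agreement expected by chance. The sketch below is purely illustrative of that calculation (the rating labels and values are hypothetical, not data from this study):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters labeling the same items:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from each rater's marginal frequencies."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement from each rater's label frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical transducer-location labels from two raters:
a = ["RUQ", "RUQ", "epigastric", "LUQ", "RUQ", "epigastric"]
b = ["RUQ", "epigastric", "epigastric", "LUQ", "RUQ", "RUQ"]
print(round(cohens_kappa(a, b), 3))  # → 0.455 (moderate agreement)
```

Subjective boundary regions, as noted above, inflate off-diagonal disagreements and therefore depress kappa even when raw agreement looks reasonable.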
The most important advantage of navigation-loaded ultrasound is that it increases the operator's convenience and is expected to allow the operator to focus solely on the ultrasound examination. In some cases, additional effort is required from the operator to add appropriate body marks or text available on the ultrasound equipment to the images. This process is cumbersome and time-consuming because radiologists or clinicians must make a diagnosis while taking ultrasound images and add a body mark to the images using various control buttons. Recently, commercially available products such as touch-screen labels and scan-assistant software have made the labeling process more streamlined [8]. Even so, the authors' navigation-loaded image can minimize this process because the exact information regarding the transducer and inspection site is automatically integrated with the ultrasound image in real time. Thus, the navigation-loaded ultrasound image is expected to assist in interpreting ultrasound images where the marking of specific locations is required or where the distinction between the right and left sides is important. The disadvantage of the authors' semi-automated technology compared with the existing ultrasound body mark system is that a short time is needed to fix the camera and set up the software just before starting the ultrasound examination.
The body navigation-loaded ultrasound technology developed by us did not require highly advanced skills or equipment, but several issues had to be addressed during its development. The first was protecting patients' privacy, such as the face or breasts. This study aimed to minimize the exposure of sensitive areas of the body by applying a 3D mesh filter while maintaining the shape, size, and proportions of the body captured by the 3D depth camera. A 3D depth camera provides an object-recognition function that cannot be implemented with a general camera, so the body can be rendered as an animated mesh while the location of the transducer appears as it is, without modification. The result was a more intuitive understanding of the navigation-loaded ultrasound image. The second issue was how to acquire ultrasound images and 3D depth images simultaneously to increase operator convenience. Because it is not efficient to press the ultrasound image acquisition button and the camera capture button separately, this problem was solved by setting the camera to capture an image automatically whenever the ultrasound image acquisition button was pressed, using the MeshGateway software on a computer connected to the ultrasound system. The third issue was how much inspection-site and transducer information should be included in the ultrasound image from photos taken with the 3D depth camera. With the camera fixed in a position above the patient's head, body parts other than the inspection site were cropped, and adjustments were required before starting the ultrasound examination so that the inspection site and the location of the transducer did not deviate from the cropped area. After this issue was resolved, it was necessary to decide where to place the navigation-loaded image on the conventional ultrasound image; care was taken not to overlap the information regarding the ultrasound parameters or the scanned ultrasound image itself.
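The capture-synchronization approach described for the second issue, in which one button press drives both devices, can be sketched as a simple callback pattern: the ultrasound acquisition event triggers a depth-camera snapshot so the two frames are stored together. All class and method names below are hypothetical stand-ins, not the MeshGateway API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class CapturePair:
    """An ultrasound frame and its depth-camera frame, acquired together."""
    timestamp: float
    ultrasound_frame: object
    depth_frame: object

@dataclass
class SyncedCapture:
    """Couples the ultrasound 'acquire' button to a depth-camera snapshot,
    so the operator never presses two capture buttons separately."""
    ultrasound: object     # hypothetical ultrasound device interface
    depth_camera: object   # hypothetical 3D depth camera interface
    pairs: list = field(default_factory=list)

    def on_acquire_pressed(self):
        # A single button-press event grabs both frames at once.
        t = time.time()
        us = self.ultrasound.grab_frame()
        depth = self.depth_camera.grab_frame()
        self.pairs.append(CapturePair(t, us, depth))
        return self.pairs[-1]

# Minimal stubs standing in for real device drivers:
class _StubDevice:
    def __init__(self, name):
        self.name, self.count = name, 0
    def grab_frame(self):
        self.count += 1
        return f"{self.name}-frame-{self.count}"

sync = SyncedCapture(_StubDevice("us"), _StubDevice("depth"))
pair = sync.on_acquire_pressed()
print(pair.ultrasound_frame, pair.depth_frame)  # → us-frame-1 depth-frame-1
```

The point of the pattern is that the pairing is done at acquisition time, so no post hoc matching of ultrasound and depth images by timestamp is needed.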
The optimal placement was expected to vary depending on the ultrasound vendor and specific model, but we considered it best to insert the thumbnail into the upper right part of the ultrasound image. With convex transducers, the navigation-loaded image could be inserted neatly into the empty corner outside the radial (fan-shaped) ultrasound field.
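Compositing the navigation thumbnail into the upper-right corner amounts to pasting a small array into a fixed region of the frame. The sketch below uses plain array slicing; the frame size, thumbnail size, and margin are illustrative values, not those used in the study:

```python
import numpy as np

def overlay_thumbnail(us_image, thumbnail, margin=10):
    """Paste a navigation thumbnail into the upper-right corner of an
    ultrasound frame (both grayscale uint8 arrays, row-major)."""
    out = us_image.copy()
    th, tw = thumbnail.shape[:2]
    H, W = us_image.shape[:2]
    if th + margin > H or tw + margin > W:
        raise ValueError("thumbnail does not fit in the frame")
    # Rows start `margin` below the top; columns end `margin` from the right.
    out[margin:margin + th, W - margin - tw:W - margin] = thumbnail
    return out

frame = np.zeros((480, 640), dtype=np.uint8)     # dummy ultrasound frame
thumb = np.full((96, 128), 200, dtype=np.uint8)  # dummy navigation thumbnail
composited = overlay_thumbnail(frame, thumb)
print(composited[10, 502], composited[5, 600])   # → 200 0
```

In practice the destination region would be chosen per vendor so that the thumbnail avoids the on-screen parameter annotations, as described above.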
There are several limitations to this pilot study of an experimental technology. First, the diagnostic usefulness of the technology compared with the conventional ultrasound body mark system has not been evaluated. Comparing different methods of labeling images might have yielded a better outcome, and some physicians may feel that the technology has no significant advantage over the current graphic body mark system. Although this pilot study was conducted only on abdominal ultrasound, where a standard view protocol is widely used, the technology is expected to be useful in other areas, such as the musculoskeletal joints, peripheral vascular system, and breast, where inspection-site and transducer information is more important than in the abdomen. Figure 5 demonstrates that this technology appears useful for identifying ultrasound images of various organs. Second, protecting patients' privacy and increasing the accuracy of the inspection-site and transducer information are trade-offs. In this study, navigation-loaded images were acquired as low-resolution thumbnails, prioritizing the protection of the participants' privacy. In addition, we are developing a program using artificial intelligence that automatically crops sensitive areas [9, 10], removes patients' faces [9–11], or completely replaces them with a simple body mark according to the inspection site while maintaining an appropriate field of view. Nevertheless, owing to privacy concerns, the technology is expected to be of little or limited use in gynecologic transvaginal or translabial ultrasound. The solution is to turn off the camera function in the software and then proceed with a conventional ultrasound examination; this method can easily be applied even if the patient refuses the new body-marking system. Third, because camera installation and special software settings are required before starting the ultrasound scan, this process can be quite cumbersome.
To overcome these shortcomings, the authors plan to develop an embedded system that applies the aforementioned artificial intelligence technology. Nevertheless, the authors' method is currently applicable only to a fixed ultrasound room and will be difficult to apply to a portable ultrasound system. Fourth, our system is expected to be inaccurate for cine-loop images because it provides information only about the static position of the ultrasound transducer; however, the movement of the probe could be expressed, if necessary, through future software improvements. Fifth, although inter-rater agreement increased with the body navigation-loaded images, the kappa value did not reach almost-perfect agreement. Thus, the technology cannot completely replace existing annotation methods, such as text or arrows, in certain conditions.