The main aim of this study was to improve patient safety by developing a tool that prioritizes admitted patients at risk of medication errors for medication reconciliation by clinical pharmacists. To assess the extent to which this objective was met, we first discuss the existing procedure used at the CHU of Reims to select admitted patients for medication reconciliation and its effectiveness. We then discuss the results obtained with our tool and compare them with those of the existing procedure to determine how efficient our tool is.
At the CHU of Reims, medication reconciliation is performed on a random sample of patients who have been hospitalized for at least 24 hours and have had at least one prescription order. Here, we discuss the results of this procedure for the patients selected for medication reconciliation in 2022. Of the 47,876 patients admitted to the hospital in 2022, only 2,037 (4.3%) were randomly selected for medication reconciliation. Medication reconciliation detected at least one unintended discrepancy, leading to a pharmacist intervention, in only 26% of the selected patients (536 of 2,037). These results can be read in two ways. On the one hand, the current procedure successfully intercepted the 26% of selected patients who required a pharmaceutical intervention. On the other hand, a 26% positive rate within a random sample is high, which indicates that a large number of unselected patients may also have needed a pharmacist intervention. Because of limited resources and capabilities, the number of patients selected for medication reconciliation did not exceed 4.3% of all admitted patients. Therefore, spending 74% of the reconciliation effort on negative cases wastes already limited resources that could have been used to review the files of other patients at risk of medication errors.
In what follows, we discuss the results of our ML tools and, in particular, the VC model, as it achieved the best results. The results in Table 2 show that the performance of the ML tools was affected by class imbalance, in which the training examples belonging to one class greatly outnumber those belonging to the other class. Two methods were applied to balance the training dataset, a well-known undersampling method (ENN) and an oversampling method (ADASYN), to produce better-defined class clusters. Our comparative results in Tables 2 and 3 show that, in terms of recall, training the prediction tools on the balanced dataset yields more accurate results than training on the imbalanced dataset.
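For concreteness, the following is a minimal sketch of this resampling step using the imbalanced-learn library; the order of the two steps, the random seed, and the variable names are our assumptions for illustration, not the exact pipeline used in the study.

```python
# Sketch of balancing the training set with ENN undersampling followed by
# ADASYN oversampling (imbalanced-learn); X_train/y_train are placeholders.
from imblearn.under_sampling import EditedNearestNeighbours
from imblearn.over_sampling import ADASYN

def balance_training_set(X_train, y_train):
    """Remove ambiguous majority-class examples with ENN, then synthesize
    minority-class examples with ADASYN to obtain a balanced training set."""
    X_res, y_res = EditedNearestNeighbours().fit_resample(X_train, y_train)
    X_bal, y_bal = ADASYN(random_state=42).fit_resample(X_res, y_res)
    return X_bal, y_bal
```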
Due to the imbalance of the validation dataset, the AUROC can give a false impression of the quality of the classifier. As shown in Table 2, the AUROC was high although the recall was not, suggesting that the classifier performed well, which was not the case: although most of the negative cases were correctly classified, many of the positives may have been misclassified, and the positives are the cases that matter most for model performance. We also noticed (Table 3) that, despite the significant improvement in recall, the improvements in the other metrics were not noticeable, especially precision, AUROC, and AUCPR. The low precision is due to the increase in false positives, which can be confirmed by examining the ROC curve in Fig. 1. The ROC curve plots the TPR against the FPR at different classification thresholds; as the threshold decreases, recall increases because more patients with medication errors are identified. However, as recall increases, precision decreases because, in addition to increasing the number of true positives, we also increase the number of false positives. Therefore, the AUCPR and precision were also used to evaluate the quality of the ML tools. Thus, the recall and AUCPR values in Table 2 show that the performance of the ML model is not good, contrary to what the AUROC value suggests. To confirm these results, we balanced the validation dataset using the ENN undersampling method so that it contained an equal number of positives and negatives, which improved all the metrics used (see Table 4). The results in Table 4 confirm that the degradation of these metrics on the original validation dataset was associated with the increase in false positives caused by its class imbalance. Figure 2 also shows a significant improvement in the AUCPR compared with the AUCPR shown in Fig. 1.
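To illustrate why both families of metrics are reported, the sketch below computes the AUROC, the AUCPR, and threshold-dependent precision and recall with scikit-learn; the variable names and the default 0.5 threshold are illustrative assumptions rather than the study's actual code.

```python
# Sketch of the evaluation metrics discussed above; y_true holds the validation
# labels and y_score the predicted probabilities of a medication error.
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             precision_score, recall_score)

def evaluate(y_true, y_score, threshold=0.5):
    """On an imbalanced validation set, the AUCPR and precision/recall are more
    informative than the AUROC; lowering the threshold raises recall but
    lowers precision, as described in the text."""
    y_pred = [int(s >= threshold) for s in y_score]
    return {
        "AUROC": roc_auc_score(y_true, y_score),
        "AUCPR": average_precision_score(y_true, y_score),  # area under the PR curve
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }
```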
Although the above results are very promising compared with those of the existing procedure, our ML tool still needed to be tested in a real-life setting to confirm its effectiveness. We therefore performed a retrospective evaluation simulating real-life use. For this evaluation, data from 317 patients who underwent medication reconciliation between 11/27/2023 and 02/04/2024 were used. Of these 317 patients, 93 had at least one unintended discrepancy, which led to a pharmacist intervention. Our ML tool was used to rank the 317 patients in descending order of their predicted likelihood of experiencing a medication error, and the 110 patient files with the highest scores were selected. In parallel, a group of clinical pharmacists was recruited to select the same number of patient files (110) at random. Finally, for each approach, the total number of patients with at least one unintended discrepancy was recorded.
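The selection logic of the two approaches can be sketched as follows; the `model` object, the DataFrame column names, and the random seed are placeholders for illustration, not the study's implementation.

```python
# Sketch of the two selection strategies compared in the retrospective evaluation.
import pandas as pd

def select_files(model, patients: pd.DataFrame, feature_cols, k=110, seed=0):
    """Return the k files ranked highest by the ML tool and k files drawn at random."""
    scores = model.predict_proba(patients[feature_cols])[:, 1]  # P(medication error)
    ml_selection = patients.assign(score=scores).nlargest(k, "score")
    random_selection = patients.sample(n=k, random_state=seed)
    return ml_selection, random_selection
```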
To determine the extent to which the objectives of this study were met, we first discuss the results of the random approach used to select admitted patients for medication reconciliation and its effectiveness, and then compare them with the results obtained by our ML tool for the same number of patients. Figure 3 shows the results for both approaches. With the random approach, 110 patients were selected for medication reconciliation; 23 of them had at least one unintended discrepancy, while 87 did not. In other words, the random approach identified positive cases in only 21% of the selected patients. With the ML tool, 110 patients were also selected for medication reconciliation, and 45% of them were found to have at least one unintended discrepancy leading to a pharmacist intervention, an improvement of 113% over the random approach. These results confirm that the ML tool outperformed the existing procedure used at the CHU of Reims in its ability to identify a greater number of patients exposed to medication errors. This approach could increase the capacity of the clinical pharmacy department, making the medication reconciliation process more efficient and reducing the burden on clinical pharmacists.
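As a worked illustration of this comparison, the snippet below computes each approach's positive yield and the relative improvement; only the counts stated above (23 positives out of 110 randomly selected files) are used as hard numbers, and the exact positive count in the ML arm is not restated here, so the rounded percentages reproduce the reported ~113% only approximately.

```python
# Worked example of the yield comparison between the two selection strategies.
def positive_yield(n_positive: int, n_selected: int) -> float:
    """Fraction of selected files with at least one unintended discrepancy."""
    return n_positive / n_selected

def relative_improvement(yield_ml: float, yield_random: float) -> float:
    """Relative gain of ML-based selection over random selection."""
    return (yield_ml - yield_random) / yield_random

random_yield = positive_yield(23, 110)           # ~0.21
print(relative_improvement(0.45, random_yield))  # ~1.15 with rounded yields;
                                                 # ~+113% with the unrounded counts
```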
In summary, improving patient safety by reducing medication errors is a top priority for hospital health systems. Currently, medication reconciliation by clinical pharmacists is the standard method for preventing potential adverse drug events. However, these interventions are often limited by human resources, so patients at high risk of errors in their prescription orders need to be targeted. A strength of this study is that it included patients from internal medicine, surgical, and obstetric wards, as well as follow-up care and rehabilitation units; the findings are therefore representative of daily clinical practice.
The single-center nature of our study is one of its limitations. In addition, neonatology and intensive care unit patients were not included because these units do not use the Easily software for medication prescription. Thus, there is no evidence of the accuracy of our model in identifying high-risk patients in these units, and our model cannot be generalized to these patients.
In future work, it would be interesting to improve the performance of our tool. The performance of an ML tool can be enhanced by obtaining larger and more accurate datasets. However, adding new, accurate data requires carrying out a large number of medication reviews, which is limited by the lack of well-trained human resources. This has motivated us to deploy our tool in other hospitals, which will enable us to benefit from the expertise of a larger number of clinical pharmacists to confirm our findings and to produce, in an automated way, data for updating the model's training dataset.