In developing risk prediction models for specific diseases, it is essential to evaluate the calibration performance of the prediction model. Various methods have been proposed to assess the calibration of prediction models, but it has been pointed out that conventional methods based on the predicted probabilities of the model are insufficient to detect miscalibration. Another problem is the inability to assess calibration with respect to variables of interest, such as covariates of high importance in prediction. We therefore propose two methods to evaluate calibration with respect to a variable of interest: the variable-based probabilistic calibration plot (VPC-Plot), a visual assessment, and the variable-based probabilistic calibration error (VPCE), a corresponding evaluation metric. We conducted theoretical and simulation studies to investigate the properties and effectiveness of the proposed methods, which demonstrated that they can detect miscalibration by evaluating calibration based on the variable of interest, even when conventional methods fail to do so. To demonstrate their usefulness in real-world data analysis, we evaluated diabetes prediction models developed using the national health insurance database for Osaka, Japan. The proposed methods revealed the advantage of a machine learning model over logistic regression models with regard to calibration based on a key covariate in diabetes prediction.
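As background for the conventional approach the abstract contrasts against, the sketch below implements a standard calibration curve: observations are binned by predicted probability, and the mean predicted probability in each bin is compared with the observed event rate. The function name, bin count, and simulated data are illustrative assumptions; the proposed VPC-Plot and VPCE are variable-based extensions whose details are not given in the abstract and are not reproduced here.

```python
import numpy as np

def calibration_curve(y_true, y_prob, n_bins=10):
    """Conventional calibration curve: bin by predicted probability and
    compare the mean prediction with the observed event rate per bin."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each prediction to one of n_bins equal-width bins.
    idx = np.clip(np.digitize(y_prob, edges[1:-1]), 0, n_bins - 1)
    mean_pred, obs_rate = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():  # skip empty bins
            mean_pred.append(y_prob[mask].mean())
            obs_rate.append(y_true[mask].mean())
    return np.array(mean_pred), np.array(obs_rate)

# Simulated perfectly calibrated predictions: outcomes drawn with
# probability equal to the prediction, so observed rates track predictions.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 50_000)
y = rng.binomial(1, p)
mp, ob = calibration_curve(y, p, n_bins=10)
print(np.max(np.abs(mp - ob)))  # small for well-calibrated predictions
```

A model can look well calibrated on such an overall curve yet be miscalibrated within subgroups of a key covariate, which is the gap the variable-based methods are designed to expose.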