Survival analysis, which models time-to-event data while handling censoring, has become an essential statistical research topic with extensive applications in cancer treatment and prognosis prediction. Deep neural networks (DNNs) are also attractive for survival analysis because they capture non-linear relationships. However, DNNs are often criticized as “black box” models, i.e., models whose predictions are extremely hard or practically impossible to explain. In this study, we propose an explainable deep network framework for survival prediction in cancer. We adopt the nearest-distance retrieval strategy of case-based reasoning (CBR) to provide insight into the inner workings of the deep network. First, we use an autoencoder network to reconstruct the features of each training input. We then create a prototype layer that receives the output of the encoder and stores weight vectors in the encoded space. The total loss function of this deep survival network consists of four terms: the negative log partial likelihood of the Cox model, the autoencoder reconstruction loss, and two interpretability terms based on prototype distances; the four terms are combined with adaptively learned weights. Because the prototype layer is learned during training, each prediction naturally comes with an explanation in terms of its nearest prototypes. We also use Shapley Additive Explanations (SHAP) values to quantify how much each feature contributes to the model’s predictions. Predictive performance is assessed with cross-validation and the concordance index. Rigorous cross-validation experiments on two cancer methylation data sets demonstrate that the proposed approach is effective.
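The four-term loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the abstract does not give implementation details, so the function names, the use of squared Euclidean distances for the prototype terms, and the fixed example weights (the paper learns the weights adaptively) are all assumptions of this sketch.

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative log partial likelihood of the Cox model (no tie handling).

    risk  : (n,) predicted log-risk scores
    time  : (n,) observed times
    event : (n,) 1 if the event occurred, 0 if censored
    """
    order = np.argsort(-time)                     # sort by descending time
    risk, event = risk[order], event[order]
    log_risk_set = np.log(np.cumsum(np.exp(risk)))  # log sum over each risk set
    return -np.sum((risk - log_risk_set) * event) / max(event.sum(), 1)

def prototype_terms(z, prototypes):
    """Two interpretability terms, as in prototype networks:
    R1 pulls every encoded sample toward some prototype,
    R2 pulls every prototype toward some encoded sample."""
    d = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (n, p) distances
    return d.min(axis=1).mean(), d.min(axis=0).mean()

def total_loss(risk, time, event, x, x_hat, z, prototypes,
               w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four loss terms; fixed weights here,
    whereas the paper combines the terms with adaptive weights."""
    l_cox = cox_neg_log_partial_likelihood(risk, time, event)
    l_ae = np.mean((x - x_hat) ** 2)              # autoencoder reconstruction loss
    r1, r2 = prototype_terms(z, prototypes)
    return w[0] * l_cox + w[1] * l_ae + w[2] * r1 + w[3] * r2
```

In a full model the risk scores, reconstructions, and encodings would all be produced by the network and this scalar would be minimized by gradient descent.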