Context: Artificial Intelligence (AI) in the medical domain has achieved remarkable results across a range of diagnostic tasks, driven largely by recent advances in computational capabilities. However, AI models used for Computer Aided Diagnosis (CAD) see limited acceptance and trust because of their typically black-box nature. Consequently, explainable AI (XAI) is needed to justify and build trust in the model's predictions.
Approach: In this work, we undertake a systematic review, following PRISMA guidelines, of research articles from the past decade that investigate Alzheimer's Disease (AD) prediction with XAI. The review was driven by carefully formulated research questions (RQs), which guided the categorization of the reviewed AI models by XAI method (post-hoc, ante-hoc, model-agnostic, model-specific, etc.) and by framework (LIME, SHAP, GradCAM, LRP, etc.).
Summary: This categorization provides broad coverage of the interpretation spectrum, from intrinsically interpretable approaches (model-specific, ante-hoc models) to explanations of complex models (model-agnostic, post-hoc methods), and from local explanations to a global scope. Additionally, the study assesses the merit of different forms of interpretation to provide in-depth insight into the factors that support the clinical diagnosis of AD. Finally, limitations and open research challenges are outlined, and possible future directions are presented.
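To make the post-hoc, model-agnostic category concrete, the following is a minimal illustrative sketch (not drawn from any study covered by the review) of how such an explanation could be produced for a hypothetical AD classifier; the feature names, synthetic data, and use of the shap library's KernelExplainer are assumptions for illustration only.

```python
# Illustrative sketch only: post-hoc, model-agnostic explanation of a
# hypothetical AD classifier with SHAP. Features and labels are synthetic
# placeholders, not data from any reviewed study.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical tabular cohort: e.g. volumetric MRI and cognitive-score features.
feature_names = ["hippocampal_volume", "entorhinal_thickness", "MMSE", "age"]
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] < 0).astype(int)  # synthetic AD / non-AD labels

# Any opaque ("black-box") model can be plugged in here.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# KernelExplainer queries the model only through its predictions,
# i.e. it is model-agnostic and applied post hoc.
predict_ad_prob = lambda x: clf.predict_proba(x)[:, 1]
explainer = shap.KernelExplainer(predict_ad_prob, shap.sample(X, 50))

# Local explanations: per-feature attributions for five individual subjects.
shap_values = explainer.shap_values(X[:5])

# Aggregating local attributions over subjects moves the explanation
# toward a global scope (mean absolute SHAP value per feature).
print(dict(zip(feature_names, np.abs(np.array(shap_values)).mean(axis=0))))
```

The same black-box classifier could equally be explained with LIME or, for imaging models, with GradCAM or LRP; the choice of framework corresponds to the method/framework categories used in the review.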