Domains with high-stakes tasks include medical diagnosis, financial decision-making, autonomous vehicles, criminal justice, disaster response, and search and rescue operations. In such tasks, incorrect decisions or predictions can cause significant harm to individuals or society. This paper (1) presents the major topics of research in interpretable AI/ML models for high-stakes tasks with a human-in-the-loop, and (2) motivates and explains topics of emerging importance in this area. It is widely accepted that model explanations should accurately describe the model’s inner decision making and be convincing to the user. Quite often, however, these properties are missing, revealing only a quasi-explanation of the model and thus not serving the goal of true model explanation. This paper presents both the benefits and the deficiencies of current methods, along with ways to overcome those deficiencies, with a focus on high-stakes AI/ML tasks. While a human-in-the-loop is necessary for solving explainability problems, success depends heavily on human cognitive abilities and limitations. This makes human visual knowledge discovery with granular computing critical to the progress of interpretable machine learning. Therefore, a significant part of the paper is devoted to presenting interpretable ML models built with human visual knowledge discovery methods.