The disagreement problem arises when multiple explainable AI (XAI) methods provide different local and global explanations for the same machine learning model. This inconsistency can leave practitioners uncertain about which method to trust. This work explores approaches for identifying regions within the dataset where XAI methods are more likely to agree. Our findings indicate that XAI methods tend to agree more within such local regions than when applied to the entire dataset. Increased agreement among XAI methods enhances the transparency and trustworthiness of a model deployed in the real world. We discuss the concept of explainability, the disagreement problem, and the metrics devised to measure agreement. We then present the results of our experiments and conclude with insights and implications for future research.
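As an informal illustration of the kind of agreement metric referred to above, the sketch below computes top-k feature agreement: the fraction of features shared by the top-k most important features of two attribution vectors for the same instance. The function name, the example attribution values, and the choice of k are illustrative assumptions, not the exact formulation used in this work.

```python
import numpy as np

def top_k_feature_agreement(attr_a: np.ndarray, attr_b: np.ndarray, k: int = 5) -> float:
    """Fraction of overlap between the top-k features (ranked by absolute
    attribution) produced by two explanation methods for the same instance."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])  # indices of k largest |attributions| from method A
    top_b = set(np.argsort(-np.abs(attr_b))[:k])  # indices of k largest |attributions| from method B
    return len(top_a & top_b) / k

# Hypothetical attributions from two XAI methods for a single instance.
attributions_method_a = np.array([0.40, -0.10, 0.05, 0.30, -0.20, 0.01])
attributions_method_b = np.array([0.35, 0.02, -0.15, 0.25, -0.05, 0.10])
print(top_k_feature_agreement(attributions_method_a, attributions_method_b, k=3))  # 0.666...
```

In this toy example the two methods share two of their three top-ranked features, giving an agreement of roughly 0.67; averaging such scores over a region of the dataset gives one way to quantify where explanations tend to align.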