In this article, we examine the problem of class imbalance in heart disease datasets. To provide a comprehensive evaluation of class balancing techniques, we train and test our models on three datasets with different degrees of class imbalance: the mid-sized Healthcare stroke dataset and the large BRFSS-2015 dataset, both severely imbalanced, and the Iraq hospital dataset, which is only mildly imbalanced. Features are selected with backward elimination, and Logistic Regression (LR), Adaptive Boosting (ADB), XGBoost (XGB), Random Forest (RF), and Ensemble classifiers are each combined with four popular class balancing techniques: Random Under Sampling, Random Over Sampling, SMOTETomek, and ADASYN. The results show that Random Under Sampling and Random Over Sampling combined with Logistic Regression and AdaBoost form the best combination for the highly imbalanced Healthcare stroke and BRFSS-2015 datasets, improving the minority-class F1-score and AUC-ROC by roughly 15%-20% and making the classifiers more robust, at the cost of a minor decline in overall accuracy. On the mildly imbalanced Iraq hospital dataset, the same combination marginally improves the F1-score and AUC-ROC without compromising overall accuracy or the majority-class F1-score.
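To illustrate the evaluation setup, the sketch below pairs Random Under Sampling with Logistic Regression and compares minority-class F1-score and AUC-ROC against a model trained on the raw imbalanced data. It is a minimal, hedged example: the synthetic dataset, the 5% minority rate, and the hand-rolled undersampling helper are stand-ins for the paper's actual datasets and pipeline (libraries such as imbalanced-learn provide `RandomUnderSampler` for production use).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a highly imbalanced dataset (~5% positive class);
# the Healthcare stroke / BRFSS-2015 data themselves are not reproduced here.
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.95, 0.05], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

def random_undersample(X, y, rng):
    """Drop majority-class rows at random until both classes are equal-sized."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep = rng.choice(majority, size=minority.size, replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]

X_bal, y_bal = random_undersample(X_tr, y_tr, np.random.default_rng(0))

# Baseline: trained on the imbalanced data as-is.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Same model trained on the undersampled (balanced) data.
bal = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

for name, model in [("imbalanced", base), ("undersampled", bal)]:
    pred = model.predict(X_te)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: minority F1={f1_score(y_te, pred):.3f}  AUC-ROC={auc:.3f}")
```

On imbalanced data the undersampled model typically trades a little overall accuracy for a noticeably higher minority-class F1-score, which mirrors the trade-off reported above.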