In data science and machine learning, imbalanced data poses a significant challenge. This study presents a self-balancing strategy that integrates traditional oversampling (randomly duplicating minority-class samples), generative oversampling (creating novel samples in the minority class's feature space), undersampling techniques, and hyperparameter optimization to enhance automated machine-learning pipelines. Using a systematic grid-search methodology across multiple datasets, the research validates the effectiveness of integrated sampling techniques in consistently improving classification performance across diverse scenarios. The proposed approach contributes to a refined understanding of how to address class imbalance, emphasizing the importance of a unified strategy. This research therefore offers a comprehensive proposal for effectively handling imbalanced datasets and advancing the development of more reliable machine-learning applications. The experimental results highlight the effectiveness of the sampling methods, showing improvements of up to nearly six percentage points over the baseline. We have made the code, experiment reproduction instructions, and results publicly available on GitHub.
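The combination of sampling techniques described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' exact pipeline: the function name `balance`, the even split between duplicated and interpolated samples, and the target class size are all assumptions made for the sketch. Generative oversampling is shown here as SMOTE-style linear interpolation between minority samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def balance(X_min, X_maj, n_target):
    """Illustrative rebalancing: duplicate and interpolate minority
    samples, and undersample the majority class, so both classes
    end up with n_target rows (names and split are assumptions)."""
    # Traditional oversampling: duplicate minority rows with replacement.
    dup_idx = rng.integers(0, len(X_min), n_target // 2)
    dup = X_min[dup_idx]
    # Generative oversampling (SMOTE-like): create novel points by
    # interpolating between pairs of minority samples in feature space.
    n_synth = n_target - len(dup)
    i = rng.integers(0, len(X_min), n_synth)
    j = rng.integers(0, len(X_min), n_synth)
    lam = rng.random((n_synth, 1))
    synth = X_min[i] + lam * (X_min[j] - X_min[i])
    # Undersampling: keep a random subset of the majority class.
    keep = rng.choice(len(X_maj), n_target, replace=False)
    return np.vstack([dup, synth]), X_maj[keep]

# Toy imbalanced data: 20 minority vs 200 majority samples, 3 features.
X_min = rng.normal(0.0, 1.0, (20, 3))
X_maj = rng.normal(3.0, 1.0, (200, 3))
new_min, new_maj = balance(X_min, X_maj, 100)
print(new_min.shape, new_maj.shape)  # (100, 3) (100, 3)
```

In a full pipeline, the choice of sampling technique and its parameters (e.g., the per-class target size) would themselves be tuned via the grid search mentioned in the study.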