Credit card fraud costs the payment-card industry billions of dollars annually, yet privacy concerns prevent most banks from sharing transaction data. Federated Learning (FL) enables multiple institutions to train a unified model while preserving the privacy of sensitive data; however, the scarcity of credit card fraud samples poses a challenge for FL. To address this challenge, this paper proposes the Federated Learning Synthetic Minority Oversampling Technique (FL-SMOTE), which employs both partially and fully homomorphic encryption schemes, leveraging their respective strengths to enhance performance and security. Partially homomorphic encryption is used for Euclidean-distance summation, ciphertext comparison, and privacy-preserving ranking of Euclidean distances. The Cheon-Kim-Kim-Song (CKKS) fully homomorphic scheme is used to generate synthetic minority-class samples and to perform secure aggregation in federated learning, with CKKS approximation methods balancing computational complexity and accuracy. A logistic regression model is adapted to demonstrate how the CKKS scheme can be seamlessly integrated into the training process, and a quadratic term is added to each client's loss function to regularize the discrepancy between the local model and the accelerated global model. Finally, the proposed design is evaluated on two publicly available datasets. The experiments demonstrate that the FL-SMOTE algorithm improves training results and achieves the oversampling objective.
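The two core mechanics summarized above — SMOTE-style interpolation to synthesize minority-class samples from nearest neighbors, and a quadratic proximal term that penalizes the gap between the local and global models — can be sketched in the clear (i.e., without the homomorphic-encryption layer the paper applies to these steps). All function names and parameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def smote_sample(X_min, k=5, rng=None):
    """Generate one synthetic minority sample by SMOTE interpolation.
    X_min: (n, d) array of minority-class points. Illustrative sketch only;
    in the paper the distance and ranking steps run under encryption."""
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(len(X_min))
    x = X_min[i]
    # Euclidean distances to all minority points (the PHE-protected step)
    d = np.linalg.norm(X_min - x, axis=1)
    nn = np.argsort(d)[1:k + 1]        # k nearest neighbors, excluding x itself
    x_nn = X_min[rng.choice(nn)]
    lam = rng.random()                  # interpolation factor in [0, 1)
    return x + lam * (x_nn - x)         # new point on the segment x -> x_nn

def proximal_loss(w, w_global, X, y, mu=0.1):
    """Client objective: logistic loss plus the quadratic regularizer
    (mu/2) * ||w - w_global||^2 tying the local model to the global one.
    Labels y are assumed to be in {-1, +1}."""
    z = X @ w
    logloss = np.mean(np.log1p(np.exp(-y * z)))
    return logloss + 0.5 * mu * np.sum((w - w_global) ** 2)
```

Because the synthetic point is a convex combination of two minority samples, it always lies on the segment between them; the proximal term grows as the local weights drift from the global model, which is what discourages client drift during federated training.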