This study combines data analysis, machine learning, and explainable artificial intelligence to investigate how chaotic transformations affect the performance and interpretability of models in complex systems. Three chaotic systems (Lorenz, Chen, and Rossler) are used to transform the features of the dataset, and the transformed datasets are then analyzed with three machine learning algorithms: Random Forest, Decision Tree, and CatBoost. Performance metrics are computed to evaluate each combination, and the Rossler system paired with CatBoost yields the best results. The effect of the transformed features on the class labels is explained with three explainable artificial intelligence tools: ELI5, DALEX, and SHAP. ELI5 and SHAP identify the y column as the feature most influencing the class labels, whereas DALEX attributes the largest impact to the z column. Future work aims to improve the model's understanding and predictive capability by integrating additional chaotic systems and machine learning algorithms. Furthermore, evaluating the robustness of the proposed approach across various datasets and problem domains will ensure broader applicability and reliability.
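To make the transformation step concrete, the sketch below shows one plausible way a chaotic system can transform dataset features: each sample's three feature values seed the Rossler system as an initial state (x, y, z), and the state after a fixed number of Euler integration steps becomes the transformed sample. The parameter values (a, b, c), the step size, the step count, and this seeding scheme are illustrative assumptions, not the study's documented procedure.

```python
import numpy as np

def rossler_transform(features, a=0.2, b=0.2, c=5.7, dt=0.01, steps=100):
    """Transform each 3-feature sample via the Rossler system.

    Each row of `features` seeds the system as the initial state
    (x, y, z); the state reached after `steps` Euler steps is
    returned as the transformed sample. All parameter choices here
    are illustrative assumptions.
    """
    state = np.asarray(features, dtype=float).copy()
    for _ in range(steps):
        x, y, z = state[:, 0], state[:, 1], state[:, 2]
        dx = -y - z              # Rossler: dx/dt = -y - z
        dy = x + a * y           # Rossler: dy/dt = x + a*y
        dz = b + z * (x - c)     # Rossler: dz/dt = b + z*(x - c)
        state = state + dt * np.column_stack([dx, dy, dz])
    return state

# Two hypothetical samples with features (x, y, z)
X = np.array([[0.1, 0.2, 0.3],
              [1.0, 0.5, 0.2]])
X_chaotic = rossler_transform(X)
```

The transformed array `X_chaotic` keeps the original shape, so it can be fed directly to any of the classifiers named above (Random Forest, Decision Tree, CatBoost) in place of the raw features.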