Edge computing is an emerging paradigm that processes data close to where it is generated rather than in centralized cloud resources. A key challenge in edge computing is balancing the workload efficiently across devices so as to optimize resource utilization and minimize response time. In this study, we investigate dueling Q-learning for workload balancing in edge computing, focusing on scenarios where the workload arrives randomly across devices. We present two implementations of dueling Q-learning, a centralized approach and a distributed approach, and compare both against a random-distribution baseline, analyzing the results with visualizations and statistical measures. Both approaches balance the workload effectively and outperform the random baseline in terms of workload distribution, convergence rate, and response time; comparing the two, the distributed approach proves more scalable and efficient in large-scale scenarios. Our results demonstrate the potential of dueling Q-learning as a promising approach for workload balancing in edge computing. The study provides insights into the benefits and limitations of the approach and suggests future directions for research in this area.
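To make the core technique concrete, the following is a minimal sketch of dueling Q-learning applied to a toy load-balancing loop. It is not the paper's implementation: the linear function approximator, the `N_DEVICES` environment, the imbalance-based reward, and all hyperparameters are illustrative assumptions. What it does show is the dueling decomposition Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a'), trained with a semi-gradient TD(0) update while an epsilon-greedy agent assigns incoming work units to devices.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DEVICES = 3            # hypothetical number of edge devices (illustrative)
GAMMA, ALPHA, EPS = 0.9, 0.01, 0.1

# Linear dueling parameters: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
w_v = np.zeros(N_DEVICES)                # value-head weights, V(s) = s @ w_v
w_a = np.zeros((N_DEVICES, N_DEVICES))   # advantage-head weights, one column per action

def q_values(s):
    """Dueling aggregation: subtract the mean advantage for identifiability."""
    v = s @ w_v
    a = s @ w_a
    return v + a - a.mean()

def step(loads, action):
    """Assign one unit of work to `action`; reward penalizes load imbalance."""
    loads = loads.copy()
    loads[action] += 1.0
    return loads, -loads.std()

loads = np.zeros(N_DEVICES)
for t in range(2000):
    s = loads / (loads.sum() + 1.0)      # normalized load vector as the state
    q = q_values(s)
    action = int(rng.integers(N_DEVICES)) if rng.random() < EPS else int(q.argmax())
    loads, reward = step(loads, action)
    s2 = loads / (loads.sum() + 1.0)
    target = reward + GAMMA * q_values(s2).max()
    delta = target - q[action]
    # Semi-gradient TD update propagated through the dueling aggregation:
    # dQ/dw_v = s; dQ/dw_a[:, b] = s*[b == action] - s / N_DEVICES
    w_v += ALPHA * delta * s
    grad_a = -np.outer(s, np.ones(N_DEVICES)) / N_DEVICES
    grad_a[:, action] += s
    w_a += ALPHA * delta * grad_a
    if loads.max() > 50:                 # drain finished work, keep load differences
        loads -= loads.min()
```

Subtracting the mean advantage inside `q_values` is the standard identifiability trick for dueling architectures: it pins down the split between the state value and the per-action advantages, which would otherwise be ambiguous up to a constant.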