Stochastic Gradient Descent (SGD) is a fundamental optimization method widely used in machine learning and related fields. This work revisits constant step-size SGD and its connection to Markov chains. We provide an asymptotic expansion of the averaged SGD iterates, highlighting the key effects of noise, initial conditions, and step-size choices on convergence behavior. Furthermore, we apply Richardson-Romberg extrapolation to reduce the bias of the averaged iterates and accelerate convergence towards the optimum, even though constant step-size iterates oscillate persistently around the optimal point. Our theoretical analysis is complemented by empirical results demonstrating the improvements obtained with this method. These insights yield a better understanding of both the bias-variance trade-off and the convergence behavior of SGD in high-dimensional, noisy environments.
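As a rough illustration of the extrapolation idea, the sketch below runs Polyak-Ruppert averaged constant step-size SGD on a synthetic least-squares problem with step sizes gamma and 2*gamma and combines the two averages as 2*x̄_γ − x̄_{2γ}, so that the leading O(γ) bias terms cancel. The problem setup, the function name `avg_sgd`, and all numerical values are illustrative assumptions, not the experimental protocol of this work.

```python
import numpy as np

def avg_sgd(step, n_iters, X, y, x0, rng):
    """Averaged constant step-size SGD on a least-squares objective (hypothetical setup)."""
    x = x0.copy()
    x_bar = np.zeros_like(x0)
    n_samples = X.shape[0]
    for t in range(n_iters):
        i = rng.integers(n_samples)          # sample one observation uniformly at random
        grad = (X[i] @ x - y[i]) * X[i]      # stochastic gradient of 0.5*(a_i^T x - b_i)^2
        x -= step * grad                     # constant step-size SGD update
        x_bar += (x - x_bar) / (t + 1)       # running Polyak-Ruppert average
    return x_bar

rng = np.random.default_rng(0)
d, n = 10, 5000
x_star = rng.normal(size=d)                   # synthetic ground-truth parameter
X = rng.normal(size=(n, d))
y = X @ x_star + 0.1 * rng.normal(size=n)     # noisy linear observations

gamma = 0.05
x0 = np.zeros(d)
xbar_g = avg_sgd(gamma, 200_000, X, y, x0, rng)       # averaged iterate with step gamma
xbar_2g = avg_sgd(2 * gamma, 200_000, X, y, x0, rng)  # averaged iterate with step 2*gamma

# Richardson-Romberg extrapolation: combine the two averages so the O(gamma) bias cancels.
x_rr = 2 * xbar_g - xbar_2g

for name, est in [("gamma", xbar_g), ("2*gamma", xbar_2g), ("RR", x_rr)]:
    print(f"{name:8s} distance to optimum: {np.linalg.norm(est - x_star):.4f}")
```

On a problem of this kind the extrapolated estimate `x_rr` is typically closer to the optimum than either averaged iterate alone, since the residual bias is of order γ² rather than γ.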