In this paper, we propose an efficient strategy for improving the multi-objective steepest descent method of Fliege and Svaiter (Math Methods Oper Res, 2000, 51(3): 479--494). The core idea is to incorporate a positive modification parameter multiplicatively into the iterative formulation of the multi-objective steepest descent algorithm. This parameter captures certain second-order information associated with the objective functions. We provide two distinct methods for computing it, yielding two improved multi-objective steepest descent algorithms for solving multi-objective optimization problems. Under reasonable assumptions, we prove that the sequence generated by the first algorithm converges to a critical point. Furthermore, for strongly convex multi-objective optimization problems, we establish that the generated sequence converges linearly to a Pareto optimal point. The performance of the new algorithms is evaluated empirically through computational comparisons on a set of multi-objective test instances. The numerical results show that the proposed algorithms consistently outperform the original multi-objective steepest descent algorithm.
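As a rough illustration of the multiplicative modification described above (the precise update rule and the definition of the parameter appear in the body of the paper; the notation here is assumed, not taken from the abstract), the iteration can be sketched as the Fliege--Svaiter step scaled by a parameter $\gamma_k > 0$:

```latex
% Illustrative sketch only (notation assumed): x_k is the current iterate,
% t_k a step size, \gamma_k > 0 the proposed modification parameter, and
% v(x) the Fliege--Svaiter multi-objective steepest descent direction for
% objectives F_1, ..., F_m.
\[
  x_{k+1} = x_k + \gamma_k\, t_k\, v(x_k),
  \qquad
  v(x) \in \operatorname*{arg\,min}_{v \in \mathbb{R}^n}
    \Big\{ \max_{1 \le i \le m} \nabla F_i(x)^{\mathsf T} v
           + \tfrac{1}{2}\,\lVert v \rVert^2 \Big\}.
\]
```

Setting $\gamma_k \equiv 1$ recovers the original method; the paper's two algorithms correspond to two ways of computing $\gamma_k$ from second-order information.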