We propose a machine learning (ML)-based method for accelerating convergence to global solutions of the Alternating Current Optimal Power Flow (AC-OPF) problem. Our method improves the efficiency of the optimality-based bound tightening (OBBT) algorithm, known in the literature for effectively tightening variable bounds in the non-convex AC-OPF problem. While the OBBT algorithm yields near-globally optimal solutions through tight convex relaxations, its computational burden remains substantial, even for medium-scale power networks. The proposed ML-based policy, integrated into the OBBT algorithm, replaces exhaustive bound tightening over all variables without compromising optimality guarantees. This policy dynamically selects a subset of variables whose sequential bound tightening contributes to tightening the convex relaxation of the AC-OPF problem. To this end, we leverage historical data to learn a correlation between load profiles and variable subsets via their rankings. Our policy, coupled with a parallel implementation of the OBBT algorithm, facilitates the discovery of near-globally optimal solutions in significantly reduced computation times. We demonstrate this through detailed numerical experiments on medium- to large-scale instances with up to 3,375 buses. Across the held-out set of instances, we observe speed-ups of up to 20x in the run time of the ML-accelerated OBBT algorithm. To our knowledge, this is the first ML-based approach to demonstrate such large speed-ups for tightening convex relaxations on realistic large-scale power grid instances.
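To make the selection step concrete, the following is a minimal sketch, not the authors' implementation, of how a learned ranking policy could steer OBBT toward a subset of variables. The linear scorer, the `solve_min`/`solve_max` relaxation calls, and the subset size `k` are illustrative assumptions standing in for the trained model and the convex-relaxation solver.

```python
import numpy as np

def rank_variables(load_profile, weights):
    """Hypothetical learned policy: score each variable's expected
    bound-tightening impact from the load profile (a linear scorer
    stands in for the trained ranking model) and return indices
    sorted from highest to lowest score."""
    scores = weights @ load_profile          # shape: (n_vars,)
    return np.argsort(-scores)

def tighten_bounds(lb, ub, subset, solve_min, solve_max):
    """One OBBT pass over the chosen subset: re-solve the convex
    relaxation to minimize/maximize each selected variable and
    shrink its bounds accordingly (solve_min/solve_max are
    placeholders for the relaxation solver)."""
    for i in subset:
        lo, hi = solve_min(i, lb, ub), solve_max(i, lb, ub)
        lb[i], ub[i] = max(lb[i], lo), min(ub[i], hi)
    return lb, ub

def ml_guided_obbt(load_profile, lb, ub, weights, k, solve_min, solve_max):
    """Selective OBBT: tighten only the top-k variables ranked by the
    policy instead of sweeping over every variable."""
    subset = rank_variables(load_profile, weights)[:k]
    return tighten_bounds(lb, ub, subset, solve_min, solve_max)
```

In this sketch, setting `k` equal to the number of variables recovers the exhaustive OBBT sweep, so the policy can be viewed as restricting the same pass to the variables predicted to matter most for the given load profile.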