Adaptive bitrate (ABR) algorithms adapt the video bitrate to network conditions in order to improve the overall quality of experience (QoE). Further, with the rise of multi-access edge computing (MEC), a higher QoE can be guaranteed for video services by performing computations on edge servers rather than cloud servers. Recently, reinforcement learning (RL) methods, in particular the asynchronous advantage actor-critic (A3C) method, have been used to improve ABR algorithms and have been shown to enhance overall QoE compared to fixed-rule ABR algorithms. However, a common issue in A3C methods is the lag between the behavior policy and the target policy: the two policies fall out of synchronization with one another, which results in suboptimal updates. In this work, we present a deep reinforcement learning approach with importance sampling, focused on edge-driven video delivery services, to achieve a better overall user experience. We refer to our proposed approach as ALISA: Actor-Learner architecture with Importance SAmpling for efficient learning in ABR algorithms. ALISA incorporates importance sampling weights to give greater weight to more relevant experience, thereby addressing the lag incurred by existing A3C methods. We present the design and implementation of ALISA and compare its performance to state-of-the-art video rate adaptation algorithms, including vanilla A3C and fixed-rule schedulers. Our results show that ALISA provides 25%-48% higher average QoE than vanilla A3C, with even larger gains over fixed-rule schedulers.
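To make the importance-sampling idea concrete, the sketch below shows one common way to weight off-policy experience in an actor-learner setting: each step's policy-gradient term is scaled by a truncated importance ratio between the target and behavior policies, so that experience generated under a stale behavior policy contributes in proportion to its relevance. This is an illustrative example only; the function name, the truncation constant, and the exact weighting are assumptions for exposition, not the paper's specific formulation.

```python
import numpy as np

def is_weighted_pg_terms(behavior_probs, target_probs, advantages, clip=1.0):
    """Illustrative importance-sampling correction for off-policy actor-critic.

    behavior_probs: probabilities of the taken actions under the (stale)
                    behavior policy that generated the experience.
    target_probs:   probabilities of the same actions under the current
                    target policy being optimized.
    advantages:     estimated advantages for the taken actions.

    Returns the truncated importance ratios rho = min(pi/mu, clip) and the
    per-step policy-gradient loss terms -rho * A * log pi(a|s).
    """
    rho = np.minimum(target_probs / behavior_probs, clip)  # truncated IS ratios
    loss_terms = -(rho * advantages * np.log(target_probs))
    return rho, loss_terms
```

Truncating the ratios at a constant (here 1.0) is a standard way to keep the variance of the correction bounded when the behavior and target policies diverge; steps where the target policy has moved away from the logged action are down-weighted rather than amplified.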