Unmanned Aerial Vehicles (UAVs) play a crucial role in tracking-based applications, particularly in real-time scenarios such as rescue missions and surveillance. However, tracking objects under occlusion is challenging, as it requires re-identifying objects while preserving consistent identities. To address this issue, a novel multi-class object tracking methodology with occlusion handling is proposed. The methodology employs You Only Look Once Neural Architecture Search (YOLO-NAS) with confluence-based object detection; YOLO-NAS achieves superior detection through quantization-aware blocks and selective quantization, and its outputs drive the tracking stage. A Densely Connected Bidirectional LSTM tracker is developed to exploit the feature representations and object locations produced by the detector. In addition, the methodology incorporates occlusion-handling object association to re-identify objects that become occluded or move out of view. The proposed framework is evaluated against state-of-the-art models on the UAV123, UAVDT, and VisDrone datasets, and a detailed ablation study is performed on the UAV123 dataset. The proposed framework outperforms the other models, achieving a MOTA of 94.53%, Recall of 97.8%, Precision of 97.19%, F-score of 97.49%, and Rel.ID of 9.26%.
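
To make the occlusion-handling association concrete, the sketch below shows a minimal, hypothetical tracking-by-detection loop in plain Python. It does not reproduce the paper's YOLO-NAS detector or Dense BiLSTM tracker; it only illustrates the general idea of keeping "lost" tracks alive for a few frames so that a reappearing object can recover its previous identity via IoU matching. All names, thresholds, and the greedy matching scheme are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of occlusion-aware track association.
# Detections are assumed to be (x1, y1, x2, y2) boxes per frame; the
# paper's learned tracker and detector are NOT reproduced here.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class OcclusionAwareTracker:
    def __init__(self, iou_thresh=0.3, max_lost=5):
        self.iou_thresh = iou_thresh   # minimum overlap to accept a match
        self.max_lost = max_lost       # frames a lost track stays eligible
        self.tracks = {}               # track id -> {"box": ..., "lost": int}
        self.next_id = 0

    def update(self, detections):
        """Associate this frame's detections with existing track IDs."""
        assigned = {}
        unmatched = list(range(len(detections)))
        # Greedy IoU matching: recently seen tracks first, then lost ones,
        # so a reappearing object can recover its old identity.
        for tid, tr in sorted(self.tracks.items(), key=lambda kv: kv[1]["lost"]):
            best, best_iou = None, self.iou_thresh
            for di in unmatched:
                ov = iou(tr["box"], detections[di])
                if ov > best_iou:
                    best, best_iou = di, ov
            if best is not None:
                tr["box"], tr["lost"] = detections[best], 0
                assigned[best] = tid
                unmatched.remove(best)
            else:
                tr["lost"] += 1  # occluded or out of view this frame
        # Drop tracks that stayed lost for too long.
        self.tracks = {t: v for t, v in self.tracks.items()
                       if v["lost"] <= self.max_lost}
        # Unmatched detections start new identities.
        for di in unmatched:
            self.tracks[self.next_id] = {"box": detections[di], "lost": 0}
            assigned[di] = self.next_id
            self.next_id += 1
        return assigned  # detection index -> track id
```

For example, if an object detected in one frame disappears for a frame (occlusion) and then reappears nearby, `update` re-assigns it the same track ID instead of minting a new one, which is the behavior the Rel.ID metric rewards. A real system would replace the greedy IoU step with the learned appearance features and motion cues described in the paper.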