Recently, a new computing paradigm, edge computing, has gained attention because it enables high-throughput, short-delay task offloading for large-scale Internet of Things (IoT) applications. Computational task offloading in the IoT faces significant challenges related to energy consumption, performance, security, and latency. Traditional machine-learning, cloud-based approaches may not meet the real-time task offloading and processing requirements of IoT applications. To realize the potential of fully connected vehicles, the Internet of Vehicles (IoV) requires high-bandwidth and low-latency services. IoT devices in IoV networks have limited resources for performing large and complex operations. This resource limitation can be addressed by dynamic task offloading that leverages cloud and edge computing to improve IoV performance. This paper proposes an innovative Deep Neural Network (DNN)-based, real-time, low-latency dynamic computational offloading allocation method that combines the strengths of cloud and edge computing with low-latency computational offloading for IoT applications such as the IoV, Intelligent Transportation Systems, and smart cities. The proposed DNN model is trained to decide dynamically where and when to offload computational tasks based on input features such as available resources, vehicle network conditions, and speed. Finally, real traffic data are used to build the simulation scenarios for implementing and evaluating the proposed low-latency, real-time dynamic computational offloading method. Experimental results indicate that our method achieves improved performance in terms of latency, dynamic and efficient resource allocation, and computational task offloading compared with the traditional machine-learning, cloud-based task offloading method.
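To make the offloading-decision idea concrete, the following is a minimal illustrative sketch, not the paper's actual model: a small feed-forward DNN that maps assumed input features (available resources, bandwidth, round-trip time, vehicle speed, task size) to one of three assumed offloading targets (local, edge, or cloud). The architecture, feature set, class labels, and synthetic data are placeholders chosen for illustration only.

```python
# Illustrative sketch of a DNN-based offloading decision model (assumed
# architecture and features; the paper does not specify these details).
import torch
import torch.nn as nn

class OffloadDecisionDNN(nn.Module):
    def __init__(self, n_features: int = 5, n_targets: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, n_targets),  # logits for local / edge / cloud
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

if __name__ == "__main__":
    # Synthetic stand-in for real traffic data, used only to show the shape
    # of training; one row = [cpu_free, bandwidth, rtt, speed, task_size].
    X = torch.rand(256, 5)
    y = torch.randint(0, 3, (256,))  # hypothetical offloading labels
    model = OffloadDecisionDNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    # Inference: choose the offloading target with the highest score.
    decision = model(X[:1]).argmax(dim=1).item()
    print(["local", "edge", "cloud"][decision])
```

In a deployment of this kind, the trained model would be queried per task with current resource and network measurements, so the offloading target can change dynamically as conditions in the vehicular network vary.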