The deaf community in India communicates primarily through Indian Sign Language (ISL). Accurate recognition of ISL is essential for effective real-time communication and accessibility for the deaf community. Because ISL is performed as a continuous sequence of gestures, identifying and interpreting signs in video clips is challenging. To address this, we propose a method for real-time ISL recognition using a Time Distributed Convolutional Neural Network (CNN) model. Our approach uses a substantial ISL video dataset covering a variety of sign gestures and motions. The Time Distributed CNN architecture enables temporal modeling by applying the same CNN filters independently to each frame of the video sequence. By extracting spatial features from individual frames while accounting for their temporal relationships, our model can accurately identify and interpret ISL signs in real-time settings. After extensive testing and evaluation, our Time Distributed CNN model achieves an accuracy of 89% on the ISL recognition task. This encouraging result indicates the potential of our method to accurately recognize and interpret signs in real-time ISL video streams.
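The core idea of the time-distributed step is that one shared convolutional filter is applied to every frame of the clip independently, so the time axis is preserved while spatial features are extracted per frame. The following minimal NumPy sketch illustrates only that mechanism (a single hand-written averaging kernel on a random clip); it is not the paper's trained model, and all shapes and names here are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(frame, kernel):
    """Naive 'valid' 2D convolution of one grayscale frame with one kernel."""
    fh, fw = frame.shape
    kh, kw = kernel.shape
    out = np.zeros((fh - kh + 1, fw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the patch under it
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def time_distributed_conv(video, kernel):
    """Apply the SAME kernel to every frame; the time axis is kept intact."""
    return np.stack([conv2d_valid(frame, kernel) for frame in video])

# Hypothetical clip: 8 frames of 32x32 grayscale pixels
video = np.random.rand(8, 32, 32)
kernel = np.ones((3, 3)) / 9.0  # simple 3x3 averaging filter

features = time_distributed_conv(video, kernel)
print(features.shape)  # (8, 30, 30): per-frame spatial features, 8 time steps
```

In a deep-learning framework this corresponds to wrapping a 2D convolution in a time-distributed layer, so the resulting frame-wise feature sequence can then be fed to a temporal model for sign classification.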