Sign language is the primary mode of communication for people who are deaf or hard of hearing. Continuous sign language recognition is a weakly supervised task: sign gestures must be identified without prior knowledge of the temporal boundaries between consecutive signs. Most modern techniques focus on extracting visual features and make no use of textual or contextual information to boost recognition accuracy. Furthermore, the potential of deep generative models to synthesize realistic sign language images has not been examined in depth in this field. To address these gaps, the Recognition System for Indian Sign Language is a novel approach to continuous sign language recognition built on a generative adversarial network (GAN) architecture. The system is trained on a self-created dataset comprising the digits 0 to 9, the letters A to Z, and 50 distinct words with static gestures. The central idea is to use a GAN to generate realistic sign language gestures, which are then used to train a recognition model. As the two networks are trained together, the generator learns to produce increasingly realistic gestures, while the discriminator learns to distinguish real gestures from synthetic ones with increasing accuracy. The architecture employs DCGAN and SRGAN, and the transfer learning models used are ResNet-50, VGG-19, and AlexNet. Transfer learning allows the network to build on pre-trained models and achieve higher accuracy in sign language recognition. Overall, the Recognition System for Indian Sign Language is a promising approach for continuous sign language recognition: by combining deep generative models with transfer learning, the network can accurately detect sign language gestures and translate them into text.
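The adversarial training described above alternates between updating the discriminator to separate real from generated samples and updating the generator to fool it. As a minimal, self-contained sketch (not the DCGAN/SRGAN implementation used in the system), the loop below trains a two-parameter generator and a logistic discriminator on toy 1-D data standing in for gesture images; all names, distributions, and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D "real data": a stand-in for real gesture images.
def sample_real(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator G(z) = w*z + b and discriminator D(x) = sigmoid(w*x + b),
# each with parameters (w, b).
theta_g = np.array([1.0, 0.0])
theta_d = np.array([0.0, 0.0])
lr = 0.05

def generate(z, tg):
    return tg[0] * z + tg[1]

def discriminate(x, td):
    return sigmoid(td[0] * x + td[1])

for step in range(2000):
    z = rng.normal(size=32)
    x_real = sample_real(32)
    x_fake = generate(z, theta_g)

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    d_real = discriminate(x_real, theta_d)
    d_fake = discriminate(x_fake, theta_d)
    # Hand-derived binary cross-entropy gradients w.r.t. (w, b).
    grad_w = -np.mean((1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_b = -np.mean(1 - d_real) + np.mean(d_fake)
    theta_d -= lr * np.array([grad_w, grad_b])

    # --- Generator update: push D(fake) -> 1 (non-saturating loss) ---
    x_fake = generate(z, theta_g)
    d_fake = discriminate(x_fake, theta_d)
    # dL/dG for L = -log D(G(z)), then chain rule through G.
    dx = -(1 - d_fake) * theta_d[0]
    theta_g -= lr * np.array([np.mean(dx * z), np.mean(dx)])

# After training, generated samples should roughly match the real mean.
fake_mean = generate(rng.normal(size=1000), theta_g).mean()
```

The same alternating schedule carries over to image-scale GANs; only the models and losses become convolutional networks trained by backpropagation.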
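Transfer learning in this setting typically freezes a pretrained backbone (e.g. ResNet-50 with its classification head removed) and trains only a new head on the gesture dataset. The sketch below imitates that pattern with a fixed random projection standing in for the frozen backbone and a logistic-regression head; it illustrates the training pattern only, not the actual ResNet-50/VGG-19/AlexNet models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a frozen pretrained backbone:
# a fixed random projection followed by ReLU. Never updated.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)

# Toy 2-class "gesture" data: the class label shifts the input mean.
def make_batch(n):
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, 64)) + y[:, None] * 0.5
    return x, y

# Only the new classification head is trained (logistic regression).
w = np.zeros(16)
b = 0.0
lr = 0.1

for step in range(500):
    x, y = make_batch(64)
    f = extract_features(x)
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))
    # Cross-entropy gradient flows into the head only;
    # the backbone receives no updates.
    g = p - y
    w -= lr * (f.T @ g) / len(y)
    b -= lr * g.mean()

x_test, y_test = make_batch(1000)
p_test = 1.0 / (1.0 + np.exp(-(extract_features(x_test) @ w + b)))
acc = ((p_test > 0.5) == y_test).mean()
```

Freezing the backbone keeps the generic features learned on a large corpus (e.g. ImageNet) intact while the small head adapts to the comparatively small sign-gesture dataset, which is why the pre-trained models improve accuracy here.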
The proposed approach achieves a recognition accuracy of 92.5% and demonstrates robustness, suggesting it could improve the recognition of sign language gestures in real-world scenarios and, in turn, greatly improve communication for individuals who are speech or hearing impaired.