Emotions are a crucial aspect of daily life and play a vital role in human interaction as well as in other domains such as entertainment and healthcare. The use of physiological signals can increase the clarity, objectivity, and reliability of communicating emotions, and for these reasons researchers have increasingly adopted physiological signals for emotion recognition in recent years. Electroencephalography (EEG) is the most popular means of recording brain activity, and owing to its diverse applications across many domains, EEG signals are now widely used for emotion recognition; EEG-based techniques are non-invasive and provide high temporal resolution. Although several notable attempts have been made to recognize emotions from EEG signals, there is still a need for an accurate and effective EEG-based emotion classification technique, and developing a pragmatic and effective emotion recognition algorithm remains a challenging task. This paper proposes a hybrid model, 'GANFIS', which combines a Generative Adversarial Network (GAN) with an Adaptive Neuro-Fuzzy Inference System (ANFIS) for EEG-based emotion recognition. The proposed hybrid model has a layered structure: the first layer consists of \(N\) GANs in parallel and the second layer consists of \(N\) ANFIS in parallel, where \(N\) equals the number of emotion classes to be recognized. The objective of this hybrid model is to enhance recognition accuracy for three- and four-class emotion recognition, which remains a demanding task for existing state-of-the-art techniques. In the proposed model, the distributions most suitable for classification are fed to the first layer, i.e., the GAN structures, which output extracted features that concisely characterize the emotions. These features are then given as input to the second layer, i.e., the ANFIS, for training. The outputs of the second layer are integrated to form a feature vector, which is given as input to the third, adaptive layer; each layer is trained separately, and the third layer outputs the emotion classes. The performance of the proposed hybrid model is tested and validated on two benchmark datasets: the Feeling Emotion dataset and the DEAP dataset. The recognition accuracies obtained on these datasets are 74.69% and 96.63%, respectively, which are superior to the accuracies obtained by other state-of-the-art techniques.
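
For illustration, the layered data flow described above can be sketched as follows. This is a minimal, hypothetical sketch only: the classes `GANFeatureExtractor` and `ANFISModule`, the function `ganfis_forward`, and all dimensions are placeholder assumptions standing in for trained GANs and ANFIS modules, not the authors' implementation or training procedure.

```python
# Minimal structural sketch of the GANFIS pipeline (placeholders, not the paper's code).
import numpy as np

class GANFeatureExtractor:
    """Placeholder for one trained GAN whose activations serve as extracted features."""
    def __init__(self, in_dim, feat_dim, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(in_dim, feat_dim))

    def extract(self, x):
        return np.tanh(x @ self.W)  # stand-in for GAN-derived feature map

class ANFISModule:
    """Placeholder for one trained ANFIS producing a per-class fuzzy score."""
    def __init__(self, feat_dim, seed):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=feat_dim)

    def infer(self, f):
        return float(1.0 / (1.0 + np.exp(-f @ self.w)))  # score in (0, 1)

def ganfis_forward(x, gans, anfis_modules, W_adapt):
    """Layer 1: N GANs -> features; Layer 2: N ANFIS -> scores;
    Layer 3: adaptive layer maps the integrated score vector to class probabilities."""
    scores = np.array([a.infer(g.extract(x)) for g, a in zip(gans, anfis_modules)])
    logits = scores @ W_adapt                 # adaptive (trainable) output layer
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Example: N = 4 emotion classes, EEG feature vector of assumed length 128
N, in_dim, feat_dim = 4, 128, 16
gans = [GANFeatureExtractor(in_dim, feat_dim, seed=i) for i in range(N)]
anfis_modules = [ANFISModule(feat_dim, seed=100 + i) for i in range(N)]
W_adapt = np.eye(N)                           # identity init; trained in practice
x = np.random.default_rng(0).normal(size=in_dim)
print(ganfis_forward(x, gans, anfis_modules, W_adapt))
```

The sketch only shows how one GAN/ANFIS pair per emotion class feeds an integrated score vector into a final adaptive layer; the actual feature extraction, fuzzy rule structure, and layer-wise training follow the paper's method.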