This paper presents a Brain-Computer Interface (BCI) system that classifies imagined digits (0-9) from EEG signals acquired with consumer-grade headsets such as the Muse [1], using deep learning models. We combine Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) into a single CNN-RNN model that captures both the spatial and the temporal structure of electroencephalography signals. The preprocessing pipeline includes bandpass filtering, Independent Component Analysis (ICA), baseline correction, Common Average Reference (CAR), artifact rejection, channel interpolation, and z-score normalization, with electrodes positioned according to the standard 10/20 system for non-invasive EEG acquisition. The model is trained on an existing dataset of Visually Evoked Potentials (VEPs) elicited by visual stimuli for the digits 0 to 9, comprising over 2.5 GB of train-test EEG samples from MindBigData [1]. Experimental results show that the proposed CNN-RNN model outperforms previous state-of-the-art techniques, achieving an average accuracy of 73% on the test set. This work contributes to advancing EEG-based BCIs and demonstrates their potential for real-time digit recognition tasks.
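To make the pipeline concrete, the following is a minimal sketch of a subset of the preprocessing steps (bandpass filtering, CAR, z-score normalization) and of a CNN-RNN classifier in the spirit described above, assuming SciPy and PyTorch. The channel count, sampling rate, epoch length, filter band, and layer sizes are illustrative assumptions, not the configuration used in this work; ICA, baseline correction, artifact rejection, and channel interpolation are omitted for brevity.

```python
# Minimal sketch of the preprocessing and CNN-RNN pipeline described above.
# Assumed (not taken from the paper): 14 channels, 128 Hz sampling, 2 s epochs,
# 0.5-40 Hz band, and illustrative layer sizes.
import numpy as np
from scipy.signal import butter, filtfilt
import torch
import torch.nn as nn

FS = 128            # assumed sampling rate (Hz)
N_CHANNELS = 14     # assumed 10/20 electrode subset
N_SAMPLES = 2 * FS  # assumed 2-second epoch
N_CLASSES = 10      # digits 0-9


def preprocess(epoch: np.ndarray) -> np.ndarray:
    """Bandpass filter, Common Average Reference, and z-score one epoch
    of shape (channels, samples)."""
    # 0.5-40 Hz bandpass (4th-order Butterworth, zero-phase)
    b, a = butter(4, [0.5, 40.0], btype="band", fs=FS)
    x = filtfilt(b, a, epoch, axis=-1)
    # Common Average Reference: subtract the mean across channels
    x = x - x.mean(axis=0, keepdims=True)
    # Per-channel z-score normalization
    x = (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-8)
    return x


class CNNRNN(nn.Module):
    """1-D convolutions over time extract local spatial/spectral features;
    a GRU over the resulting sequence models temporal dynamics."""

    def __init__(self, n_channels: int = N_CHANNELS, n_classes: int = N_CLASSES):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, channels, samples)
        feats = self.cnn(x)               # (batch, 64, samples // 4)
        feats = feats.permute(0, 2, 1)    # (batch, time, 64)
        _, h = self.rnn(feats)            # h: (1, batch, 64)
        return self.fc(h.squeeze(0))      # (batch, n_classes)


if __name__ == "__main__":
    raw = np.random.randn(N_CHANNELS, N_SAMPLES)           # placeholder epoch
    x = torch.from_numpy(preprocess(raw)).float().unsqueeze(0)
    logits = CNNRNN()(x)
    print(logits.shape)  # torch.Size([1, 10])
```

The CNN front end reduces the raw multichannel time series to a shorter feature sequence, which the recurrent layer then summarizes into a single vector for digit classification; this reflects the general spatial-then-temporal design of the proposed model, not its exact hyperparameters.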