Diabetic Retinopathy (DR) is a microvascular disease that affects the retinal vessels and is one of the most common causes of vision loss among diabetic patients. In 2000, about 4 million DR cases were estimated in the United States; by 2010 the number had reached 7.7 million, and it is expected to reach 14.6 million by 2050. The National Health and Nutrition Examination Survey (NHANES) analyzed vision data collected in 2005–2008 and estimated that 29% of people with diabetes aged 40 years or older had DR, while the prevalence of DR in the overall sample measured was 4% [1]. NHANES also reported that DR prevalence is approximately 32% in males versus about 26% in females, and 39% in non-Hispanic black people versus 25% in non-Hispanic white people [1].
Optical Coherence Tomography (OCT) is an emerging technology that allows for the study of blood flow within the eye’s vascular structure [2]. OCT is a non-invasive technique that uses low-coherence light to produce high-resolution, cross-sectional, micrometer-scale images. The principle of OCT is based on optics: the transmitted and reflected optical signals carry time-of-flight information, which yields spatial information about the scanned tissue [3].
The early stage of DR is known as non-proliferative diabetic retinopathy (NPDR). During this stage, the retinal vasculature begins a process of change in which vascular permeability and capillary occlusion increase. The advanced stage of DR is called proliferative diabetic retinopathy (PDR), in which vitreous hemorrhage is present. During this stage, the abnormal vessels begin bleeding into the vitreous humor and may cause tractional retinal detachment in the patient’s eye, leading to vision impairment. By obtaining OCT angiography images, ophthalmologists are able to detect diabetic eyes at risk of developing retinopathy [4]. Developing OCT computer-aided detection (CADe) software would assist ophthalmologists in diagnosing patients accurately, quickly, and safely, protecting diabetic eyes from vision loss at the early stages.
In this paper, we have designed an automated system to classify OCT images as normal or showing DR. The system was trained on 160 labeled images and tested on 54 images. Two types of classifiers were used: Support Vector Machine (SVM) and k-Nearest Neighbor (KNN).
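The two-classifier setup described above can be sketched as follows with scikit-learn. This is a minimal illustration, not our actual pipeline: the feature vectors are random placeholders standing in for features extracted from the OCT images, and only the 160/54 train/test split mirrors the counts in the text.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(160, 10))    # 160 training feature vectors (placeholder)
y_train = rng.integers(0, 2, size=160)  # 0 = normal, 1 = DR (placeholder labels)
X_test = rng.normal(size=(54, 10))      # 54 test feature vectors (placeholder)

# Fit both classifiers on the same training set.
svm = SVC(kernel="rbf").fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

svm_pred = svm.predict(X_test)
knn_pred = knn.predict(X_test)
print(svm_pred.shape, knn_pred.shape)  # both (54,)
```

In practice the feature vectors would come from the preprocessing and feature-extraction stages, and the kernel and number of neighbors would be tuned on the training set.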
Literature review
Recent studies have addressed the automated classification of OCT images by extracting image features and applying classification algorithms, or by segmentation. Priya et al. [5] proposed a system to diagnose diabetic retinopathy using 350 fundus images collected from the Aravind Eye Hospital and Postgraduate Institute of Ophthalmology. The images were produced by a fundus camera in RGB form. The authors began by preprocessing the images to make them suitable for the machine learning system: the images were converted to gray-scale and subjected to adaptive histogram equalization to enhance their contrast. After that, the Matched Filter Response (MFR) and Discrete Wavelet Transform (DWT) were applied to reduce the noise and the images’ size. Using image segmentation, the authors extracted features such as the blood vessels, NPDR hemorrhages, and PDR exudates. They applied three classifiers: Probabilistic Neural Network (PNN), Bayesian, and Support Vector Machine (SVM). The best results, achieved by the SVM classifier, were 96%, 98%, and 97.6% for specificity, sensitivity, and accuracy, respectively. Only 28.6% of the dataset was used for training, while the remaining 71.4% was used for testing; using more data for training could improve the performance.
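To illustrate one step of the preprocessing pipeline above, the following sketch implements histogram equalization of a gray-scale image in NumPy. For brevity this is the global variant; Priya et al. used the adaptive form, which applies the same CDF-remapping idea within local regions of the image.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Map gray levels through the image's own CDF to flatten the histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Classic equalization formula: rescale the CDF to the full 0-255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Low-contrast test image: gray levels confined to 64..127.
img = np.tile(np.arange(64, 128, dtype=np.uint8), (60, 1))
out = equalize_histogram(img)
print(out.min(), out.max())  # 0 255 -- contrast stretched to the full range
```

The adaptive version used in the paper differs only in computing a separate lookup table per tile and interpolating between tiles.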
Mahendran Gandhi et al. [6] used a gray-level co-occurrence matrix (GLCM) to extract the input features fed to an SVM classifier. They built an automated method using morphological operators and SVM classifiers on non-dilated color fundus retinal images to detect exudates. Five fundus images in JPEG format, each 2196 × 1958 pixels, were used. The SVM classifier graded the disease’s severity, i.e., whether the effect on the patient’s eye was moderate or severe. All five images were diagnosed as abnormal: three severely affected by exudates and two moderately affected.
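A GLCM tabulates how often pairs of gray levels co-occur at a given pixel offset; texture statistics such as the contrast, energy, and homogeneity mentioned in Table 1 are then computed from the normalized matrix. The following is a minimal NumPy sketch for horizontally adjacent pixels on a tiny quantized image (entropy, correlation, and dissimilarity are derived from the same matrix in an analogous way).

```python
import numpy as np

def glcm_features(img: np.ndarray, levels: int) -> dict:
    """GLCM over horizontally adjacent pixel pairs, plus three texture stats."""
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()            # normalize to joint probabilities
    i, j = np.indices(p.shape)
    return {
        "contrast": float((p * (i - j) ** 2).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1 + np.abs(i - j))).sum()),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
feats = glcm_features(img, levels=4)
print(feats)
```

Real implementations typically accumulate GLCMs over several offsets and angles and average the resulting statistics.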
Sohini Roychowdhury et al. [7] introduced a computer-aided screening system named DREAM, which uses fundus images collected from two databases, DIARETDB1 and MESSIDOR, to differentiate DR images from normal ones and generate a severity grade. Several classifiers were used: AdaBoost, Support Vector Machine (SVM), the Gaussian Mixture Model (GMM), and k-Nearest Neighbor (kNN). AdaBoost-based feature ranking reduced the 78 extracted features to 30 selected features, which decreased the average computing time from 59.54 s to 3.46 s. The DREAM system achieved a sensitivity, specificity, and AUC of 100%, 53.1%, and 0.904, respectively.
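The AdaBoost-based feature reduction described above can be sketched as follows: fit the ensemble, rank features by its importance scores, and keep the top k. The data here is synthetic, standing in for the 78 fundus-image features; only the 78-to-30 reduction mirrors the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 78))            # placeholder for 78 extracted features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels driven by two features

# AdaBoost over decision stumps yields per-feature importance scores.
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
top_k = np.argsort(ada.feature_importances_)[::-1][:30]  # keep 30 of 78
X_reduced = X[:, top_k]
print(X_reduced.shape)  # (200, 30)
```

Downstream classifiers (SVM, GMM, kNN) are then trained on the reduced matrix, which is what produces the reported drop in computing time.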
Ahmed El Tanboly et al. [8] developed a three-stage DR detection system for OCT images using different segmentation and classification techniques. They extracted three main features from the segmented OCT images: the reflectivity, curvature, and thickness of the retinal layers. Each segmented layer is characterized by the cumulative distribution function (CDF) of its extracted features. The classifier was trained to select the distinctive features of the retinal layers and to detect DR from their CDFs. The system achieved 83%, 92%, and 100% for sensitivity, accuracy, and specificity, respectively.
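The CDF-based description above amounts to summarizing each layer's feature samples by their empirical cumulative distribution. A minimal sketch, with made-up reflectivity samples standing in for a segmented layer's measurements:

```python
import numpy as np

def empirical_cdf(samples: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Fraction of samples less than or equal to each grid point."""
    return (samples[None, :] <= grid[:, None]).mean(axis=1)

reflectivity = np.array([0.2, 0.4, 0.4, 0.7, 0.9])  # toy per-layer samples
grid = np.array([0.0, 0.5, 1.0])
cdf_vals = empirical_cdf(reflectivity, grid)
print(cdf_vals)  # [0.  0.6 1. ]
```

Evaluating such CDFs on a fixed grid turns a variable-length set of per-layer measurements into a fixed-length vector that a classifier can consume.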
Mohammed Ghazal et al. [9] proposed a CADe system for detecting early-stage NPDR using OCT images. The system consists of four primary stages: preprocessing, feature extraction, system training, and diagnosis and testing. The preprocessing stage segments the retinal OCT images into 12 layers, which are aligned using layer 6, the outer nuclear layer (ONL), as a reference. The output of the preprocessing stage is fed into convolutional neural networks (CNNs) for training and evaluation. The best results were obtained with the proposed CNN: sensitivity, specificity, and accuracy of 100%, 88%, and 94%, respectively. However, the authors do not report how accurate the alignment along the y-axis is or how it affects the final result.
Peyman Gholami et al. [10] proposed an automated classification method to distinguish eyes with an ocular disease such as DR, Age-related Macular Degeneration (AMD), or Macular Hole (MH) from normal eyes by processing OCT images. The images were collected at the Sankara Nethralaya (SN) eye hospital, Chennai, India. The images were preprocessed by removing noise with a wavelet-based denoising technique and resized to 500 × 750 pixels. The authors relied on extracting LBP features to feed the classifiers, and feature selection reduced the feature count from 375,000 to 16,383. They used SVM, random forest, and multiphase classifiers. Classification of normal versus abnormal images achieved an AUC of 98.6%, and the multiphase classifier achieved AUCs of 100%, 95%, and 83% for AMD, DR, and MH, respectively. The proposed system detects AMD perfectly, but its MH detection needs further improvement.
Muhammad Awais et al. [11] worked on a system for separating Diabetic Macular Edema (DME) OCT images from normal ones, using images collected from the Singapore Eye Research Institute (SERI). They used a pre-trained CNN, the Visual Geometry Group network with 16 weight layers (VGG16), and extracted features at its different layers. They carried out four experiments: with noise removal, with image cropping, with both, and with neither. The best results, 93% accuracy, 87% sensitivity, and 100% specificity, were obtained using images with no noise removal or cropping and applying the kNN classifier. The authors do not reveal the number of images used in the experiments.
Xuechen Li et al. [12] developed an automated system called “OCTD_Net” for separating DR OCT images from normal ones. The system classifies each image as normal or DR, assigning the value 1 to DR images that show changes in the thickness and reflectivity of the retinal layers and 0 to DR images that do not display any significant changes. The features used were the optical reflectivity of the retinal layers (the gray-level intensity of the OCT images) and the retinal layers’ thickness (in pixels), with softmax used as the classifier. The system achieved a sensitivity of 0.90, an accuracy of 0.92, and a specificity of 0.95. Its advantage over comparable systems is the ability to grade the severity of DR where present.
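The softmax stage referenced above converts raw class scores into a probability distribution, from which the predicted label is the arg-max. A minimal sketch, with hypothetical network scores for the three outcomes the system distinguishes (normal, DR without layer changes, DR with layer changes):

```python
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = scores - scores.max(axis=-1, keepdims=True)  # guard against overflow
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([2.0, 1.0, 0.1])  # hypothetical network outputs
probs = softmax(scores)
print(probs.argmax())  # 0 -- the highest-scoring class wins
```

Subtracting the per-row maximum before exponentiating leaves the result unchanged mathematically but prevents overflow for large scores.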
Khaled Alsaih et al. [13] proposed a system for separating DME OCT images from normal ones, using 32 OCT volumes containing more than 3,800 images. The technique was based on evaluating intraretinal cystoid space formation, hard exudates, retinal thickening, and subretinal fluid. The features used were local binary patterns (LBP) and histograms of oriented gradients extracted within a multiresolution approach, combined with bag-of-words (BoW) representations and principal component analysis (PCA). Random forest and SVM classifiers were used, achieving a sensitivity and specificity of 87.50% each. Because it misses some positive cases, the system is not yet reliable enough for clinical use.
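The PCA-plus-SVM arrangement above is naturally expressed as a scikit-learn pipeline: project the texture descriptors onto their leading principal components, then classify in the reduced space. The data here is a random placeholder standing in for the LBP/HOG descriptors, and the dimensions are illustrative only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 64))      # 120 images x 64 texture features (placeholder)
y = rng.integers(0, 2, size=120)    # 0 = normal, 1 = DME (placeholder labels)

# PCA compresses the descriptors before the SVM sees them.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear")).fit(X, y)
pred = clf.predict(X[:5])
print(pred.shape)  # (5,)
```

Fitting the PCA inside the pipeline ensures the projection is learned from the training fold only, avoiding leakage during cross-validation.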
Yo-Ping Huang et al. [14] proposed a method for detecting DR using fundus images. They ranked the DR attributes from most to least relevant by applying a fuzzy analytical network process, and a transformed fuzzy neural network classifier was used to improve the classification process. Association rules among the selected DR attributes were extracted to reveal their degree of severity and importance. The proposed system with the newly introduced models B and C improved classification quality for both the training and testing sets, achieving an AUC of 1.0.
Table 1
A comparison of the reviewed methods by the key feature(s) and classifier(s) used.

# | Authors | Year | Key Features | Classifiers |
1 | [5] | 2013 | Thresholding, morphological processing algorithms | PNN, Bayesian classification, SVM |
2 | [6] | 2013 | Entropy, contrast, correlation, energy, homogeneity, dissimilarity | SVM |
3 | [7] | 2014 | 30 selected features | GMM, kNN, SVM, AdaBoost |
4 | [8] | 2017 | Reflectivity, curvature, and thickness of retinal layers | Deep Fusion Classification Network (DFCN) |
5 | [9] | 2019 | Patch extraction | CNN |
6 | [10] | 2018 | LBP | Random forest, SVM |
7 | [11] | 2017 | - | kNN |
8 | [12] | 2019 | Thickness and reflection of retinal layers | Softmax |
9 | [13] | 2017 | LBP | PCA, SVM |
10 | [14] | 2019 | - | Transformed fuzzy neural network |