Automatic analysis of retinal images to aid in the diagnosis and grading of diabetic retinopathy

Author:
  1. Romero Oraá, Roberto

Supervisor:
  1. María García Gadañón

Defense university: Universidad de Valladolid

Date of defense: 2 December 2022

Examination committee:
  1. Begoña Acha Piñero (Chair)
  2. Jesús Poza Crespo (Secretary)
  3. Rui Manuel Bernardes (Member)

Type: Thesis

Abstract

Many eye-related diseases, as well as conditions affecting the blood circulation or the brain, manifest themselves in the retina. This is the case of diabetic retinopathy (DR), the most common complication of diabetes mellitus (DM), with a prevalence above 22% among the diabetic population. DR has become one of the leading causes of preventable blindness in the working-age adult population. In this condition, the hyperglycemia induced by DM is known to damage the walls of the retinal vessels, causing several abnormalities in the retina. During the initial stages, these abnormalities include red lesions (RLs), such as hemorrhages (HEs) and microaneurysms (MAs), and hard exudates (EXs), which are yellowish-white deposits of lipoproteins and other proteins. Visual loss cannot be recovered, but it can be prevented from the early stages of DR, when treatments are effective. Therefore, early diagnosis is paramount. However, DR may remain clinically asymptomatic until an advanced stage, when vision is already affected and treatment may become difficult. For this reason, diabetic patients should undergo regular eye examinations through screening programs.

Due to its safety and cost-effectiveness, fundus imaging is the most established retinal imaging modality for these examinations, which aim to identify the clinical signs of DR. Traditionally, DR screening programs have relied on trained specialists visually inspecting the retinal images. However, this manual analysis is time-consuming and expensive. With the increasing incidence of DM and the limited number of clinicians and healthcare resources, early detection of DR by manual screening alone becomes non-viable. Additionally, there is a certain subjectivity associated with the diagnosis (around 11% discrepancy between specialists). For all of these reasons, computer-aided diagnosis (CAD) systems are required to assist specialists in providing a fast, reliable diagnosis, reducing their workload and the associated costs. The complexity of DR diagnosis suggests that CAD systems should be divided into several stages, leading to an extremely wide field of research. Although multiple methods can be found in the literature to carry out each of these stages, they are not exempt from limitations, and there is still room for improvement and techniques to explore.

In this context, we hypothesize that the application of novel, automatic algorithms for fundus image analysis could contribute to the early diagnosis of DR. Consequently, the main objective of the present Doctoral Thesis is to study, design and develop novel methods based on the automatic analysis of fundus images to aid in the screening, diagnosis, and treatment of DR.

To achieve this goal, we collected 2107 fundus images to build a private database, provided by the Instituto de Oftalmobiología Aplicada (IOBA) of the University of Valladolid (Valladolid, Spain) and the Hospital Clínico Universitario de Valladolid (Valladolid, Spain). Additionally, we used five publicly available retinal databases: DRIMDB, DIARETDB1, DRIVE, Messidor and Kaggle. Since these databases have different characteristics and purposes, they were used in different stages of the Thesis. This allowed us to validate the proposed methods on heterogeneous fundus images obtained with different protocols, cameras, resolutions and quality levels.

The stages of fundus image processing covered in this Thesis are: retinal image quality assessment (RIQA), the location of the optic disc (OD) and the fovea, the segmentation of RLs and EXs, and DR severity grading. RIQA was studied with two different approaches. The first approach combined novel global features derived from the spatial and spectral entropy-based quality (SSEQ) and natural image quality evaluator (NIQE) methods with sharpness and luminosity measures based on the continuous wavelet transform (CWT) and the hue-saturation-value (HSV) color model. It achieved 91.46% accuracy, 92.04% sensitivity, and 87.92% specificity on the private database. The second approach was based on deep learning, given the great potential this field has shown in computer vision in recent years. A convolutional neural network (CNN) with the InceptionResNetV2 architecture was used to detect retinal images of satisfactory quality, combined with data augmentation and transfer learning. This approach achieved 95.29% accuracy on the private database and 99.48% accuracy on the DRIMDB database.
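As an illustration of the deep learning RIQA approach, the minimal sketch below (assuming Keras/TensorFlow) builds a binary quality classifier on top of an ImageNet-pretrained InceptionResNetV2 backbone with on-the-fly data augmentation and transfer learning. The head layers, input size, augmentation settings and optimizer are illustrative assumptions, not the exact configuration used in the Thesis.

# Minimal sketch of a binary retinal image quality classifier built on
# InceptionResNetV2 with ImageNet weights (transfer learning).
# Head layers, input size and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

IMG_SIZE = (299, 299)  # assumed input resolution

# Light data augmentation applied on the fly during training
augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # freeze the pretrained feature extractor initially

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augmentation(inputs)
x = tf.keras.applications.inception_resnet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # satisfactory quality: yes/no

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
# Fine-tuning would follow by unfreezing `base` and training with a lower learning rate.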
The location of the OD and the fovea was performed using a combination of saliency maps. Spatial relationships between the main anatomical structures of the retina, as well as their visual features, were considered. The proposed methods were evaluated on the private database and on the public databases DRIVE, DIARETDB1 and Messidor. For the OD, we achieved 100% accuracy on all databases except Messidor (99.50%). For the fovea, we also reached 100% accuracy on all databases except Messidor (99.67%).

The joint segmentation of RLs and EXs was accomplished by decomposing the fundus image into layers. Novel indicators, such as the reflective features of the retina and the choroidal vasculature visible in tigroid retinas, proved useful for the classification of retinal lesions. Results were computed per pixel and per image. Using the private database, 88.34% per-image accuracy (ACCi), 91.07% per-pixel positive predictive value (PPVp), and 85.25% per-pixel sensitivity (SEp) were reached for the detection of RLs. Using the public database DIARETDB1, 90.16% ACCi, 96.26% PPVp, and 84.79% SEp were obtained. As for the detection of EXs, 95.41% ACCi, 96.01% PPVp, and 89.42% SEp were reached with the private database, and 91.80% ACCi, 98.59% PPVp, and 91.65% SEp with DIARETDB1.

An additional method was proposed for the segmentation of RLs based on superpixels. The Entropy Rate Superpixel algorithm was used to segment the potential RL candidates, which were then classified using a multilayer perceptron (MLP) neural network. Evaluating this method on the private database, we obtained 81.43% SEp, 86.59% PPVp, 84.04% per-image sensitivity (SEi), 85.00% per-image specificity (SPi), and 84.45% ACCi. On the DIARETDB1 database, we achieved 88.10% SEp, 93.10% PPVp, 84.00% SEi, 88.89% SPi, and 86.89% ACCi.
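To make the superpixel-based pipeline more concrete, the sketch below over-segments a fundus image and classifies each superpixel with an MLP. The Thesis uses the Entropy Rate Superpixel algorithm; since it is not available in common Python libraries, scikit-image's SLIC is used here as a stand-in, and the per-superpixel color features and MLP configuration are simplified assumptions.

# Sketch of superpixel candidate extraction + MLP classification.
# SLIC is a stand-in for the Entropy Rate Superpixel algorithm used in the Thesis;
# features and classifier settings are simplified assumptions.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2hsv
from sklearn.neural_network import MLPClassifier

def superpixel_features(image_rgb, n_segments=600):
    """Over-segment the image and compute simple color statistics per superpixel."""
    segments = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
    hsv = rgb2hsv(image_rgb)
    feats = []
    for label in np.unique(segments):
        mask = segments == label
        rgb_mean = image_rgb[mask].mean(axis=0)
        hsv_mean = hsv[mask].mean(axis=0)
        rgb_std = image_rgb[mask].std(axis=0)
        feats.append(np.concatenate([rgb_mean, hsv_mean, rgb_std]))
    return segments, np.array(feats)

# Training (per-superpixel labels would come from expert lesion annotations):
# _, X_train = superpixel_features(train_image)
# clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=500).fit(X_train, y_train)
# Prediction on a new image (superpixels predicted as class 1 form the RL mask):
# segments, X = superpixel_features(test_image)
# rl_mask = np.isin(segments, np.unique(segments)[clf.predict(X) == 1])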
Finally, we proposed an end-to-end deep learning framework for automatic DR severity grading. The method was based on a novel attention mechanism that attends separately to the dark and the bright structures of the retina. The framework includes data augmentation, transfer learning and fine-tuning. We used the Xception architecture as a feature extractor and the focal loss function to deal with data imbalance. The Kaggle DR detection dataset was used for development and validation. The International Clinical DR Scale, which comprises 5 DR severity levels, was considered. Classification over all classes achieved 83.70% accuracy and a quadratic weighted kappa of 0.78.

The combination of the different stages developed in this Doctoral Thesis forms a complete, automatic DR screening system that contributes to the early detection of DR. In this way, diabetic patients could receive better attention for their ocular health, avoiding vision loss. In addition, the workload of specialists could be relieved while healthcare costs are reduced.
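For reference, the following minimal sketch reflects the grading setup described above: an ImageNet-pretrained Xception backbone, a 5-way softmax head for the International Clinical DR Scale, a categorical focal loss to mitigate class imbalance, and the quadratic weighted kappa as evaluation metric. The head layers, focal loss parameters and input size are assumptions, and the separate attention of dark and bright retinal structures proposed in the Thesis is omitted.

# Sketch of the DR grading setup: Xception backbone, 5-class softmax head,
# categorical focal loss, quadratic weighted kappa for evaluation.
# The Thesis' attention mechanism is not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import cohen_kappa_score

NUM_CLASSES = 5          # International Clinical DR Scale severity levels
IMG_SIZE = (299, 299)    # assumed input resolution

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    """Focal loss for one-hot labels: down-weights easy, well-classified examples."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        cross_entropy = -y_true * tf.math.log(y_pred)
        modulating = alpha * tf.pow(1.0 - y_pred, gamma)
        return tf.reduce_sum(modulating * cross_entropy, axis=-1)
    return loss

backbone = tf.keras.applications.Xception(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = backbone(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss=categorical_focal_loss(), metrics=["accuracy"])

# After training, grading performance can be summarized with the
# quadratic weighted kappa between true and predicted severity levels:
# y_pred = model.predict(test_images).argmax(axis=1)
# qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")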