Sadhana, Volume 47. Published: 8 March 2022. Article ID 0049
Performance analysis of deep neural networks through transfer learning in retinal detachment diagnosis using fundus images
Sonal Yadav, Sanjoy Das, R Murugan, Sumantra Dutta Roy, Monika Agrawal, Tripti Goel, Anurag Dutta
Retinal detachment (RD) is a severe condition that causes decreased visual acuity and blindness if not treated in time. Early screening and identification of RD can improve the rate of successful visual outcomes. Manual screening for retinal detachment is a labor-intensive and time-consuming task. This paper is concerned with pre-trained deep learning networks for feature extraction and classification. Deep learning models need large amounts of training data since they involve many parameters. This is a severe problem in medical informatics, where the amount of available data is very low and ground-truth annotations cover only a small fraction of it. The domain of RD is no different: typical public-domain databases related to RD contain only a few hundred images, leading to fitting issues for deep learning models. This work investigates the role of transfer learning in feature extraction and classification of RD and Non-RD color fundus images. We also analyze the performance of different deep neural networks on fundus images in distinguishing RD eyes from Non-RD eyes. Deep convolutional networks such as AlexNet, InceptionV3, GoogleNet, VGG19, DenseNet, and ResNet50 were trained and tested on publicly available datasets of RD and Non-RD fundus images. A ResNet50 framework trained through transfer learning shows the best classification performance, with Accuracy, Sensitivity, Specificity, Precision, and F1-score values of 99.50%, 99.00%, 99.99%, 99.99%, and 99.49%, respectively, outperforming the other learning models in detecting RD and Non-RD fundus images. This study demonstrates promising results for a diagnostic system for retinal detachment with relatively high sensitivity and specificity.