Event Details

EFL-Net: An Efficient Lightweight Neural Network for Retinal Vessel Segmentation

Presenter: Nasrin Akbari
Supervisor:

Date: Mon, September 12, 2022
Time: 09:00:00 - 10:00:00
Place: ZOOM - Please see below.

Join Zoom Meeting
https://uvic.zoom.us/j/89872646668

Meeting ID: 898 7264 6668
One tap mobile
+16475580588,,89872646668# Canada
+17789072071,,89872646668# Canada

Dial by your location
        +1 647 558 0588 Canada
        +1 778 907 2071 Canada
Meeting ID: 898 7264 6668
Find your local number: https://uvic.zoom.us/u/keAchxgsfB

ABSTRACT

The shape of retinal vessels reflects a patient's overall health and aids in diagnosing a number of diseases, including diabetes and hypertension. Proper detection and treatment of such illnesses can prevent patient blindness. Deep learning algorithms have recently achieved the best results for retinal vessel segmentation compared to other techniques. However, a major drawback of these methods is that the models require a large number of parameters and computations. In this paper, we propose a lightweight neural network for retinal blood vessel segmentation named the Efficient and Fast Lightweight Network (EFL-Net). EFL-Net introduces the ResNet Branches Shuffle block (RBS block), which has a high capacity to extract features at several granularities, and the Dilated Separable Down block (DSD block), which enlarges the network's receptive field. Both proposed blocks are lightweight and can be plugged into state-of-the-art CNN backbones. In addition, we adopt PixelShuffle as the upsampling layer in the decoder of our model, which has a greater capacity for learning features than deconvolution and interpolation approaches. We report results on two datasets, DRIVE and CHASEDB1, achieving our best performance with F1 measures of 0.8242 and 0.8351, respectively. Compared to other networks such as LadderNet with 1.5 M parameters and DCU-Net with 1 M parameters, our model has far fewer parameters (0.340 M).
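To make the two architectural ideas in the abstract concrete, the following is a minimal PyTorch sketch of (a) a dilated depthwise-separable downsampling block, in the spirit of the DSD block, and (b) PixelShuffle-based decoder upsampling. The class names, channel counts, and hyperparameters here are illustrative assumptions, not the paper's actual EFL-Net implementation.

import torch
import torch.nn as nn

class DSDBlockSketch(nn.Module):
    """Hypothetical dilated separable downsampling block:
    a strided depthwise conv with dilation (enlarges the receptive field
    while downsampling) followed by a pointwise 1x1 conv."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=2,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class PixelShuffleUp(nn.Module):
    """Decoder upsampling via PixelShuffle: a 1x1 conv expands channels
    by r*r, then nn.PixelShuffle rearranges them into an r-times larger
    feature map (learned upsampling, unlike fixed interpolation)."""
    def __init__(self, in_ch, out_ch, r=2):
        super().__init__()
        self.expand = nn.Conv2d(in_ch, out_ch * r * r, kernel_size=1)
        self.shuffle = nn.PixelShuffle(r)

    def forward(self, x):
        return self.shuffle(self.expand(x))

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)        # dummy encoder feature map
    down = DSDBlockSketch(32, 64)(x)      # -> (1, 64, 32, 32)
    up = PixelShuffleUp(64, 32)(down)     # -> (1, 32, 64, 64)
    print(down.shape, up.shape)

Both modules are drop-in nn.Module building blocks, which reflects the abstract's claim that the proposed blocks can be plugged into existing CNN backbones; the specific layer ordering and activation choices above are assumptions for illustration only.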