Comparison of U-NET and ELU-NET for Pancreatic Cancer Medical Image Semantic Segmentation

  • Algi Fari Ramdhani Informatics Engineering Study Program, Department of Computer and Informatics Engineering, Politeknik Negeri Bandung, Kabupaten Bandung Barat, Jawa Barat 40559, Indonesia
  • Yudi Widhiyasana Informatics Engineering Study Program, Department of Computer and Informatics Engineering, Politeknik Negeri Bandung, Kabupaten Bandung Barat, Jawa Barat 40559, Indonesia
  • Setiadi Rachmat Informatics Engineering Study Program, Department of Computer and Informatics Engineering, Politeknik Negeri Bandung, Kabupaten Bandung Barat, Jawa Barat 40559, Indonesia
Keywords: U-NET, ELU-NET, Lightweight Model, Semantic Segmentation

Abstract

Medical image analysis for semantic segmentation using deep learning has been extensively developed. One notable architecture is U-NET, which has demonstrated high accuracy in segmentation tasks. A further advancement is ELU-NET, which aims to improve model efficiency and achieves relatively good accuracy; however, a direct comparative analysis of the two models is still needed. This study compares the models in terms of accuracy, storage usage, and processing time for semantic segmentation of pancreatic cancer images. The images are sourced from the PAIP 2023 Challenge and consist of hematoxylin and eosin (H&E)-stained images. Experiments were conducted by varying the number of filters and the model depth for both architectures, and evaluation was performed on a dataset of 57 pancreatic cancer images. The results show that U-NET achieved the highest accuracy at 92.8%, slightly outperforming ELU-NET, which attained 89.7%. However, ELU-NET is considerably more efficient in storage usage (8.1 MB versus 93.31 MB for U-NET, a difference of 85.21 MB) and processing time (4.0 s versus 5.3 s, a difference of 1.3 s). Thus, although ELU-NET does not surpass U-NET in accuracy, given the storage-size ratio of 1:11.51 and the processing-time ratio of 1:1.325 between ELU-NET and U-NET, the 3.1% accuracy difference represents a reasonable trade-off.
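The trade-off arithmetic in the abstract can be checked directly from the reported figures. This is a minimal sketch using only the numbers stated above; the storage ratio computed from the rounded sizes comes out to about 11.52, whereas the abstract reports 1:11.51, presumably from unrounded model sizes.

```python
# Reproduce the efficiency comparison from the reported results.
# All input numbers are taken directly from the abstract.
unet = {"accuracy_pct": 92.8, "storage_mb": 93.31, "time_s": 5.3}
elu_net = {"accuracy_pct": 89.7, "storage_mb": 8.1, "time_s": 4.0}

# Absolute differences (U-NET minus ELU-NET).
accuracy_gap = round(unet["accuracy_pct"] - elu_net["accuracy_pct"], 1)  # 3.1 %
storage_saving = round(unet["storage_mb"] - elu_net["storage_mb"], 2)    # 85.21 MB
time_saving = round(unet["time_s"] - elu_net["time_s"], 1)               # 1.3 s

# Ratios expressed as ELU-NET : U-NET, as in the abstract.
storage_ratio = unet["storage_mb"] / elu_net["storage_mb"]  # ~11.52 from rounded sizes
time_ratio = unet["time_s"] / elu_net["time_s"]             # 1.325

print(f"accuracy gap: {accuracy_gap}%")
print(f"storage: 1:{storage_ratio:.2f}, time: 1:{time_ratio:.3f}")
```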


Published
2025-02-26
How to Cite
Algi Fari Ramdhani, Yudi Widhiyasana, & Setiadi Rachmat. (2025). Comparison of U-NET and ELU-NET for Pancreatic Cancer Medical Image Semantic Segmentation. Jurnal Nasional Teknik Elektro Dan Teknologi Informasi, 14(1), 44-51. https://doi.org/10.22146/jnteti.v14i1.15262