Comparison of Optimizer Use in White Blood Cell Classification Employing CNN

  • Dede Kurniadi, Program Studi Teknik Informatika, Jurusan Ilmu Komputer, Institut Teknologi Garut, Garut, Jawa Barat 4415, Indonesia
  • Rifky Muhammad Shidiq, Program Studi Teknik Informatika, Jurusan Ilmu Komputer, Institut Teknologi Garut, Garut, Jawa Barat 4415, Indonesia
  • Asri Mulyani, Program Studi Teknik Informatika, Jurusan Ilmu Komputer, Institut Teknologi Garut, Garut, Jawa Barat 4415, Indonesia
Keywords: Adam Optimizer, Convolutional Neural Network, White Blood Cell Classification, RMSProp Optimizer, SGD Optimizer

Abstract

White blood cells are crucial components of the immune system responsible for combating infections and diseases. The classification and counting of white blood cells are typically performed manually by experienced operators or via automated cell analysis systems. The manual method is inefficient, time-consuming, and labor-intensive, while automated analysis machines are often expensive and require stringent sample preparation. This study aimed to compare the performance of three optimizers—root mean square propagation (RMSProp), stochastic gradient descent (SGD), and adaptive moment estimation (Adam)—in a white blood cell classification model using a convolutional neural network (CNN) algorithm. The dataset consisted of 12,392 images spanning four white blood cell classes: eosinophils, neutrophils, lymphocytes, and monocytes. The results indicate that the Adam optimizer achieved the best performance, with a training accuracy of 98.65% and an evaluation accuracy of 97.73%. Adam also outperformed the other optimizers in key metrics, including recall (97.43%), precision (97.42%), F1-score (97.42%), and specificity (99.11%). The AUC values for all classes exceeded 90%, demonstrating the model’s exceptional ability to distinguish between different cell types. The RMSProp optimizer yielded a training accuracy of 98.63%, whereas SGD achieved a lower training accuracy of 83.46%. This study highlights the significant impact of optimizer selection on CNN performance in white blood cell image classification, providing a foundational step toward the development of more accurate medical classification systems.
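The three update rules compared in the study can be illustrated on a toy problem. The sketch below is not the paper's CNN: it is a minimal, self-contained implementation of the SGD, RMSProp, and Adam updates minimizing the 1-D quadratic f(w) = (w - 3)^2, and all learning rates, decay factors, and step counts are illustrative choices, not values from the paper.

```python
import math

def optimize(update, steps, lr):
    """Minimize f(w) = (w - 3)^2 from w = 0 with a given update rule."""
    w, state = 0.0, {}
    for t in range(1, steps + 1):
        g = 2.0 * (w - 3.0)                  # gradient of (w - 3)^2
        w = update(w, g, lr, state, t)
    return w

def sgd(w, g, lr, state, t):
    # Plain stochastic gradient descent: step against the raw gradient.
    return w - lr * g

def rmsprop(w, g, lr, state, t, beta=0.9, eps=1e-8):
    # Scale the step by a running average of squared gradients.
    v = beta * state.get("v", 0.0) + (1 - beta) * g * g
    state["v"] = v
    return w - lr * g / (math.sqrt(v) + eps)

def adam(w, g, lr, state, t, b1=0.9, b2=0.999, eps=1e-8):
    # Combine momentum (first moment) with RMSProp-style scaling (second moment).
    m = b1 * state.get("m", 0.0) + (1 - b1) * g
    v = b2 * state.get("v", 0.0) + (1 - b2) * g * g
    state["m"], state["v"] = m, v
    m_hat = m / (1 - b1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)                # bias-corrected second moment
    return w - lr * m_hat / (math.sqrt(v_hat) + eps)

w_sgd = optimize(sgd, steps=200, lr=0.1)
w_rms = optimize(rmsprop, steps=500, lr=0.01)
w_adam = optimize(adam, steps=2000, lr=0.005)
```

On this toy objective all three rules converge to w = 3; the study's contribution is measuring how these same update rules behave when training a CNN on white blood cell images, where the adaptive per-parameter scaling of Adam and RMSProp gave markedly higher accuracy than plain SGD.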

Published
2025-03-06
How to Cite
Dede Kurniadi, Rifky Muhammad Shidiq, & Asri Mulyani. (2025). Comparison of Optimizer Use in White Blood Cell Classification Employing CNN. Jurnal Nasional Teknik Elektro dan Teknologi Informasi, 14(1), 77-86. https://doi.org/10.22146/jnteti.v14i1.17162