Masked Face Recognition and Temperature Monitoring Systems for Airplane Passengers Using Sensor Fusion
Transportation is currently an unavoidable necessity. However, the COVID-19 pandemic has affected every industry, including the Indonesian aviation industry, and technology offers one way to address these problems. This research develops a monitoring system for masked face recognition and body-temperature detection during the passenger check-in process at the airport. Its contribution is that the system can distinguish the type of face mask worn: only medical masks and N95/KN95 respirator masks are classified as 'Good Masked'. An IP camera and a thermal camera identify the masked face and measure body temperature, respectively. A sensor fusion method decides whether a passenger may depart, based on the measured body temperature, the use of a standardized face mask, and facial recognition of the passenger. A convolutional neural network (CNN) performs face and face-mask recognition; the trained CNN can distinguish a masked face from an unmasked one. Model training was conducted four times according to four proposed scenarios, and the best results were obtained in the fourth scenario, which used a 9:1 split of training to testing data and 500 epochs. Face detection is based on a single-shot multibox detector (SSD) with the ResNet-10 architecture, while mask detection uses a CNN with the MobileNetV2 architecture. The CNN models achieved 100% accuracy for both face recognition and mask recognition. All check-in monitoring and verification data are displayed on a web application hosted on localhost.
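As a minimal sketch of the rule-based sensor fusion described above (a passenger may depart only if body temperature is acceptable, the mask is a standardized type, and the face is recognized), the decision logic might look like the following. The temperature threshold of 37.5 °C, the class labels, and all function and field names are illustrative assumptions, not values taken from the paper.

```python
from dataclasses import dataclass

# Mask classes accepted as 'Good Masked' (labels are assumed for illustration)
GOOD_MASK_TYPES = {"medical", "n95_kn95"}

@dataclass
class PassengerReading:
    """One fused observation of a passenger at check-in.

    Combines three sources: thermal camera (temperature),
    mask classifier (mask_type), and face recognizer (face_match).
    """
    temperature_c: float   # body temperature from the thermal camera
    mask_type: str         # predicted mask class from the CNN classifier
    face_match: bool       # True if the face recognizer verified identity

def fuse_decision(reading: PassengerReading, temp_limit_c: float = 37.5) -> bool:
    """Rule-based fusion: all three checks must pass for departure.

    The 37.5 degC limit is an assumed value, not one stated in the paper.
    """
    good_mask = reading.mask_type in GOOD_MASK_TYPES
    temp_ok = reading.temperature_c <= temp_limit_c
    return good_mask and temp_ok and reading.face_match

# A cloth mask is rejected even with normal temperature and a face match
print(fuse_decision(PassengerReading(36.6, "cloth", True)))    # False
print(fuse_decision(PassengerReading(36.6, "medical", True)))  # True
```

In practice each input would carry a confidence score from its model rather than a hard boolean, but a conjunction of per-sensor checks captures the accept/reject behavior the abstract describes.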