Improving the Adaptive Monte Carlo Localization Accuracy Using a Convolutional Neural Network
This paper presents an improvement to the accuracy of the adaptive Monte Carlo localization (AMCL) system in robots using a convolutional neural network (CNN). Localization is the process by which a robot recognizes its position within its working environment. It is essential because it allows robots to navigate and map efficiently and accurately; without proper localization, a robot cannot operate effectively and may lose its bearings or collide with objects. AMCL is a popular and widely applied localization method that takes the changes in the robot's position and light detection and ranging (LiDAR) sensor readings as input. Readings of the robot's position change are susceptible to error caused by wheel slip or deformation, and this inaccuracy propagates into the position predicted by AMCL, so improvement is required. The novelty of this paper lies in providing compensation values that reduce the error of the AMCL results. These compensation values were obtained from CNN training; hence, the proposed method is dubbed AMCL+CNN. The inputs to the CNN were the changes in wheel-odometry values and the distances read by the LiDAR sensor, and the CNN outputs were compared to target data in the form of the robot's actual position obtained from observation. The network was trained for 200 epochs to achieve the lowest validation loss. Testing was performed on a robot running the robot operating system (ROS), with training and testing datasets obtained from rosbag recordings as the robot traversed the test area. In both straight and turning scenarios, the AMCL+CNN algorithm yielded smaller errors than regular AMCL and Monte Carlo localization (MCL), and it was also superior in positional-error metrics to several other comparison methods.
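The compensation idea described above can be illustrated with a minimal sketch: a small network consumes a LiDAR scan together with the wheel-odometry change and outputs a (dx, dy, dtheta) correction that is added to the AMCL pose estimate. The shapes, the tiny convolution-plus-linear network, and all parameter values below are illustrative assumptions for exposition, not the authors' trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1D convolution: x (L,), kernels (K, k) -> (K, L-k+1)."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (L-k+1, k)
    return kernels @ windows.T

def cnn_correction(scan, odom_delta, params):
    """Predict a (dx, dy, dtheta) compensation for the AMCL pose estimate."""
    feat = np.maximum(conv1d(scan, params["kernels"]), 0.0)  # ReLU feature maps
    pooled = feat.mean(axis=1)                               # global average pool
    h = np.concatenate([pooled, odom_delta])                 # fuse LiDAR + odometry
    return params["w"] @ h + params["b"]                     # linear head, 3 outputs

# Illustrative inputs: a 360-beam LiDAR scan and a wheel-odometry change.
scan = rng.uniform(0.2, 8.0, size=360)
odom_delta = np.array([0.05, 0.0, 0.01])   # dx, dy, dtheta from wheel odometry

# Placeholder (untrained) weights; in the paper these come from CNN training.
params = {
    "kernels": rng.normal(scale=0.1, size=(8, 5)),
    "w": rng.normal(scale=0.1, size=(3, 8 + 3)),
    "b": np.zeros(3),
}

amcl_pose = np.array([1.0, 2.0, 0.1])      # x, y, theta estimated by AMCL
corrected = amcl_pose + cnn_correction(scan, odom_delta, params)
print(corrected.shape)  # (3,)
```

In practice the correction head would be trained against the robot's observed ground-truth poses (the target data mentioned above) so that the compensated pose minimizes the validation loss.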
© Jurnal Nasional Teknik Elektro dan Teknologi Informasi, under the terms of the Creative Commons Attribution-ShareAlike 4.0 International License.