Optimization of Parallel Neural Network Layer Configuration in English Text Sentiment Analysis

  • Agung Nugroho, Universitas AMIKOM Yogyakarta, Sleman, Yogyakarta, Indonesia
  • Arief Setyanto, Informatics, Faculty of Computer Science, Universitas AMIKOM Yogyakarta, Sleman, Yogyakarta, Indonesia
Keywords: Sentiment Analysis, Parallel Layer, Bi-LSTM, GRU, Keras

Abstract

Accuracy in sentiment classification is critical if a trained model is to support business decisions reliably. The researchers propose a method of configuring neural network models in parallel to improve classification accuracy. In the first stage, a bidirectional long short-term memory (Bi-LSTM) algorithm with Keras embedding in a sequential layer configuration produced the best accuracy of 80.20%. This first-stage result served as the baseline for the layer combinations in the second stage of the experiment. In the second stage, the Bi-LSTM algorithm was combined in parallel with other algorithms, namely the gated recurrent unit (GRU), recurrent neural network (RNN), and Simple RNN, all with Keras embedding. The combination of three parallel layers, GRU-BiLSTM-RNN with Keras embedding, produced the highest accuracy for three-class sentiment analysis, at 88%. A t-test with a critical p-value of 0.05 was carried out to compare the accuracies produced by the sequential and parallel configurations. The t-test between the sequential and parallel configurations yielded a p-value of 0.5e-9, far below the critical value of 0.05, so the mean accuracies of the two configurations are statistically significantly different.
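The parallel layer configuration described in the abstract can be sketched with the Keras functional API: a shared embedding feeds three recurrent branches (GRU, Bi-LSTM, and Simple RNN) whose outputs are concatenated before a three-class softmax. The vocabulary size, embedding dimension, sequence length, and unit counts below are illustrative assumptions, not values reported by the authors.

```python
# Minimal sketch of a parallel GRU-BiLSTM-RNN configuration, assuming
# hypothetical hyperparameters (VOCAB, EMB_DIM, SEQ_LEN, units=64).
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, EMB_DIM, SEQ_LEN, NUM_CLASSES = 10000, 100, 50, 3

inputs = layers.Input(shape=(SEQ_LEN,))
emb = layers.Embedding(VOCAB, EMB_DIM)(inputs)  # shared Keras embedding

# Three recurrent branches run in parallel over the same embedded sequence.
gru_branch = layers.GRU(64)(emb)
bilstm_branch = layers.Bidirectional(layers.LSTM(64))(emb)
rnn_branch = layers.SimpleRNN(64)(emb)

# Concatenate the branch outputs and classify into three sentiment classes.
merged = layers.concatenate([gru_branch, bilstm_branch, rnn_branch])
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In contrast, the sequential baseline would stack its layers one after another; here the branches see the same embedded input side by side and the classifier learns from their concatenated representations.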
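The significance test in the abstract compares per-run accuracies of the two configurations with an independent-samples t-test at a 0.05 threshold. A minimal sketch follows; the accuracy lists are made-up placeholders, not the authors' measurements.

```python
# Independent-samples t-test on per-run accuracies of the sequential
# and parallel configurations. The values below are hypothetical.
from scipy import stats

sequential_acc = [0.801, 0.799, 0.802, 0.800, 0.803]  # placeholder runs
parallel_acc = [0.879, 0.881, 0.878, 0.882, 0.880]    # placeholder runs

t_stat, p_value = stats.ttest_ind(sequential_acc, parallel_acc)
significant = p_value < 0.05  # reject equal means at the 0.05 level
```

With clearly separated accuracy distributions like these, the p-value falls well below 0.05, mirroring the paper's conclusion that the parallel configuration's mean accuracy differs significantly from the sequential baseline's.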


Published
2025-11-28
How to Cite
Nugroho, A., & Setyanto, A. (2025). Optimization of Parallel Neural Network Layer Configuration in English Text Sentiment Analysis. Jurnal Nasional Teknik Elektro Dan Teknologi Informasi, 14(4), 245-253. https://doi.org/10.22146/jnteti.v14i4.21069