From Reviews to Results: Leveraging Amazon Feedback for Product Evolution
DOI: https://doi.org/10.32628/IJSRSET2411458

Keywords: BERT, sentiment analysis, Amazon reviews, product development, natural language processing, machine learning

Abstract
This study explores the efficacy of Bidirectional Encoder Representations from Transformers (BERT) in accurately classifying customer sentiment in Amazon product reviews. The goal is to determine whether BERT surpasses traditional sentiment analysis methods (Logistic Regression, TF-IDF, Random Forest, Naive Bayes, SVM) in understanding and classifying the customer opinions expressed in those reviews. By leveraging BERT's contextual understanding, the research aims to overcome the limitations of traditional methods. The performance of each model is evaluated using metrics such as accuracy, precision, recall, and F1-score. The expected outcomes are advancements in sentiment analysis techniques, valuable insights that help businesses leverage customer feedback for improved product development, and wider adoption of sentiment analysis across industries.
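The evaluation metrics named in the abstract can be made concrete with a minimal sketch. The snippet below computes accuracy, precision, recall, and F1-score for a binary positive/negative sentiment task in plain Python; the review labels are purely illustrative, not taken from the study's dataset.

```python
def classification_metrics(y_true, y_pred, positive="pos"):
    """Accuracy, precision, recall, and F1 for a binary sentiment task.

    `positive` names the class treated as "positive sentiment";
    precision/recall/F1 are computed with respect to that class.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical gold labels and model predictions for six reviews.
y_true = ["pos", "pos", "neg", "neg", "pos", "neg"]
y_pred = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(classification_metrics(y_true, y_pred))
```

In practice, each model (BERT or a traditional baseline) would supply `y_pred` for a held-out test split, and the same four metrics would be compared across models.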
License
Copyright (c) 2024 International Journal of Scientific Research in Science, Engineering and Technology
This work is licensed under a Creative Commons Attribution 4.0 International License.