This paper investigates the effectiveness of transformer-based models, specifically BERT and RoBERTa, for aspect-based sentiment analysis (ABSA) on product reviews, comparing their performance against LSTM and CNN baselines. The study fine-tunes these models on publicly available, annotated product review datasets and evaluates them using accuracy, precision, recall, and F1-score. Results demonstrate that RoBERTa achieves superior performance with an accuracy of 0.93 and an F1-score of 0.92, significantly outperforming LSTM and CNN, highlighting the ability of transformers to capture contextual dependencies for ABSA.
RoBERTa substantially outperforms LSTM and CNN baselines in aspect-based sentiment analysis of product reviews, reaching 93% accuracy and underscoring transformers' advantage in capturing contextual dependencies.
Abstract:
Aspect-Based Sentiment Analysis (ABSA) is essential for extracting detailed sentiment polarity regarding specific aspects in product reviews, providing deeper insights into customer opinions on various product attributes. Unlike document-level sentiment analysis, ABSA allows a more granular understanding, crucial for e-commerce analytics and decision-making systems. This study investigates the effectiveness of transformer-based models, such as BERT and RoBERTa, in performing ABSA for product review mining.

Purpose:
This research aims to explore the application of transformer-based models for aspect-based sentiment analysis, comparing their performance with traditional deep learning models (LSTM and CNN) in the context of mining product reviews. The study evaluates how transformer-based models can more effectively capture sentiment polarity at the aspect level.

Methods/Study design/approach:
The study uses publicly available product review datasets from large-scale e-commerce platforms, where each review is annotated with aspect terms and sentiment polarities (positive, negative, neutral). The datasets were split into training, validation, and test sets in an 80:10:10 ratio. The models (BERT, RoBERTa, LSTM, and CNN) were fine-tuned on the ABSA task. Performance was evaluated using standard metrics: accuracy, precision, recall, and F1-score.

Results/Findings:
The results show that transformer-based models, especially RoBERTa, significantly outperform conventional deep learning baselines such as LSTM and CNN. RoBERTa achieved the best performance with an accuracy of 0.93 and an F1-score of 0.92, while BERT achieved an accuracy of 0.91 and an F1-score of 0.90. In contrast, LSTM and CNN achieved F1-scores of 0.82 and 0.84, respectively. The transformer models excel in capturing contextual dependencies and associating sentiment polarity with the correct aspects, particularly in complex and multi-aspect sentences.
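The evaluation protocol described above (pairing each review sentence with an aspect term, an 80:10:10 split, and macro-averaged classification metrics) can be sketched in plain Python. The `[CLS] ... [SEP]` pairing template and all helper names here are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative sketch of the ABSA setup in the abstract; the pairing
# template, helper names, and toy labels are assumptions for clarity.

LABELS = ["positive", "negative", "neutral"]

def make_absa_input(sentence: str, aspect: str) -> str:
    """Pair one review sentence with one aspect term, BERT-style,
    so the classifier predicts sentiment toward that aspect."""
    return f"[CLS] {sentence} [SEP] {aspect} [SEP]"

def split_80_10_10(examples):
    """Deterministic 80:10:10 train/validation/test split."""
    n = len(examples)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return (examples[:n_train],
            examples[n_train:n_train + n_val],
            examples[n_train + n_val:])

def macro_f1(y_true, y_pred):
    """Macro-averaged F1 over the three sentiment classes, built
    from per-class precision and recall as in the paper's metrics."""
    f1s = []
    for label in LABELS:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

In practice, a multi-aspect review contributes one paired input per annotated aspect, which is how a single sentence can carry different polarities for different aspects.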
This highlights the superior ability of transformers to handle long-range dependencies and complex sentence structures compared to LSTM and CNN.

Conclusions:
The findings confirm that transformer-based models are highly effective for aspect-based sentiment analysis, providing a more reliable approach to product review mining. Future research should address the efficiency and interpretability of these models, particularly for large-scale deployment in real-world e-commerce applications.
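The long-range dependency handling attributed to the transformers rests on self-attention, which weights every token in the sentence when building each representation, rather than propagating state step by step as an LSTM does. A minimal pure-Python sketch of scaled dot-product attention for a single query (the vectors are illustrative, not the paper's models):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector:
    score every key, normalize with softmax, and return the
    weighted mix of value vectors. Distant tokens contribute
    directly through their weights, with no recurrence."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

Because every token attends to every other token in one step, an aspect term can draw sentiment evidence from anywhere in a long, multi-clause review, which is consistent with the advantage reported above on complex, multi-aspect sentences.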