Abstract
This study addresses lexical ambiguity in Sesotho sa Leboa arising from words with multiple meanings, commonly known as homonyms or polysemous words. Compared with, for example, European languages, this lexical ambiguity poses computational semantic problems for NLP when attempting to model the lexicon of the language: it is difficult to determine the correct lexical category and sense of an ambiguous word. To address the problem of polysemy in Sesotho sa Leboa, this study set out to develop a word sense discrimination (WSD) scheme using a corpus-based hybrid of transformer architectures and deep learning models. In addition, the performance of baseline and enhanced machine learning models on a sequence-based natural language processing (NLP) task was evaluated and compared. The baseline models included RNN-LSTM, BiGRU, LSTMLM, DeBERTa, and DistilBERT, with accuracies of 61%, 79%, 74%, 70%, and 64%, respectively. Among these, BiGRU was the strongest performer, leveraging its bidirectional architecture to achieve the highest baseline accuracy. Transformer-based models such as DeBERTa and DistilBERT showed moderate performance, with the latter prioritizing efficiency at the cost of accuracy. The enhanced experiments explored optimization techniques and hybrid model architectures to improve performance. BiGRU optimized with ADAM achieved an accuracy of 84%, while BiGRU with an attention mechanism improved further to 85%, demonstrating the effectiveness of these enhancements. Hybrid models integrating BiGRU with transformer architectures produced mixed results: BiGRU + DeBERTa and BiGRU + ALBERT achieved the highest accuracies of 85% and 84%, respectively, highlighting the complementary strengths of bidirectional context modeling and transformer-based contextual understanding, whereas the hybrid BiGRU + RoBERTa model underperformed at 70% accuracy, indicating a potential mismatch in model synergy. These findings underscore how important hybridization and optimization are for reaching state-of-the-art performance on NLP tasks. According to the study's findings, attention-based BiGRU and BiGRU–transformer hybrids, especially those incorporating DeBERTa and ALBERT, are the most promising approaches for combining accuracy and efficiency. To further improve speed, future research should focus on task-specific optimizations and on improving hybrid model integration.
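The sketch below is not taken from the paper; it is a minimal illustration, assuming a PyTorch implementation, of the kind of attention-augmented BiGRU sense classifier the abstract describes (a bidirectional GRU encoder, additive attention pooling, and ADAM optimization). The class name, vocabulary size, embedding and hidden dimensions, and number of senses are illustrative placeholders rather than the study's actual configuration.

```python
import torch
import torch.nn as nn

class BiGRUAttentionClassifier(nn.Module):
    """Sketch: bidirectional GRU encoder with additive attention pooling
    for discriminating senses of an ambiguous Sesotho sa Leboa word.
    All hyperparameters here are illustrative assumptions."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_senses=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bigru = nn.GRU(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Additive attention: score each time step, pool by softmax weights.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_senses)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded sentence containing the target word
        embedded = self.embedding(token_ids)                 # (batch, seq_len, embed_dim)
        outputs, _ = self.bigru(embedded)                    # (batch, seq_len, 2*hidden_dim)
        weights = torch.softmax(self.attn(outputs), dim=1)   # attention weights over time steps
        context = (weights * outputs).sum(dim=1)             # attention-pooled sentence vector
        return self.classifier(context)                      # sense logits

model = BiGRUAttentionClassifier(vocab_size=20000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # ADAM optimizer, as in the study
```

A hybrid variant in the spirit of BiGRU + DeBERTa or BiGRU + ALBERT would, under the same assumptions, replace the trainable embedding layer with contextual token representations produced by the pretrained transformer and feed those into the BiGRU encoder.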
| Original language | English |
| --- | --- |
| Article number | 3608 |
| Journal | Applied Sciences (Switzerland) |
| Volume | 15 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - Apr 2025 |
| Externally published | Yes |
Keywords
- ADAM optimizer
- BiGRU
- DeBERTa
- DistilBERT
- LSTMLM
- RNN-LSTM
- attention mechanism
- bidirectional neural networks
- contextual understanding
- hybrid models
- machine learning
- natural language processing (NLP)
- optimization
- performance evaluation
- sequence modeling
- transformer architectures