Natural Language Processing (NLP)-Based Detection of Depressive Comments and Tweets: A Text Classification Approach
Abstract: Depression is a major mental health problem that affects millions of people globally, causing significant emotional distress and impairing quality of life. With the pervasive use of social media platforms, individuals often express their thoughts and emotions through online posts, comments, and tweets, presenting an opportunity to study and detect depressive language patterns. This research used a Kaggle dataset of posts collected between December 2019 and December 2020, originating largely from India. This paper presents a novel approach for detecting depressive sentiment in online discourse using Natural Language Processing (NLP) and machine learning techniques. The study aims to develop an automated system capable of accurately identifying depressive comments and tweets, facilitating early intervention and support for individuals potentially struggling with mental health challenges. The proposed methodology was rigorously evaluated using standard performance metrics, including precision, recall, F1-score, and the ROC curve. The study also conducted qualitative analyses to gain insight into the textual patterns and linguistic cues most indicative of depressive sentiment. The results are promising: a maximum validation accuracy of 0.88 demonstrates the model's ability to classify depressive and non-depressive comments and tweets accurately. The outcomes of this research have significant implications for mental health monitoring and intervention strategies. By accurately detecting depressive sentiment in online discourse, healthcare professionals and support services can proactively reach out to individuals exhibiting potential signs of depression, fostering early intervention and improving overall mental health outcomes.
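To illustrate the kind of text-classification pipeline and evaluation described above, the sketch below trains a simple TF-IDF plus logistic-regression baseline and reports precision, recall, F1-score, and ROC AUC on a held-out validation split. The file name (depression_tweets.csv), the column names (text, label), and the choice of a linear baseline are assumptions made for illustration only; the abstract does not specify the model architecture, and the published system may use a different classifier.

```python
# Minimal baseline sketch for depressive-text classification and evaluation.
# Assumptions (not from the article): a CSV with "text" and "label" columns,
# where label 1 = depressive and 0 = non-depressive, and a TF-IDF +
# logistic-regression baseline standing in for the paper's actual model.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

df = pd.read_csv("depression_tweets.csv")  # hypothetical file name

# Stratified split so both classes appear in the validation set.
X_train, X_val, y_train, y_val = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42
)

# Turn raw text into unigram/bigram TF-IDF features.
vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2), stop_words="english")
X_train_vec = vectorizer.fit_transform(X_train)
X_val_vec = vectorizer.transform(X_val)

# Train the classifier and score the held-out split.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

y_pred = clf.predict(X_val_vec)
y_prob = clf.predict_proba(X_val_vec)[:, 1]  # probability of the depressive class

print(f"Precision: {precision_score(y_val, y_pred):.3f}")
print(f"Recall:    {recall_score(y_val, y_pred):.3f}")
print(f"F1-score:  {f1_score(y_val, y_pred):.3f}")
print(f"ROC AUC:   {roc_auc_score(y_val, y_prob):.3f}")
```

The same evaluation calls apply unchanged if the baseline is swapped for a deeper model (such as an LSTM or transformer), as long as the model exposes predicted labels and class probabilities for the validation set.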