Natural Language Processing (NLP)-Based Detection of Depressive Comments and Tweets: A Text Classification Approach

Keywords

Depression
F1-score
long short-term memory (LSTM)
mental health
natural language processing (NLP)

How to Cite

Natural Language Processing (NLP)-Based Detection of Depressive Comments and Tweets: A Text Classification Approach. (2024). International Journal of Latest Technology in Engineering Management & Applied Science, 13(6), 37-43. https://doi.org/10.51583/IJLTEMAS.2024.130606

Abstract

Depression is a major mental health problem that affects millions of people globally, causing significant emotional distress and reducing quality of life. With the pervasive use of social media platforms, individuals often express their thoughts and emotions through online posts, comments, and tweets, presenting an opportunity to study and detect depressive language patterns. This research used a Kaggle dataset collected between December 2019 and December 2020, originating largely from India. This paper presents a novel approach for detecting depressive sentiment in online discourse using Natural Language Processing (NLP) and machine learning techniques. The study aims to develop an automated system capable of accurately identifying depressive comments and tweets, facilitating early intervention and support for individuals potentially struggling with mental health challenges. The proposed methodology was rigorously evaluated using standard performance metrics, including precision, recall, F1-score, and the ROC curve, and qualitative analyses were conducted to identify the textual patterns and linguistic cues most indicative of depressive sentiment. The results are promising: a maximum validation accuracy of 0.88 demonstrates the model's ability to classify depressive and non-depressive comments and tweets accurately. These outcomes have significant implications for mental health monitoring and intervention strategies. By accurately detecting depressive sentiment in online discourse, healthcare professionals and support services can proactively reach out to individuals exhibiting potential signs of depression, fostering early intervention and improving overall mental health outcomes.
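For readers unfamiliar with the evaluation metrics named above, precision, recall, and F1-score for a binary classifier follow directly from confusion-matrix counts. The sketch below shows the standard formulas; the counts used are illustrative placeholders, not figures from this study.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1-score from confusion-matrix counts.

    tp: true positives (depressive texts correctly flagged)
    fp: false positives (non-depressive texts flagged as depressive)
    fn: false negatives (depressive texts missed)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Illustrative counts only (hypothetical, not the paper's results):
p, r, f1 = precision_recall_f1(tp=88, fp=12, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))  # → 0.88 0.898 0.889
```

F1 is the harmonic mean of precision and recall, so it is algebraically equal to 2·TP / (2·TP + FP + FN); either form gives the same value.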

