Articles

Found 2 Documents

Classification Between Suicidal Ideation and Depression Through Natural Language Processing Using Recurrent Neural Network
Rhenaldy Rhenaldy; Ladysa Stella Karenza; Hidayaturrahman Hidayaturrahman; Muhamad Keenan Ario
Indonesian Journal of Artificial Intelligence and Data Mining Vol 5, No 2 (2022): September 2022
Publisher : Universitas Islam Negeri Sultan Syarif Kasim Riau

DOI: 10.24014/ijaidm.v5i2.17485

Abstract

Machine learning has been applied in various ways, including to detect depression in individuals. However, there is hardly any research on classifying between suicidal ideation and depression in individuals through text analysis. Differentiating between depression and suicidal ideation is crucial, given the difference in treatment between the two mental illnesses. In this paper, we propose a detection model using a Recurrent Neural Network (RNN) in the hope of improving on previous models made by other researchers. Comparing the proposed model against previous works as baselines, we found that the proposed RNN model performed better than the baseline models, with an accuracy of 86.81%, precision of 97.13%, recall of 94.69%, F1 score of 95.90%, and area under the curve (AUC) score of 92.84%.
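As a rough illustration of the kind of model this abstract describes, the sketch below shows a minimal RNN binary text classifier in PyTorch. The paper does not specify its architecture, so the embedding size, hidden size, and choice of an LSTM cell here are assumptions made purely for illustration.

# Minimal sketch of an RNN-based binary text classifier (PyTorch).
# The abstract reports an RNN model but not its exact architecture, so
# the embedding size, hidden size, and LSTM cell below are assumptions.
import torch
import torch.nn as nn

class TextRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 1)  # one logit: ideation vs. depression

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)             # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.rnn(embedded)              # final hidden state per sequence
        return self.classifier(hidden[-1]).squeeze(-1)   # raw logit per example

# Usage: sigmoid(logit) > 0.5 maps to one class, otherwise the other.
model = TextRNN(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (4, 50)))  # batch of 4 dummy token sequences
probs = torch.sigmoid(logits)

Training such a model against the baselines would use a standard binary cross-entropy loss over labeled posts; the reported accuracy, precision, recall, F1, and AUC figures come from the paper's own evaluation, not from this sketch.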
Adaptive Gradient Compression: An Information-Theoretic Analysis of Entropy and Fisher-Based Learning Dynamics
Hidayaturrahman Hidayaturrahman
International Journal of Computer Science and Humanitarian AI Vol. 2 No. 2 (2025): IJCSHAI
Publisher : Bina Nusantara University

DOI: 10.21512/ijcshai.v2i2.14533

Abstract

Deep neural networks require intensive computation and communication due to the large volume of gradient updates exchanged during training. This paper investigates Adaptive Gradient Compression (AGC), an information-theoretic framework that reduces redundant gradients while preserving learning stability. Two independent compression mechanisms are analyzed: an entropy-based scheme, which filters gradients with low informational uncertainty, and a Fisher-based scheme, which prunes gradients with low sensitivity to the loss curvature. Both approaches are evaluated on the CIFAR-10 dataset using a ResNet-18 model under identical hyperparameter settings. Results show that entropy-guided compression achieves a 33.8× reduction in gradient density with only a 4.4% decrease in test accuracy, while Fisher-based compression attains a 14.3× reduction and smoother convergence behavior. Despite modest increases in per-iteration latency, both methods maintain stable training and demonstrate that gradient redundancy can be systematically controlled through information metrics. These findings highlight a new pathway toward information-aware optimization, in which learning efficiency is governed by the informational relevance of gradients rather than their magnitude alone. Furthermore, this study emphasizes the practical significance of integrating information theory into deep learning optimization. By selectively transmitting gradients that carry higher information content, AGC effectively mitigates communication bottlenecks in distributed training environments. Experimental analyses further reveal that adaptive compression adjusts dynamically to the training process, providing robustness across learning stages. The proposed framework can thus serve as a foundation for future low-overhead optimization methods that balance accuracy, stability, and efficiency, all crucial aspects for large-scale deep learning deployments in edge and cloud computing contexts.
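To make the two compression criteria concrete, the sketch below gives a toy PyTorch illustration of entropy-based and Fisher-based gradient scoring. The paper's exact scoring rules and thresholds are not given in the abstract, so the histogram-entropy estimate, the squared-gradient Fisher proxy, and the keep ratio (chosen to roughly mirror the reported 14.3× reduction) are all assumptions.

# Toy sketch of the two gradient-compression criteria the abstract
# describes. The exact AGC scoring rules are not given, so the
# histogram-entropy estimate and squared-gradient Fisher proxy below
# are illustrative assumptions, not the paper's method.
import torch

def entropy_score(grad, bins=32):
    # Estimate the entropy of a gradient tensor from a value histogram;
    # low entropy suggests low informational uncertainty, so such
    # gradients would be candidates for filtering.
    hist = torch.histc(grad.float(), bins=bins)
    p = hist / hist.sum().clamp(min=1.0)
    p = p[p > 0]
    return -(p * p.log()).sum()

def fisher_mask(grad, keep_ratio=0.07):
    # Use squared gradient magnitude as a diagonal Fisher-information
    # proxy and keep only the most curvature-sensitive components
    # (keep_ratio ≈ 1/14.3 loosely mirrors the reported 14.3× reduction).
    scores = grad.pow(2).flatten()
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = scores.topk(k).values.min()
    return (grad.pow(2) >= threshold).float()

grad = torch.randn(1000)
compressed = grad * fisher_mask(grad)  # prune low-sensitivity components
print(entropy_score(grad), (compressed != 0).float().mean())

In a distributed setting, only the surviving components of the masked gradient would be transmitted, which is how a scheme of this kind reduces communication volume at the cost of some per-iteration scoring overhead.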