This study addresses the pressing need for safe, personalized digital mental health support by mitigating the risk that Large Language Models (LLMs) generate unsafe or unethical responses during high-risk psychological crises. We developed BATINARA, a chatbot system built on a Neuro-Symbolic Hybrid framework. The architecture integrates a Predictive Module (IndoBERT for crisis detection, Random Forest for multi-label emotion classification) with a Generative LLM Module (OpenAI API). Ethical control is enforced by the Dynamic Context Integration Logic (D-CIL), which applies clinical rules to uphold the Principle of Nonmaleficence. Key results demonstrate the system's ability to: (1) enforce safety protocols by automatically overriding LLM responses when suicidal ideation is detected (IndoBERT recall = 0.9977); (2) achieve high contextual accuracy in multi-label emotion detection (F1 = 0.94), supporting dynamic personalization via Dynamic Prompt Modulation based on specific therapeutic styles and users' PHQ-9/GAD-7 clinical scores; and (3) enhance interaction transparency through real-time visualization of detected emotions. This Neuro-Symbolic hybrid approach proves effective in mitigating the clinical risks associated with generative AI, yielding adaptive and ethically sound therapeutic interactions.
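The safety-override and prompt-modulation behavior described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: the function names, threshold value, and scoring logic are illustrative assumptions, and the real system uses trained IndoBERT and Random Forest models rather than precomputed scores.

```python
from dataclasses import dataclass

# Fixed safety response used when the symbolic rule layer overrides the LLM
# (illustrative wording; the deployed message would follow clinical guidance).
SAFETY_MESSAGE = (
    "It sounds like you are going through a very difficult time. "
    "Please contact a crisis hotline or a mental health professional now."
)

@dataclass
class Assessment:
    """Outputs of the (hypothetical) Predictive Module for one user turn."""
    crisis_probability: float   # crisis detector score (e.g. from IndoBERT)
    emotions: list              # multi-label emotions (e.g. from Random Forest)
    phq9: int                   # PHQ-9 depression score, 0-27
    gad7: int                   # GAD-7 anxiety score, 0-21

def dcil_route(assessment: Assessment, llm_reply: str,
               crisis_threshold: float = 0.5) -> str:
    """D-CIL-style nonmaleficence rule: discard the LLM reply on crisis."""
    if assessment.crisis_probability >= crisis_threshold:
        return SAFETY_MESSAGE  # hard override; generated text never reaches user
    return llm_reply

def modulate_prompt(assessment: Assessment, user_message: str) -> str:
    """Dynamic Prompt Modulation: condition the LLM on emotions and scores."""
    # Assumed rule: PHQ-9 >= 10 (moderate depression) selects a more
    # structured therapeutic style; the paper's actual mapping may differ.
    style = "supportive, structured" if assessment.phq9 >= 10 else "reflective"
    return (
        f"Respond in a {style} tone. Detected emotions: "
        f"{', '.join(assessment.emotions)}. "
        f"PHQ-9={assessment.phq9}, GAD-7={assessment.gad7}.\n"
        f"User: {user_message}"
    )
```

In the crisis branch the generative output is discarded entirely, which is what makes the control symbolic rather than probabilistic: the rule fires on the detector score alone, independent of what the LLM produced.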
Copyright © 2026