
Found 2 Documents
Journal : Journal of Applied Data Sciences

Data-Driven Evaluation of a Gamified Breath-Holding Training Application to Improve CT Scan Quality and Reduce Patient Anxiety
P, Vinoth Kumar; M, Ganga; K, Vijayakumar; K, Umamaheswari; Devarajan, Gunapriya; Batumalay, M
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.804

Abstract

This study presents the development and evaluation of Breathe Well, an innovative three-tiered Graphical User Interface (GUI) application designed to address motion-induced step artifacts and patient anxiety during Computed Tomography (CT) scans. The core idea of the application is to combine relaxation techniques, guided breathing exercises, and gamified training modules within a single interactive platform that allows patients to practice breath-holding and anxiety control prior to scanning. The objective is to enhance patient cooperation, reduce involuntary movement, and improve overall image quality while minimizing the time healthcare staff spend on manual breath-hold instruction. The study involved a comparative analysis between a control group and an intervention group trained using the Breathe Well system. Quantitative results demonstrated a significant improvement in imaging outcomes, with the mean artifact score decreasing from 3.1 ± 0.8 in the control group to 2.1 ± 0.7 in the intervention group (p < 0.01). Psychological assessment using the State-Trait Anxiety Inventory (STAI) revealed a marked reduction in patient anxiety, with mean scores declining from 48.6 ± 6.4 before training to 38.2 ± 5.8 after using the application (p < 0.01). Qualitative feedback further confirmed increased patient confidence, comfort, and comprehension of CT procedures. The findings indicate that integrating gamified digital interventions into pre-scan preparation significantly improves both patient experience and diagnostic precision. The novelty of this research lies in the creation of a self-guided, multi-level digital platform that bridges behavioral training and imaging technology, offering a scalable, patient-centered solution for modern radiology workflows.
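The abstract reports only p-values, but a standardized effect size can be recovered from the stated means and standard deviations. Below is a minimal sketch computing Cohen's d from those summary statistics, assuming equal group sizes so the pooled SD is the root mean square of the two SDs; the function name is illustrative, not from the paper:

```python
from math import sqrt

def cohens_d(mean_a: float, sd_a: float, mean_b: float, sd_b: float) -> float:
    """Cohen's d from summary statistics, assuming equal group sizes."""
    pooled_sd = sqrt((sd_a**2 + sd_b**2) / 2)
    return (mean_a - mean_b) / pooled_sd

# Artifact score: control 3.1 +/- 0.8 vs intervention 2.1 +/- 0.7
d_artifact = cohens_d(3.1, 0.8, 2.1, 0.7)

# STAI anxiety: pre-training 48.6 +/- 6.4 vs post-training 38.2 +/- 5.8
d_stai = cohens_d(48.6, 6.4, 38.2, 5.8)

print(round(d_artifact, 2))  # ~1.33
print(round(d_stai, 2))      # ~1.70
```

Both values exceed the conventional d = 0.8 threshold for a "large" effect, consistent with the abstract's claim of significant improvement.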
Leveraging Generative AI in Vehicles for Enhanced Driver Safety and Advanced Communication Systems
P, Vinoth Kumar; T, Sri Anadha Ganesh; Batumalay, M; Kumar, S N; Devarajan, Gunapriya; K, Bhuvaneshwari; T, Kesavan; S, Lakshmi Praba; S, Nandhanaa K
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.809

Abstract

This paper proposes an integrated artificial intelligence–based driver assistance system for electric vehicles (EVs) that combines computer vision–based drowsiness detection with a generative artificial intelligence (GenAI)–driven conversational interaction framework to enhance driver safety and human–vehicle interaction. The primary objective of this work is to reduce fatigue-related driving risks while enabling natural, hands-free, and context-aware communication between the driver and the vehicle. The core idea is to tightly couple real-time driver state monitoring with intelligent conversational feedback, allowing safety alerts and voice interactions to adapt dynamically to the driver’s condition. Driver drowsiness is detected using non-intrusive visual indicators, namely eye closure duration and blink rate, extracted from an in-vehicle camera. A drowsy state is identified when eye closure exceeds 10 s or when the blink rate exceeds 6 blinks within a 6 s interval. Upon detection, the system generates multi-modal alerts consisting of audio warnings and vibration feedback, while a GenAI-based natural language processing module provides real-time, hands-free voice interaction. Experimental evaluation was conducted on an ESP32-based embedded prototype across five predefined driving scenarios representing normal and fatigued conditions. The system showed stable face and eye detection under normal driving and achieved 100% correct alert triggering in all drowsiness-related cases (3 of 5 scenarios), with zero false positives in the non-drowsy conditions (2 of 5 scenarios). The system demonstrated consistent real-time response and reliable alert activation under fatigue conditions.
The main contribution and novelty of this research lie in the real-time integration of generative AI–driven conversational intelligence with embedded computer vision–based drowsiness detection within a unified, resource-constrained platform, which is rarely addressed jointly in existing systems. Overall, the proposed framework provides a practical, scalable, and human-centered solution for intelligent driver assistance in semi-autonomous and future autonomous EV environments.
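The drowsiness criterion stated in the abstract (eye closure longer than 10 s, or more than 6 blinks within a 6 s window, followed by audio plus vibration alerts) can be sketched as a simple decision rule. This is a hypothetical host-side illustration under those thresholds, not the authors' ESP32 firmware; all function and constant names are assumptions:

```python
# Thresholds taken directly from the abstract
EYE_CLOSURE_LIMIT_S = 10.0   # drowsy if eyes stay closed longer than this
BLINK_LIMIT = 6              # drowsy if blink count exceeds this ...
BLINK_WINDOW_S = 6.0         # ... within a window of this length (seconds)

def is_drowsy(eye_closure_s: float, blinks_in_window: int) -> bool:
    """Decision rule: prolonged eye closure OR rapid blinking signals fatigue."""
    return eye_closure_s > EYE_CLOSURE_LIMIT_S or blinks_in_window > BLINK_LIMIT

def alert_actions(drowsy: bool) -> list:
    """Multi-modal alert policy described in the abstract: audio + vibration."""
    return ["audio_warning", "vibration"] if drowsy else []

# Examples mirroring the paper's scenario classes
print(is_drowsy(eye_closure_s=11.0, blinks_in_window=2))  # True  (prolonged closure)
print(is_drowsy(eye_closure_s=2.0, blinks_in_window=7))   # True  (rapid blinking)
print(is_drowsy(eye_closure_s=2.0, blinks_in_window=3))   # False (normal driving)
```

In the described prototype the inputs would come from camera-based eye tracking on the embedded device; here they are passed in directly to keep the rule itself visible.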