Lokare, Reena
Unknown Affiliation

Published : 2 Documents

Articles

Found 2 Documents

Enhancing accessibility with long short-term memory-based sign language detection systems
Wadmare, Jyoti; Lokare, Reena; Wadmare, Ganesh; Kolte, Dakshita; Bhatia, Kapil; Singh, Jyotika; Agrawal, Sakshi
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 2: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i2.pp1355-1362

Abstract

Individuals who are deaf or have difficulties with hearing and speech rely predominantly on sign language to communicate, but because sign language is not universally understood, effective communication is often obstructed. Recent advances in deep learning have enabled systems that automatically interpret sign language gestures and translate them into spoken language. This paper introduces a deep learning-based system for recognizing sign language. It uses a long short-term memory (LSTM) architecture to distinguish and classify both static and dynamic hand gestures. The system comprises three primary components: dataset collection, neural network assessment, and a sign detection component that encompasses hand gesture extraction and sign language classification. The hand gesture extraction module uses recurrent neural networks (RNNs) to detect and track hand movements in video sequences, and a second RNN in the classification module categorizes these gestures into established sign language classes. Evaluated on a custom dataset, the proposed system attains an accuracy of 99.42%, demonstrating its promise as an assistive technology for individuals with hearing impairments. The system can be further enhanced with additional sign language classes and real-time gesture interpretation.
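The abstract describes an LSTM classifier operating on gesture sequences; a minimal sketch of that kind of model is given below, assuming per-frame hand-landmark features as input. The sequence length, feature dimension, class count, and layer sizes are illustrative assumptions rather than values from the paper, and the Keras API stands in for whatever framework the authors actually used.

```python
# Hypothetical sketch of an LSTM sign-language classifier (not the authors' code).
# Assumes each sample is a sequence of 30 frames, each frame a 126-dim vector of
# hand-landmark coordinates; the number of sign classes (10) is illustrative.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN, FEATURES, NUM_CLASSES = 30, 126, 10

model = Sequential([
    LSTM(64, return_sequences=True, activation="tanh", input_shape=(SEQ_LEN, FEATURES)),
    LSTM(128, return_sequences=False, activation="tanh"),
    Dense(64, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),  # one probability per sign class
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Dummy data with the assumed shapes, just to show the training call.
X = np.random.rand(8, SEQ_LEN, FEATURES).astype("float32")
y = np.eye(NUM_CLASSES)[np.random.randint(0, NUM_CLASSES, 8)]
model.fit(X, y, epochs=1, verbose=0)
```

In a setup like this, the first LSTM layer returns the full hidden-state sequence so the second can summarize the whole gesture before the dense softmax assigns it to a sign class.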
Explainable artificial intelligence with anchors method for breast cancer treatment recommendation
Lokare, Reena; Rathod, Mansing; More, Jyoti Sunil
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i6.pp4494-4501

Abstract

In the pursuit of precision medicine for breast cancer, the integration of artificial intelligence (AI) offers unprecedented opportunities to improve diagnosis, prognosis, and treatment strategies. This paper explores the potential of explainable artificial intelligence (XAI) to demystify black-box AI models, fostering both transparency and trust. We introduce an XAI-based approach, built on the anchors explanation method, to provide interpretable predictions for breast cancer treatment. Our results demonstrate that while anchors improve the interpretability of model predictions, the precision and coverage of these explanations vary, highlighting the challenges of achieving high-fidelity explanations in complex clinical scenarios. Our findings underscore the importance of balancing the trade-off between model complexity and explainability, and they advocate for the iterative development of AI systems with feedback loops from clinicians to align the model's logic with clinical reasoning. We propose a framework for the clinical deployment of XAI in breast cancer. Ultimately, XAI equipped with techniques such as anchors holds the promise of enhancing precision medicine by making AI-assisted decisions more transparent and trustworthy, empowering clinicians and enabling patients to engage in informed discussions about their treatment options. However, the accuracy of anchor rules remains a limitation and an open challenge for AI developers.
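The abstract centers on anchor explanations and their precision and coverage; the sketch below shows how such rules are typically obtained for a tabular breast-cancer classifier. It assumes the alibi library's AnchorTabular explainer and scikit-learn's diagnostic dataset purely as stand-ins, since the paper does not specify its tooling or clinical data pipeline.

```python
# Hypothetical sketch of an anchors explanation for a breast-cancer classifier
# (not the authors' pipeline). Uses alibi's AnchorTabular explainer and the
# scikit-learn diagnostic dataset as stand-ins for clinical treatment data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, list(data.feature_names)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = AnchorTabular(model.predict, feature_names)
explainer.fit(X, disc_perc=(25, 50, 75))  # discretize features into quartile bins

explanation = explainer.explain(X[0], threshold=0.95)
print("Anchor rule:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)  # fidelity of the rule on perturbed samples
print("Coverage:", explanation.coverage)    # fraction of instances the rule applies to
```

The printed precision and coverage values correspond to the trade-off discussed in the abstract: a highly precise rule may cover only a narrow slice of patients, which is part of why high-fidelity explanations remain difficult in clinical settings.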