Wadmare, Jyoti
Unknown Affiliation

Published: 3 Documents

Articles

Transparent precision: Explainable AI empowered breast cancer recommendations for personalized treatment
Lokare, Reena R; Wadmare, Jyoti; Patil, Sunita; Wadmare, Ganesh; Patil, Darshan
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp2694-2702

Abstract

Breast cancer stands as a prevalent global concern, prompting extensive research into its origins and into personalized treatment through Artificial Intelligence (AI)-driven precision medicine. However, AI's black-box nature hinders acceptance of its results. This study examines the integration of Explainable AI (XAI) into breast cancer precision medicine recommendations. Transparent AI models, fuelled by patient data, enable personalized treatment recommendations, while techniques such as feature analysis and decision trees enhance transparency and foster trust between medical practitioners and patients. This harmonizes AI's potential with the imperative for clear medical decisions, propelling breast cancer care into the precision medicine era. This research work leverages clinical and genomic data from metastatic breast cancer samples. The primary aim is to develop a machine learning (ML) model capable of predicting optimal treatment approaches, including but not limited to hormonal therapy, chemotherapy, and anti-HER2 therapy, thereby enhancing treatment selection through advanced computational techniques and comprehensive data analysis. The decision tree model developed here for predicting suitable personalized treatment for breast cancer patients achieves 99.87% overall prediction accuracy. Thus, the use of XAI in healthcare will build trust among both doctors and patients.
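No code or dataset accompanies this listing, so the following is only a minimal, hypothetical sketch of the kind of pipeline the abstract describes: a decision tree classifier trained on a tabular clinical/genomic feature matrix, with feature importances and the exported rule set serving as a simple transparency mechanism. The feature names, treatment labels, and synthetic data are assumptions, not the authors' material.

    # Hypothetical sketch in Python/scikit-learn; not the authors' code.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic stand-in for the clinical/genomic feature matrix; feature and
    # treatment names are assumed for illustration only.
    rng = np.random.default_rng(42)
    features = ["age", "tumor_size", "er_status", "her2_status", "ki67"]
    X = pd.DataFrame(rng.random((500, len(features))), columns=features)
    y = rng.choice(["hormonal", "chemotherapy", "anti-HER2"], size=500)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    clf = DecisionTreeClassifier(max_depth=4, random_state=42)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))

    # Transparency step: rank features by importance and print the learned
    # rules -- the "feature analysis + decision tree" style of explanation
    # the abstract mentions.
    print(sorted(zip(features, clf.feature_importances_), key=lambda p: -p[1]))
    print(export_text(clf, feature_names=features))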
Enhancing accessibility with long short-term memory-based sign language detection systems
Wadmare, Jyoti; Lokare, Reena; Wadmare, Ganesh; Kolte, Dakshita; Bhatia, Kapil; Singh, Jyotika; Agrawal, Sakshi
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 2: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i2.pp1355-1362

Abstract

Individuals who are deaf or who experience difficulties with hearing and speech predominantly rely on sign language as their medium of communication, yet sign language is not universally understood, which creates obstacles to effective communication. Advances in deep learning in recent years have enabled systems that autonomously interpret sign language gestures and translate them into spoken language. This paper introduces a deep learning-based system for recognizing sign language. It uses a long short-term memory (LSTM) architecture to distinguish and classify both static and dynamic hand gestures. The system comprises three primary components: dataset collection, neural network assessment, and a sign detection component that encompasses hand gesture extraction and sign language classification. The hand gesture extraction module uses recurrent neural networks (RNNs) to detect and track hand movements in video sequences, and a second RNN in the classification module categorizes these gestures into established sign language classes. Evaluated on a custom dataset, the proposed system attains an accuracy of 99.42%, demonstrating its promise as an assistive technology for individuals with hearing impairments. The system can be further enhanced with additional sign language classes and real-time gesture interpretation.
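As above, the listing carries no code; the snippet below is only an illustrative sketch of the general approach the abstract outlines, an LSTM classifier over fixed-length sequences of per-frame hand keypoints. The sequence length, feature dimension, class count, and dummy training data are all assumptions.

    # Hypothetical sketch in Python/Keras; not the authors' code.
    import numpy as np
    from tensorflow.keras import layers, models

    SEQ_LEN, N_FEATURES, N_CLASSES = 30, 126, 10  # assumed: 30 frames, 2 hands x 21 keypoints x 3 coords

    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        layers.LSTM(64, return_sequences=True),   # temporal modelling of the gesture
        layers.LSTM(128),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Dummy data stands in for the keypoint sequences extracted from video.
    X = np.random.rand(200, SEQ_LEN, N_FEATURES).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=200)
    model.fit(X, y, epochs=2, batch_size=16, verbose=0)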
Transforming images into words: optical character recognition solutions for image text extraction
Wadmare, Jyoti; Patil, Sunita; Kolte, Dakshita; Bhatia, Kapil; Desai, Palak; Wadmare, Ganesh
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 4: August 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i4.pp3412-3420

Abstract

Optical character recognition (OCR) is a major advancement among today's emerging technologies; in recent years it has made it far easier to convert the textual information in images and physical documents into machine-readable text, supporting analysis, process automation, and improved productivity across a wide range of purposes. This paper presents the design, development, and implementation of a novel OCR tool aimed at text extraction and recognition tasks. The tool incorporates advanced techniques from computer vision and natural language processing (NLP), offering strong performance across various document types. Its performance is evaluated on metrics such as accuracy, speed, and document-format compatibility. The developed OCR tool achieves an accuracy of 98.8%, with a character error rate (CER) of 2.4% and a word error rate (WER) of 2.8%. The tool finds applications in document digitization, personal identification, archiving of valuable documents, and the processing of invoices and other documents. It holds substantial value for researchers, practitioners, and organizations seeking effective techniques for accurate text extraction and recognition.
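The character error rate (CER) and word error rate (WER) reported above are standard edit-distance metrics; the hypothetical snippet below shows how such metrics are typically computed from an OCR hypothesis and a reference transcript. It is not the authors' evaluation code.

    # Hypothetical sketch in Python; standard CER/WER via Levenshtein distance.
    def edit_distance(ref, hyp):
        # Dynamic-programming Levenshtein distance over token sequences.
        dp = list(range(len(hyp) + 1))
        for i in range(1, len(ref) + 1):
            prev, dp[0] = dp[0], i
            for j in range(1, len(hyp) + 1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,                             # deletion
                            dp[j - 1] + 1,                         # insertion
                            prev + (ref[i - 1] != hyp[j - 1]))     # substitution
                prev = cur
        return dp[-1]

    def cer(reference, hypothesis):
        # Character-level error rate.
        return edit_distance(list(reference), list(hypothesis)) / max(len(reference), 1)

    def wer(reference, hypothesis):
        # Word-level error rate.
        ref_words = reference.split()
        return edit_distance(ref_words, hypothesis.split()) / max(len(ref_words), 1)

    print(cer("Optical character recognition", "0ptical charakter recognition"))
    print(wer("Optical character recognition", "0ptical charakter recognition"))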