
Found 2 Documents

A Non-Invasive Allergy Detection using Convolutional Neural Network Model
Aripin; Badia, Giulia Salzano; Safira, Intan
Journal of Applied Intelligent System (JAIS) Vol. 10 No. 1 (2025): April 2025
Publisher : LPPM Universitas Dian Nuswantoro

DOI: 10.62411/jais.v10i1.12783

Abstract

Skin allergy detection is critical for identifying allergies that trigger serious reactions such as anaphylaxis, so that people can avoid allergens and reduce the risk of complications such as anaphylactic shock. Early allergy screening is therefore essential for determining allergy risk. This research aims to develop a system that detects food-induced skin allergies through sensors applied to human skin, using a Convolutional Neural Network (CNN) model. The research steps include a literature study, data acquisition, preprocessing, model training, and testing. The developed system uses a camera to capture allergic reactions on the skin. Data acquisition covers two types of data: primary and secondary. Primary data are acquired by capturing images of normal and allergic patients' skin, while secondary data are obtained from Kaggle. The captured images are processed with image-processing techniques and analyzed using the CNN model. The image dataset consists of four classes: atopic, angioedema, normal skin, and urticaria. The CNN model comprises several layers, including convolutional, pooling, and fully connected layers. The results show that the prototype can detect changes in the skin surface caused by allergic reactions, such as redness or swelling, quickly and accurately. Testing the training process with the CNN model yielded an accuracy of 92%, while testing the prototype on patients with skin allergies yielded an accuracy of 93%. This shows that the system can detect types of skin allergy accurately and efficiently, providing a practical and fast solution for the public while contributing to the advancement of medical technology.

Keywords - social robots, adaptive learning, reinforcement learning, human-robot interaction, sensor fusion, educational robotics
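The abstract above describes a CNN built from convolutional, pooling, and fully connected layers that classifies skin images into four classes. As a minimal sketch of that pipeline, the toy example below runs one random-weight convolution with ReLU, a 2x2 max pooling, and a fully connected softmax over the four class names from the abstract. The image size, kernel count, and all weights are illustrative assumptions, not the authors' actual model or trained parameters.

```python
import numpy as np

def conv2d(x, kernels, stride=1):
    """Valid 2-D convolution followed by ReLU: x is (H, W), kernels is (n, kh, kw)."""
    n, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((n, (H - kh) // stride + 1, (W - kw) // stride + 1))
    for f in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
                out[f, i, j] = np.sum(patch * kernels[f])
    return np.maximum(out, 0)  # ReLU activation

def maxpool(x, size=2):
    """Non-overlapping max pooling over each feature map; x is (n, H, W)."""
    n, H, W = x.shape
    out = x[:, :H - H % size, :W - W % size]
    out = out.reshape(n, H // size, size, W // size, size)
    return out.max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.random((28, 28))  # stand-in for a preprocessed skin image (size is an assumption)
feats = maxpool(conv2d(img, rng.standard_normal((4, 3, 3))))  # conv + pool
logits = rng.standard_normal((4, feats.size)) @ feats.ravel()  # fully connected layer
probs = softmax(logits)  # one probability per class
classes = ["atopic", "angioedema", "normal skin", "urticaria"]
print(classes[int(np.argmax(probs))])
```

In a real system the kernels and fully connected weights would be learned by backpropagation; this sketch only shows how an image flows through the layer types the abstract names.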
MK-TripNet: A Deep Learning Framework for Real-Time Multi-Class Lung Sound Classification
Erini, Widya Surya; Thomas, Gracia Putri; Badia, Giulia Salzano; Rahadian, Arief; Raharjo, Sofyan Budi; Wulandari, Sari Ayu
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i2.1403

Abstract

Respiratory diseases such as asthma, pneumonia, and Chronic Obstructive Pulmonary Disease (COPD) remain major global health challenges, particularly in resource-limited settings where access to pulmonary specialists and early diagnostic tools is limited. Automatic lung sound classification has emerged as a promising non-invasive screening approach; however, existing methods often rely on single-scale feature extraction, conventional loss functions, and offline analysis, which limit their discriminative capability and real-time applicability. The aim of this study is to develop and evaluate a deep learning framework for real-time multi-class lung sound classification that improves discriminative representation and temporal sensitivity. To address these limitations, this study proposes MK-TripNet, a novel deep learning architecture designed to integrate multi-scale feature extraction, discriminative embedding learning, and real-time inference within a unified framework. The main contribution of this work is the unified integration of a Multi-Kernel convolutional architecture, Triplet Loss-based embedding learning, and Sliding Window segmentation within a single end-to-end framework, enabling accurate segment-level lung sound classification in real-time scenarios. Unlike prior approaches, the proposed method simultaneously captures fine-grained temporal patterns and broader spectral characteristics while explicitly maximizing inter-class separability in the embedding space. The proposed model was evaluated on a newly constructed dataset comprising 1,409 lung sound segments obtained from primary digital stethoscope recordings and publicly available respiratory sound databases. Experimental results demonstrate that MK-TripNet consistently outperforms several strong baseline models, including CNN-BiGRU, CNN-BiGRU-UMAP, and VGGish-Triplet, achieving an accuracy of 89.1%, an F1-score of 0.89, and a recall of 0.88. Ablation studies further confirm that the combined use of Multi-Kernel convolution, Triplet Loss, and Sliding Window segmentation yields the most robust and generalizable performance. These findings highlight the clinical potential of MK-TripNet for real-time digital auscultation and point-of-care respiratory screening, particularly in resource-limited and telemedicine settings.
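The abstract names three components: Sliding Window segmentation of the recording, Multi-Kernel convolution over each segment, and a Triplet Loss on the resulting embeddings. The NumPy sketch below illustrates each idea in isolation; the window length, hop size, kernel widths, embedding dimension, and margin are all illustrative assumptions, and the averaging "features" and random embeddings are stand-ins for the paper's actual trained multi-kernel CNN.

```python
import numpy as np

def sliding_windows(signal, win, hop):
    """Split a 1-D recording into fixed-length, overlapping segments."""
    starts = range(0, len(signal) - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

def multi_kernel_features(seg, kernel_sizes=(3, 5, 9)):
    """Toy multi-kernel idea: parallel 1-D convolutions with different
    kernel widths, each summarized to one value and concatenated."""
    feats = []
    for k in kernel_sizes:
        kern = np.ones(k) / k  # placeholder kernel (a learned filter in the real model)
        feats.append(np.convolve(seg, kern, mode="valid").mean())
    return np.array(feats)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embeddings: pull the positive (same class)
    toward the anchor, push the negative (other class) at least `margin` away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

rng = np.random.default_rng(1)
recording = rng.standard_normal(8000)  # stand-in for a digital stethoscope signal
segments = sliding_windows(recording, win=2000, hop=1000)
print(segments.shape)  # (7, 2000)

# Random placeholder embeddings for anchor / positive / negative segments.
a, p, n = rng.standard_normal((3, 16))
loss = triplet_loss(a, p, n)
```

During training, minimizing the triplet loss over many such (anchor, positive, negative) triples is what drives same-class segments together and different-class segments apart in the embedding space, which is the inter-class separability the abstract refers to.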