Chun, Kyunghan
Daegu Catholic University

Published: 2 documents

Articles

Personal Assistant Development by CED (Canine Eye-disease Detection)
Chun, Kyunghan
Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol 11, No 4: December 2023
Publisher : IAES Indonesian Section

DOI: 10.52549/ijeei.v11i4.5177

Abstract

In this paper, we develop a deep learning-based canine eye disease detection method and use it to build a dog health management system. With the recent surge in the number of pet dogs, ensuring their well-being has become crucial. We achieve this by applying lightweight deep learning models such as MobileNet and SqueezeNet on mobile devices, enabling regular monitoring of a pet's eye health. Additionally, we provide a GPS-based search for nearby hospitals, facilitating a swift response to disease. The validity of the developed method is demonstrated through experiments on five eye diseases. The results confirm the importance of considering appropriate recognition rates and recognizability metrics, as outcomes vary depending on the deep learning model applied.
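The "lightweight" property the abstract attributes to MobileNet comes largely from replacing standard convolutions with depthwise separable convolutions. The NumPy sketch below is our own illustration of that building block (the channel counts and helper name are illustrative assumptions, not code or parameters from the paper); it shows the operation and why it needs far fewer weights than a standard convolution:

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_k):
    """Depthwise convolution followed by a 1x1 pointwise convolution.

    x:            (C_in, H, W) input feature map
    depthwise_k:  (C_in, k, k) one spatial filter per input channel
    pointwise_k:  (C_out, C_in) 1x1 filters that mix channels
    """
    c_in, h, w = x.shape
    k = depthwise_k.shape[1]
    out_h, out_w = h - k + 1, w - k + 1

    # Depthwise step: each channel is filtered independently (no channel mixing).
    dw = np.zeros((c_in, out_h, out_w))
    for c in range(c_in):
        for i in range(out_h):
            for j in range(out_w):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * depthwise_k[c])

    # Pointwise step: a 1x1 convolution mixes channels at each spatial position.
    return np.tensordot(pointwise_k, dw, axes=([1], [0]))

# Parameter count comparison for an illustrative 3x3 layer,
# 32 input channels and 64 output channels:
c_in, c_out, k = 32, 64, 3
standard_params = c_out * c_in * k * k          # full convolution
separable_params = c_in * k * k + c_out * c_in  # depthwise + pointwise
```

For this layer the separable version uses 2,336 weights versus 18,432 for the standard convolution, roughly an 8x reduction, which is what makes on-device inference on a phone practical.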
Evaluation of Vector Font Rendering and Voice Recognition in Integrated Hearing Support Systems
Chun, Kyunghan
Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol 14, No 1: March 2026 (ACCEPTED PAPERS)
Publisher : IAES Indonesian Section

DOI: 10.52549/ijeei.v14i1.7516

Abstract

This paper focuses on the implementation of the core functionalities of a Hearing Support System (HSS) and the validation of its engineering feasibility. The system is designed to address the limitations of conventional hearing aids, specifically their restricted personalized calibration and environmental adaptation. The proposed HSS is a smartphone-application-based system with three key functions: personalized settings derived from individual audiogram profiles, environment-specific presets, and real-time speech-to-text conversion with textual display. For the system's auxiliary output, an implementation of a Hangul (Korean) display is presented. A comparative analysis between a low-cost ESP32-based implementation (using bitmap fonts) and a Raspberry Pi-based counterpart (using vector fonts) empirically validates the necessity of vector fonts for font scaling, which is crucial for users with low vision. For speech recognition, the study adopts an approach that transforms one-dimensional time-series audio waveforms into two-dimensional "sound images", specifically spectrograms, which serve as input to a Convolutional Neural Network (CNN). In conclusion, this research successfully prototyped the core functionalities of the HSS at a proof-of-concept (PoC) level, confirming their integration feasibility. Nevertheless, several key areas are identified as future work for practical deployment: refinement of the preset functionality, elimination of dependencies on external APIs, and fundamental improvements to speech recognition performance through deeper CNN architectures.
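The waveform-to-"sound image" step described in the abstract is, in essence, a short-time Fourier transform. The following NumPy sketch is our own minimal illustration (the frame length, hop size, and log-magnitude scaling are assumptions for demonstration, not parameters from the paper) of producing a spectrogram suitable as CNN input:

```python
import numpy as np

def spectrogram(waveform, frame_len=256, hop=128):
    """Turn a 1-D waveform into a 2-D log-magnitude "sound image".

    Frames the signal, applies a Hann window, and takes the magnitude
    of the real FFT of each frame; rows are frequency bins, columns are
    time frames.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(waveform) - frame_len) // hop
    frames = np.stack([waveform[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)
    return np.log1p(mag).T                     # (freq_bins, time_frames)

# Example: a 1 kHz tone sampled at 16 kHz for 0.5 s
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
image = spectrogram(np.sin(2 * np.pi * 1000 * t))
```

Each column holds the frequency content of one short frame, so a CNN can treat the result like a grayscale image; for the 1 kHz test tone the energy concentrates in a single frequency row.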