Found 3 Documents

Hand Gesture to Control Virtual Keyboard using Neural Network
Anandika, Arrya; Rusydi, Muhammad Ilhamdi; Utami, Pepi Putri; Hadelina, Rizka; Sasaki, Minoru
JITCE (Journal of Information Technology and Computer Engineering) Vol. 7 No. 01 (2023)
Publisher : Universitas Andalas

DOI: 10.25077/jitce.7.01.40-48.2023

Abstract

Disability is a physical or mental condition that can inhibit normal daily activities. One such condition is speech impairment combined with the absence of fingers. People with these disabilities face obstacles in communicating with those around them, both verbally and in writing. Communication aids for people without fingers continue to be developed; one approach is a virtual keyboard driven by a Leap Motion sensor. Hand gestures are captured with the Leap Motion sensor to obtain the hand's orientation as pitch, yaw, and roll. These orientation values are grouped into normal, right, left, up, down, and rotating gestures that control the virtual keyboard. Gesture recognition in this study used 5,400 samples: 3,780 for training and 1,620 for testing. Testing with an artificial neural network yielded an accuracy of 98.82%. The virtual keyboard was also evaluated directly: 15 respondents each typed 20 characters three times, averaging 5.45 seconds per character.
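
As a rough illustration of the recognition step described above, here is a minimal sketch (not the authors' implementation): a small neural network classifying (pitch, yaw, roll) triples into the six gesture classes. The synthetic data, network size, and the split mirroring the 3,780/1,620 partition are assumptions for illustration only.

```python
# Minimal sketch, NOT the paper's pipeline: classify Leap Motion
# (pitch, yaw, roll) readings into six gesture classes with a small MLP.
# The random data below is a placeholder for the paper's 5,400 samples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

GESTURES = ["normal", "right", "left", "up", "down", "rotate"]

rng = np.random.default_rng(0)
X = rng.uniform(-90, 90, size=(5400, 3))          # placeholder angles, degrees
y = rng.integers(0, len(GESTURES), size=5400)     # placeholder labels

# Split mirrors the paper's 3,780 training / 1,620 test partition.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=3780, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```

With real, separable gesture data this kind of shallow network is a plausible route to the reported high accuracy; the hidden-layer size here is an arbitrary choice, not the paper's architecture.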
Electrooculography and Camera-Based Control of a Four-Joint Robotic Arm for Assistive Tasks
Rusydi, Muhammad Ilhamdi; Gultom, Andre Paskah; Jordan, Adam; Nurhadi, Rahmad Novan; Windasari, Noverika; Sasaki, Minoru; Ramlee, Ridza Azri
Buletin Ilmiah Sarjana Teknik Elektro Vol. 7 No. 4 (2025): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/biste.v7i4.14305

Abstract

Individuals with severe motor impairments face challenges in performing daily manipulation tasks independently. Existing assistive robotic systems show limited accuracy (typically 85–92%) and low intuitive control, requiring extensive training. This study presents a control system integrating electrooculography (EOG) signals with real-time computer vision feedback for natural, high-precision control of a 4-degrees-of-freedom (4-DOF) robotic manipulator in assistive applications. The system uses an optimized K-Nearest Neighbors (KNN) algorithm to classify six eye-movement categories with computational efficiency and real-time performance. Computer-vision modules map object coordinates and provide feedback integrated with inverse kinematics for positioning. Validation with 10 able-bodied participants (aged 18–22) employed standardized protocols under controlled laboratory conditions. The KNN classifier achieved 98.17% accuracy, with a 98.47% true-positive rate and a 1.53% false-negative rate. Distance-measurement error averaged 1.5 mm (±1.6 mm). Inverse-kinematics positioning attained sub-millimeter precision, with 0.64 mm mean absolute error (MAE) for frontal retrieval and 1.58 mm for overhead retrieval. Operational success rates reached 99.48% for frontal and 97.96% for top-down retrieval tasks. The system successfully completed object detection, retrieval, transport, and placement across ten locations. These findings indicate a significant advancement in EOG-based assistive robotics, achieving higher accuracy than conventional systems while maintaining intuitive user control. The integration shows promising potential for rehabilitation centers and assistive environments, though further validation under diverse conditions, including latency and fatigue, is needed.
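
As a hedged illustration of the classification stage, the sketch below trains a K-Nearest Neighbors model on EOG feature vectors for six eye-movement categories. The feature layout, the label names, and k = 5 are assumptions; the paper's optimized KNN and real EOG features are not reproduced here.

```python
# Minimal sketch, NOT the authors' implementation: KNN over EOG feature
# vectors for six assumed eye-movement categories. Feature extraction
# (e.g., amplitudes from horizontal/vertical EOG channels) is assumed
# to have already happened; the arrays below are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

EYE_MOVEMENTS = ["up", "down", "left", "right", "blink", "center"]  # assumed labels

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))                     # placeholder EOG features
y = rng.integers(0, len(EYE_MOVEMENTS), size=600)

# Scaling matters for distance-based models; k is a tunable hyperparameter.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

In practice the reported 98.17% accuracy would come from tuning k and the feature set on real EOG recordings; the cross-validation loop above is one standard way to do that tuning.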
Image Presentation Method for Human Machine Interface Using Deep Learning Object Recognition and P300 Brain Wave
Nakajima, Rio; Rusydi, Muhammad Ilhamdi; Ramadhani, Salisa Asyarina; Muguro, Joseph; Matsushita, Kojiro; Sasaki, Minoru
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.3.949

Abstract

Welfare robots, as a category of robotics, seek to improve the quality of life of elderly people and patients by providing a control mechanism that lets users be self-dependent. This is achieved through man-machine interfaces that operate external processes such as feeding or communicating. This research aims to realize a man-machine interface, applicable to patients with locked-in syndrome, that combines brainwaves with object recognition. The system uses a camera with a pretrained object-detection model to recognize the environment and displays the detected contents in an interface that solicits a choice via P300 signals. Because the system is camera-based, field of view and luminance level were identified as possible influences. We designed six experiments, varying the arrangement of stimuli (triangular or horizontal) and the brightness/colour levels. The results showed that the horizontal arrangement yielded better accuracy than the triangular arrangement, and colour was identified as a key parameter for successful discrimination of the target stimuli. The precision of discrimination can therefore be improved by adopting a harmonized arrangement and selecting an appropriate saturation/brightness for the interface.
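
As a hedged sketch of how P300-based target selection is commonly implemented (standard practice, not necessarily this paper's exact pipeline): average the EEG epochs recorded for each displayed stimulus over repeated flashes, score each average with a linear classifier trained on target/non-target examples, and select the highest-scoring stimulus. All arrays below are synthetic placeholders.

```python
# Minimal sketch, assuming standard P300 practice rather than this paper's
# exact pipeline: average per-stimulus EEG epochs over repeated flashes,
# then pick the stimulus whose average looks most "target-like" to a
# linear classifier. Data here is a synthetic placeholder.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_stimuli, n_repeats, n_samples = 6, 10, 128   # e.g., 6 detected on-screen objects

# Placeholder training set of single-channel epochs with target/non-target labels.
train_epochs = rng.normal(size=(300, n_samples))
train_labels = rng.integers(0, 2, size=300)    # 1 = target (P300 present)

lda = LinearDiscriminantAnalysis().fit(train_epochs, train_labels)

# At test time: average the repeated epochs for each stimulus, then choose
# the stimulus with the highest classifier score as the user's selection.
test_epochs = rng.normal(size=(n_stimuli, n_repeats, n_samples))
averaged = test_epochs.mean(axis=1)
scores = lda.decision_function(averaged)
print("predicted target stimulus:", int(np.argmax(scores)))
```

The paper's finding that stimulus arrangement and colour affect accuracy would enter this pipeline upstream, in how the interface presents the detected objects, not in the classifier itself.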