Setiawan, Fikri Maulana
Unknown Affiliation

Published: 2 Documents
Articles

Found 2 Documents

PERANCANGAN SISTEM PEMANTAU DETAK JANTUNG PASIEN MENGGUNAKAN SERVER WEB BERBASIS INTERNET OF THINGS (IoT) PADA KLINIK BIDAN NINING Setiawan, Fikri Maulana; Purwantoro, Purwantoro; Garno, Garno
Jurnal Informatika dan Teknik Elektro Terapan Vol. 13 No. 3 (2025)
Publisher : Universitas Lampung

DOI: 10.23960/jitet.v13i3.6677

Abstract

In the context of healthcare services in Indonesia, particularly at first-level facilities such as midwife clinics, the lack of modern medical equipment is often an obstacle to providing optimal care. This study is motivated by specific challenges at Klinik Bidan Nining, namely the absence of a heart-rate monitoring device, a high rate of referrals to hospitals, and the economic and time burden borne by patients to obtain follow-up examinations. The proposed solution is an integrated system that combines a MAX30102 heart-rate sensor with a NodeMCU ESP8266 microcontroller connected to a web server, enabling real-time monitoring and analysis of health data.
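As a rough illustration of the server-side component described in this abstract, the sketch below shows a minimal web server that receives heart-rate readings posted by the NodeMCU ESP8266 and exposes the latest value for a monitoring page. Flask, the /readings endpoint, and the JSON field names are assumptions made for illustration; the paper does not specify its server implementation.

```python
# Minimal sketch of the web-server side, assuming the NodeMCU ESP8266 sends
# heart-rate readings from the MAX30102 as JSON over HTTP POST.
# Flask, the route names, and the field names are illustrative assumptions.
from datetime import datetime
from flask import Flask, jsonify, request

app = Flask(__name__)
readings = []  # in-memory store; a real deployment would use a database


@app.route("/readings", methods=["POST"])
def add_reading():
    # Expects a JSON body such as {"patient_id": "P001", "bpm": 78}
    data = request.get_json(force=True)
    reading = {
        "patient_id": data.get("patient_id"),
        "bpm": float(data["bpm"]),  # beats per minute reported by the sensor
        "timestamp": datetime.now().isoformat(),
    }
    readings.append(reading)
    return jsonify(reading), 201


@app.route("/readings/latest", methods=["GET"])
def latest_reading():
    # Polled by a dashboard page to show the most recent measurement
    if not readings:
        return jsonify({"error": "no readings yet"}), 404
    return jsonify(readings[-1])


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```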
Pendeteksi Bahasa Isyarat Menggunakan TensorFlow dengan Metode Convolutional Neural Network Saputra, Reza Aditya; Ryansyah, Eddy; Setiawan, Fikri Maulana; Rozikin, Chaerur
Jurnal Informatika Dan Rekayasa Komputer(JAKAKOM) Vol 5 No 2 (2025): JAKAKOM Vol 5 No 2 SEPTEMBER 2025
Publisher : LPPM Universitas Dinamika Bangsa

DOI: 10.33998/jakakom.2025.5.2.2386

Abstract

Sign language recognition plays a vital role in facilitating communication for individuals with hearing impairments. This study proposes a Convolutional Neural Network (CNN) model trained to recognize patterns in sign language images, with the aim of improving the accuracy and efficiency of sign language recognition systems. The model was trained in two stages: the first training session achieved a validation accuracy of around 63%, while the second yielded a validation accuracy exceeding 92% at epoch 29. This significant improvement demonstrates the model's ability to effectively learn and generalize complex patterns in sign language images, signaling its potential for practical applications in sign language interpretation. The high accuracy indicates the model's suitability for a variety of real-world scenarios, such as assistive technology for the deaf community or automation systems requiring hand gesture recognition. Thus, the trained CNN model has the potential to be a valuable tool for improving the accessibility and efficiency of communication for individuals who rely on sign language.
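The sketch below shows the general shape of a CNN image classifier in TensorFlow/Keras of the kind this abstract describes. The image size, number of classes, directory layout, architecture, and epoch counts are assumptions for illustration; the paper does not list its hyperparameters here.

```python
# Minimal sketch of a CNN sign-language image classifier in TensorFlow/Keras,
# assuming images are organized in class-labelled folders. All sizes, paths,
# and epoch counts are assumptions, not values taken from the paper.
import tensorflow as tf

IMG_SIZE = (64, 64)
NUM_CLASSES = 26  # assumption: one class per letter of the alphabet

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Two training stages, mirroring the two sessions mentioned in the abstract;
# the epoch counts here are assumed.
model.fit(train_ds, validation_data=val_ds, epochs=10)   # first stage
model.fit(train_ds, validation_data=val_ds, epochs=30)   # second stage
```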