Found 24 Documents
The Combination of Black Hat Transform and U-Net in Image Enhancement and Blood Vessel Segmentation in Retinal Images Darmo, Cahyo Pambudi; Kesuma, Lucky Indra; Geovani, Dite
Computer Engineering and Applications Journal (ComEngApp) Vol. 12 No. 3 (2023)
Publisher : Universitas Sriwijaya


Abstract

Diabetic Retinopathy (DR) is an eye disorder caused by damage to the blood vessels in the retina. This damage can be analyzed by segmenting the blood vessels in retinal images. This study proposes a combined pipeline of image enhancement and blood vessel segmentation for retinal images. Enhancement is performed with the black hat transform to bring out fine vessel detail in the retinal image, and segmentation is performed with the U-Net architecture. Enhancement quality is measured with MSE and PSNR: the method achieves an MSE below 0.05 and a PSNR above 90 dB, indicating that the black hat transform performs very well for image enhancement. Segmentation achieves accuracy above 0.95, sensitivity above 0.85, and specificity and F1-score above 0.8. This shows that the proposed enhancement and segmentation stages accurately recognize blood vessel features in retinal images.
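The black hat transform named in this abstract is standard grey-scale morphology: the morphological closing of the image minus the image itself, which isolates thin dark structures (vessels) on a bright background. A minimal NumPy sketch, assuming a square structuring element (the paper does not state its kernel size), together with the MSE/PSNR measures used for evaluation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def grey_closing(img, k=3):
    """Morphological closing (dilation then erosion) with a k x k square."""
    pad = k // 2
    dil = sliding_window_view(np.pad(img, pad, mode='edge'), (k, k)).max(axis=(2, 3))
    return sliding_window_view(np.pad(dil, pad, mode='edge'), (k, k)).min(axis=(2, 3))

def black_hat(img, k=3):
    # closing fills in dark features narrower than the structuring element,
    # so closing - original isolates exactly those features (e.g. vessels)
    return grey_closing(img, k) - img

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=1.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# toy "retina": bright background (1.0) with one thin dark vessel column (0.2)
img = np.ones((7, 7))
img[:, 3] = 0.2
enhanced = black_hat(img)   # the vessel column stands out; background goes to 0
```

On a real fundus image one would apply this to the green channel, where vessel contrast is highest; the MSE/PSNR pair then compares the enhanced image against a reference, as in the paper's evaluation.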
Combination Of Gamma Correction and Vision Transformer In Lung Infection Classification On CT-Scan Images Kesuma, Lucky Indra; Octavia, Pipin; Sari, Purwita; Batubara, Gracia Mianda Caroline; Karina, Karina
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.588

Abstract

Lung infection is an inflammatory condition of the lungs with a high mortality rate. Lung infections can be identified from CT-Scan images, where the affected areas are analyzed to determine the infection type. However, manual interpretation of CT-Scan results by medical specialists is often time-consuming, subjective, and demands a high level of accuracy. To address these challenges, this study proposes an automated classification method for lung infections using deep learning. Convolutional Neural Networks (CNNs) are widely used for image classification, but they operate locally with limited receptive fields, which makes it difficult to capture global patterns in complex lung CT images, and they struggle to model long-range pixel dependencies, which is crucial for analyzing visually similar regions in lung CT-Scans. This study uses a Vision Transformer (ViT) to overcome these limitations: ViT employs self-attention to capture global dependencies across the entire image. The main contribution of this study is the application of ViT to improve classification performance on lung CT-Scan images by capturing the complex, global image patterns that CNNs fail to model. Because ViT requires a large dataset to perform optimally, augmentation techniques such as flipping, rotation, and gamma correction are applied to increase the amount of data without altering important features. The dataset comprises lung CT-Scan images sourced from Kaggle, divided into Covid and Non-Covid classes. The proposed method demonstrated excellent classification performance, achieving accuracy, sensitivity, specificity, precision, and F1-score above 90%, with a Cohen's kappa coefficient of 89%. These results show that the proposed method effectively classifies lung infections from CT-Scan images and has strong potential as a clinical decision-support tool, particularly for reducing diagnostic time and improving consistency in medical evaluations.
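The augmentation step named in the abstract (flipping, rotation, gamma correction) can be sketched as below; the specific gamma values and the single 90-degree rotation are illustrative assumptions, since the abstract does not state them:

```python
import numpy as np

def gamma_correct(img, gamma):
    # power-law intensity transform on [0, 1] images: out = in ** gamma;
    # gamma < 1 brightens, gamma > 1 darkens, without moving edge locations
    return np.clip(img, 0.0, 1.0) ** gamma

def augment(img, gammas=(0.8, 1.0, 1.2)):
    # geometric variants (identity, two flips, one 90-degree rotation) crossed
    # with gamma variants -> 4 * len(gammas) images per input slice
    geoms = [img, np.fliplr(img), np.flipud(img), np.rot90(img)]
    return [gamma_correct(g, y) for g in geoms for y in gammas]

slice_ = np.full((4, 4), 0.25)   # stand-in for one normalized CT slice
batch = augment(slice_)          # 12 augmented views of the slice
```

Gamma correction changes only pixel intensities, so infection-region shape features are preserved, which is why it is a safe augmentation for this task.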
Implementasi Ensemble Weighted Voting Pada Arsitektur Densenet Mobilenet Xception Untuk Klasifikasi Penyakit Diabetic Retinopathy Kesuma, Lucky Indra; Zayanti, Des Alwine; Desiani, Anita; Sari, Purwita; Saputra, Zulhipni Reno; Ihsan, Muhammad; Muzayyadah, Fathona Nur
IDEALIS : InDonEsiA journaL Information System Vol. 9 No. 1 (2026): Jurnal IDEALIS Januari 2026
Publisher : Universitas Budi Luhur

DOI: 10.36080/idealis.v9i1.3714

Abstract

Convolutional Neural Networks (CNNs) are a widely used deep learning approach for image classification and segmentation tasks, including in healthcare. One important application of CNNs is the analysis of Diabetic Retinopathy (DR) images; DR is a retinal disease caused by long-term complications of diabetes that can lead to visual impairment and even blindness if not detected early. However, a single CNN architecture often suffers from limitations such as overfitting, high computational cost, or suboptimal feature extraction. Ensemble methods can therefore be used to combine the strengths of several models to improve classification performance. This study proposes a weighted-voting ensemble that combines three CNN architectures, DenseNet, MobileNet, and Xception, for binary Diabetic Retinopathy classification. DenseNet was chosen for its rich feature extraction through inter-layer connectivity, MobileNet for its computational efficiency and small model size, and Xception for balancing network depth and computational efficiency through depthwise separable convolutions. The research stages comprise data collection, model training, testing, and performance evaluation. The EyePACS dataset is used for training, while the APTOS dataset is used for testing to assess the models' generalization ability. Experimental results show that the proposed ensemble performs well, with an accuracy of 85.22%, sensitivity of 70.63%, specificity of 99.40%, F1-score of 87.21%, and a Cohen's Kappa of 0.7032.
These results indicate that the ensemble approach improves classification performance and reduces overfitting compared with a single CNN model, and that it could be developed into a decision-support system for automated Diabetic Retinopathy screening.
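The weighted-voting combination described in this abstract amounts to a weighted average of the three models' class-probability outputs. A minimal sketch; the probabilities and weights below are made-up placeholders (in practice the weights might be derived from each model's validation accuracy):

```python
import numpy as np

def weighted_vote(probs, weights):
    """Combine per-model class probabilities by weighted averaging.

    probs:   list of arrays, each of shape (n_samples, n_classes)
    weights: one non-negative scalar per model
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize weights to sum to 1
    combined = np.tensordot(w, np.stack(probs), axes=1)
    return combined.argmax(axis=1)               # predicted class per sample

# placeholder softmax outputs for DenseNet, MobileNet, Xception on two images
densenet  = np.array([[0.6, 0.4], [0.2, 0.8]])
mobilenet = np.array([[0.4, 0.6], [0.3, 0.7]])
xception  = np.array([[0.7, 0.3], [0.1, 0.9]])
preds = weighted_vote([densenet, mobilenet, xception], weights=[0.3, 0.2, 0.5])
```

Averaging probabilities (soft voting) rather than majority-voting hard labels lets a confident model outvote two uncertain ones, which is one reason weighted ensembles can outperform any single member.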
Aplikasi Bicara Pintar untuk Meningkatkan Kemampuan Komunikasi Siswa Tunarungu di Slb-B Ypac Palembang Desiani, Anita; Kesuma, Lucky Indra; Sartika, Diana Dewi; Padhil, Azmi Muhammad; Putri, Tyara Hestyani; Azzahra, Pasma; Muchlas, Ally; Prabudifa, Muhammad Yusuf; Setiawan, Ferdi; Naturatama, Dicky; Arsyad. H, Muhammad Iqbal
Jurnal Kreativitas Pengabdian Kepada Masyarakat (PKM) Vol 9, No 4 (2026): Volume 9 Nomor 4 (2026)
Publisher : Universitas Malahayati Lampung

DOI: 10.33024/jkpm.v9i4.25071

Abstract

Bicara Pintar is an artificial-intelligence application that detects sign language from images, text, and sound, implemented at SLB-B YPAC Palembang. It translates in both directions between the SIBI and BISINDO sign languages and Indonesian. The YOLO model used in the application achieves accuracy and precision of 98.9%. The community-service activity consisted of a site survey, dataset collection, testing and improvement, launch, socialization, application deployment, and evaluation of the activity. The application was used for classroom learning and soft-skills training. Application testing involved 15 deaf and hearing students, along with 5 teachers and the service team. Deaf students achieved an average accuracy of 46.33% from gesture to text and 92.66% from text to gesture; teachers and the team achieved an average of 46% from gesture to text and 95% from text to gesture. In a satisfaction questionnaire, 95% stated that the application was helpful for teaching and learning, 65% found it helpful enough for soft-skills training, 95% rated it very good and easy to use, and 18% suggested adding vocabulary. These results indicate that Bicara Pintar can improve communication between deaf and hearing users, especially the deaf students of SLB-B YPAC. Keywords: Deaf, SLB, Bicara Pintar, Disparity, Education.