Found 4 Documents

Eye disease classification using deep learning convolutional neural networks
Rachmawanto, Eko Hari; Sari, Christy Atika; Krismawan, Andi Danang; Erawan, Lalang; Sari, Wellia Shinta; Laksana, Deddy Award Widya; Adi, Sumarni; Yaacob, Noorayisahbe Mohd
Journal of Soft Computing Exploration Vol. 5 No. 4 (2024): December 2024
Publisher : SHM Publisher

DOI: 10.52465/joscex.v5i4.493

Abstract

This study addresses the growing challenge of accurately diagnosing eye diseases, which can lead to severe visual impairment if not identified early. We propose a solution using deep learning convolutional neural networks (CNNs) enhanced by transfer learning techniques. The dataset utilized in this study comprises 4,217 images of eye diseases, categorized into four classes: Normal (1,074 images), Glaucoma (1,007 images), Cataract (1,038 images), and Diabetic Retinopathy (1,098 images). We implemented a CNN model using TensorFlow to learn and classify these diseases effectively. The evaluation results demonstrate a high accuracy of 95%, with precision and recall varying notably across classes and reaching 100% for Diabetic Retinopathy. These findings highlight the potential of CNNs to improve diagnostic accuracy in ophthalmology, facilitating timely interventions and enhancing patient outcomes. For future research, expanding the dataset to include a wider variety of ocular diseases and employing more sophisticated deep learning techniques could further enhance the model's performance. Integrating this model into clinical practice could significantly aid ophthalmologists in the early detection and management of eye diseases, ultimately improving patient care and reducing the burden of ocular disorders.
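
The abstract does not include the model itself, but a minimal TensorFlow/Keras transfer-learning sketch along the lines it describes might look as follows. The backbone (MobileNetV2), the folder layout ("eye_dataset/<class>/"), the image size, and the training settings are illustrative assumptions, not the authors' configuration.

# Minimal sketch, not the authors' code: transfer learning for the four classes
# above, with per-class precision/recall reported on a validation split.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

IMG_SIZE = (224, 224)  # assumed input resolution
CLASSES = ["cataract", "diabetic_retinopathy", "glaucoma", "normal"]  # alphabetical, as Keras infers

# Assumed folder layout: eye_dataset/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "eye_dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "eye_dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# Transfer learning: frozen ImageNet backbone plus a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Per-class precision and recall on the validation split.
y_true, y_pred = [], []
for images, labels in val_ds:
    y_true.append(labels.numpy())
    y_pred.append(np.argmax(model.predict(images, verbose=0), axis=-1))
print(classification_report(np.concatenate(y_true), np.concatenate(y_pred),
                            target_names=CLASSES))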
Handwritten text recognition system using Raspberry Pi with OpenCV TensorFlow
Alsayaydeh, Jamil Abedalrahim Jamil; Jie, Tommy Lee Chuin; Bacarra, Rex; Ogunshola, Benny; Yaacob, Noorayisahbe Mohd
International Journal of Electrical and Computer Engineering (IJECE) Vol 15, No 2: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v15i2.pp2291-2303

Abstract

Handwritten text recognition (HTR) technology has revolutionized the way handwritten data is converted and analyzed. This work focuses on developing an HTR system using advanced deep learning architectures and techniques. The aim is to create a model for real-time analysis and detection of handwritten text. The proposed architecture, a convolutional neural network (CNN), is investigated and implemented with tools such as OpenCV and TensorFlow. The model is trained on large handwritten datasets to enhance recognition accuracy. The system's performance is evaluated on accuracy, precision, real-time capability, and potential for deployment on platforms such as the Raspberry Pi. The outcome is a robust HTR system that accurately converts handwritten text to digital formats. The developed system achieved a high accuracy rate of 91.58% in recognizing English letters and digits and outperformed other models with 81.77% mAP, 78.85% precision, 79.32% recall, 79.46% F1-score, and 82.4% receiver operating characteristic (ROC). This research contributes to the advancement of HTR technology by enhancing its precision and utility.
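
A minimal sketch of the kind of pipeline the abstract describes, assuming OpenCV for character preprocessing and a small Keras CNN for classification. The 28x28 input size, the 36-class letter/digit label set, and the file name are assumptions; the paper's actual architecture and training data are not specified here.

# Minimal sketch, not the published system: OpenCV preprocessing plus a small CNN.
import cv2
import numpy as np
import tensorflow as tf

NUM_CLASSES = 36  # assumed label set: digits 0-9 plus letters A-Z

def preprocess(image_path):
    """Load one handwritten character image and normalize it for the CNN."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (28, 28))  # assumed input size
    _, img = cv2.threshold(img, 0, 255,
                           cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return img.astype("float32")[..., np.newaxis] / 255.0

# A small CNN classifier; the paper's exact architecture is not given in the abstract.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training on a labeled character dataset (e.g. via model.fit), a single
# character crop can be classified like this (hypothetical file name):
x = preprocess("char_sample.png")[np.newaxis, ...]
pred_class = int(np.argmax(model.predict(x), axis=-1)[0])
print("predicted class index:", pred_class)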
Optimized Visualization of Digital Image Steganography using Least Significant Bits and AES for Secret Key Encryption
Jatmoko, Cahaya; Sinaga, Daurat; Lestiawan, Heru; Astuti, Erna Zuni; Sari, Christy Atika; Shidik, Guruh Fajar; Andono, Pulung Nurtantio; Yaacob, Noorayisahbe Mohd
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control Vol. 10, No. 3, August 2025
Publisher : Universitas Muhammadiyah Malang

DOI: 10.22219/kinetik.v10i3.2252

Abstract

Data hiding is a technique used to embed secret information into a cover medium, such as an image, audio, or video, with minimal distortion, ensuring that the hidden data remains imperceptible to an observer. The key challenge lies in embedding secret information securely while maintaining the original quality of the host medium. In image-based data hiding, this often means ensuring the hidden data cannot be easily detected or extracted while still preserving the visual integrity of the host image. To overcome this, we propose a combination of AES (Advanced Encryption Standard) encryption and Least Significant Bit (LSB) steganography. AES encryption is used to protect the secret images, while the LSB technique is applied to embed the encrypted images into the host images, ensuring secure data transfer. The dataset includes grayscale 256x256 images, specifically "aerial.jpg," "airplane.jpg," and "boat.jpg" as host images, and "Secret1," "Secret2," and "Secret3" as the encrypted secret images. Evaluation metrics such as Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Unified Average Changing Intensity (UACI), and Number of Pixels Changed Rate (NPCR) were used to assess both the image quality and security of the stego images. The results showed low MSE (0.0012 to 0.0013), high PSNR (58 dB), and consistent UACI and NPCR values, confirming both the preservation of image quality and the effectiveness of encryption for securing the secret data.
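
A minimal sketch of the AES-plus-LSB idea described above, using PyCryptodome for encryption and NumPy for bit embedding. The file names, the EAX cipher mode, and the 32-bit length header are illustrative choices rather than the paper's exact scheme; one LSB per pixel of a 256x256 grayscale cover holds roughly 8 KB of payload.

# Minimal sketch, not the paper's implementation: AES-encrypt a secret, embed the
# ciphertext in the cover image's least significant bits, and report MSE/PSNR.
import cv2
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def embed_lsb(cover, payload):
    """Overwrite the least significant bit of each pixel with the payload bits."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    header = np.unpackbits(np.frombuffer(len(bits).to_bytes(4, "big"), dtype=np.uint8))
    stream = np.concatenate([header, bits])
    flat = cover.flatten()
    if stream.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[:stream.size] = (flat[:stream.size] & 0xFE) | stream
    return flat.reshape(cover.shape)

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

key = get_random_bytes(16)                        # AES-128 secret key
cipher = AES.new(key, AES.MODE_EAX)               # authenticated mode, illustrative choice
secret = open("secret_message.bin", "rb").read()  # hypothetical small secret (<= ~8 KB)
ciphertext, tag = cipher.encrypt_and_digest(secret)

cover = cv2.imread("aerial.jpg", cv2.IMREAD_GRAYSCALE)     # 256x256 host image
stego = embed_lsb(cover, cipher.nonce + tag + ciphertext)  # nonce/tag needed for decryption
mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
print(f"MSE={mse:.6f}, PSNR={psnr(cover, stego):.2f} dB")
cv2.imwrite("stego_aerial.png", stego)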
CLASSIFICATION OF ORGANIC AND NON-ORGANIC WASTE WITH CNN-MOBILENET-V2
Oktayaessofa, Eqania; Sari, Christy Atika; Rachmawanto, Eko Hari; Yaacob, Noorayisahbe Mohd
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 4 (2024): JUTIF Volume 5, Number 4, August 2024
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2024.5.4.2165

Abstract

Data from the Ministry of Environment and Forestry show that the amount of organic and non-organic waste in 2023 began to decline compared to the previous year. However, waste management at central landfills is still not optimal, which creates problems for communities and the environment because it can cause pollution and harm public health around disposal sites. Waste management at landfills is difficult because people still dispose of waste without separating it first, compounded by a lack of public awareness and knowledge. One way to help address this problem is to develop an application that helps people understand the importance of waste sorting and supports public outreach. Such an application requires a model that can classify waste by type with high accuracy. In this study, we propose a deep learning model, a CNN with the MobileNetV2 architecture, to classify organic and non-organic waste. The model uses a dataset of 4,380 images of organic and non-organic waste, processed through three preprocessing stages: resizing, normalization, and augmentation. Training on the processed data yielded 98.47% accuracy, 97% precision, 97% recall, and a 97% F1-score. These results indicate that the proposed model falls in the excellent category.
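
A minimal sketch of the described pipeline, assuming TensorFlow/Keras: the three preprocessing stages (resize, normalization, augmentation) feeding a frozen MobileNetV2 feature extractor with a binary head. The directory layout, image size, and augmentation choices are assumptions for illustration, not the authors' settings.

# Minimal sketch, not the authors' code: MobileNetV2 transfer learning for
# binary organic vs. non-organic waste classification.
import tensorflow as tf

IMG_SIZE = (224, 224)  # resize stage; the resolution actually used is not stated

# Assumed layout: waste_dataset/{organic,non_organic}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "waste_dataset", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "waste_dataset", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=32)

# Augmentation stage: simple flips and rotations, active only during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# MobileNetV2 as a frozen ImageNet feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    augment,
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # normalization stage
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # organic vs. non-organic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)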