Articles

Found 28 Documents

Implementation of YOLOv7 Model for Human Detection in Difficult Conditions B, Arijal; Sunyoto, Andi; Hanafi, M.
Journal of Electrical Engineering and Computer (JEECOM) Vol 7, No 1 (2025)
Publisher : Universitas Nurul Jadid

DOI: 10.33650/jeecom.v7i1.10662

Abstract

The rapid development of artificial intelligence technology in recent decades has produced highly efficient object detection algorithms, including algorithms for human detection under difficult conditions. Human detection is one of the major challenges in computer vision, as it involves complex factors such as obstructed human objects, pose variations, small low-resolution human objects, and the presence of fake human objects such as statues or images. This research used the Systematic Literature Review (SLR) method to select the algorithm studied, namely YOLOv7. The three YOLOv7 models tested are YOLOv7x.pt, YOLOv7-w6-person.pt, and YOLOv7-w6-pose.pt, selected for their strength in detecting human objects and their relevance to complex scenarios. Tests were conducted on 100 images obtained from the internet and divided into four categories of human objects under difficult conditions, representing various challenges in human detection. Analysis was performed using a confusion matrix to evaluate performance metrics such as accuracy, precision, recall, and F1-score. Based on the test results, the YOLOv7-w6-person.pt model showed the best overall performance, especially in detecting humans under occlusion and complex lighting, with a precision of 90.4%, a recall of 88.7%, and an F1-score of 89.5%. This model has higher accuracy, precision, and F1-score than the other models, making it a reliable choice for human detection in difficult scenarios. These findings not only demonstrate the relevance of YOLOv7 as a reliable human detection algorithm but also provide a basis for further optimization of YOLOv7-based human detection systems, both through improving the model architecture and adapting to more specific datasets. This research contributes to the development of human detection technologies for real-world applications such as surveillance, crowd analysis, and automated safety systems.
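The evaluation described above reduces to deriving accuracy, precision, recall, and F1-score from a confusion matrix. The sketch below is a hypothetical illustration of that step only; the counts are placeholders, not the study's results.

```python
# Minimal sketch: metrics from a 2x2 detection confusion matrix.
# The counts passed in at the bottom are illustrative placeholders,
# not the values reported in the paper.
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

print(detection_metrics(tp=47, fp=5, fn=6, tn=42))  # example counts only
```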
LUNGINFORMER: A Multiclass of lung pneumonia diseases detection based on chest X-ray image using contrast enhancement and hybridization inceptionresnet and transformer Hanafi, Hanafi
International Journal of Advances in Intelligent Informatics Vol 11, No 2 (2025): May 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i2.1964

Abstract

Lung pneumonia is a serious disease worldwide. In December 2019, COVID-19 was first identified in Wuhan, China, and caused severe lung pneumonia. Most lung pneumonia cases are diagnosed using traditional medical tools and specialized medical personnel, a process that is both time-consuming and expensive. To address this problem, many researchers have employed deep learning algorithms to develop automated pneumonia detection systems. Deep learning, however, faces the issues of low-quality X-ray images and biased X-ray image information; since the X-ray image is the primary material for building a transfer learning model, such dataset problems lead to inaccurate classification results, and many previous deep learning approaches have suffered from them. To address this situation, we propose a novel framework that combines two essential mechanisms: advanced image contrast enhancement based on Contrast Limited Adaptive Histogram Equalization (CLAHE) and a hybrid deep learning model combining InceptionResNet and a Transformer. Our framework is named LUNGINFORMER. The experimental results show that LUNGINFORMER achieved an accuracy of 0.98, a recall of 0.97, an F1-score of 0.98, and a precision of 0.96. In the AUC test, LUNGINFORMER achieved excellent performance with a score of 1.00 for each class. We believe this performance is driven by the contrast enhancement and the hybrid deep learning model.
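As a rough illustration of the CLAHE preprocessing step named above, the following sketch applies OpenCV's CLAHE to a grayscale chest X-ray; the clip limit, tile size, and file path are assumptions rather than the paper's reported settings.

```python
# Hedged sketch of CLAHE contrast enhancement with OpenCV.
# clipLimit, tileGridSize, and the file path are illustrative assumptions.
import cv2

def enhance_xray(path: str):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                # X-rays are single-channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # local histogram equalization
    return clahe.apply(gray)                                     # contrast-enhanced image

enhanced = enhance_xray("chest_xray_sample.png")                 # placeholder filename
```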
Pengembangan Model Klasterisasi Topik Hadis Bukhari Muslim Menggunakan BERT dengan Penambahan Fitur Semantik Asy'ari, Ahmad Hasyim; Hanafi, Muhammad
Indonesian Journal Computer Science Vol. 4 No. 2 (2025): Oktober 2025
Publisher : LPPM Universitas Bina Sarana Informatika

DOI: 10.31294/ijcs.v4i2.8931

Abstract

Hadith clustering is an important task in Islamic studies, given the size and complexity of the hadith corpus. Traditional clustering approaches often struggle to capture the deep semantic context of hadiths, leading to less accurate topic groupings. Recent advances in Natural Language Processing (NLP), such as the Bidirectional Encoder Representations from Transformers (BERT) model, have shown promising results in addressing this challenge by providing rich contextual embeddings. However, using BERT alone can overlook important linguistic features, potentially limiting clustering performance. This study proposes an enhanced clustering model for the Sahih Bukhari and Sahih Muslim hadith collections that integrates BERT embeddings with additional semantic features, including text length, Term Frequency (TF), and Inverse Document Frequency (IDF). Using the BERTopic framework, this approach captures nuanced relationships between hadiths and yields clustering results that are more contextually accurate. Experiments show that the integrated method significantly improves clustering performance, as indicated by a silhouette score of -0.1 and a Davies-Bouldin index of 2.6, compared with a silhouette score of -0.145 and a Davies-Bouldin index of 6.6 without the integration. This enhancement therefore offers a more precise method for topic clustering in Islamic studies, facilitating better organization and understanding of hadith texts.
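To make the feature-fusion idea concrete, the sketch below concatenates sentence embeddings with text length and a TF-IDF summary before clustering. It is an illustrative approximation only: the embedding model, the use of plain KMeans in place of the full BERTopic pipeline, and the cluster count are assumptions, not the study's configuration.

```python
# Hedged sketch: fuse BERT-style sentence embeddings with simple semantic
# features (text length, mean TF-IDF weight) and cluster the fused vectors.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

docs = ["hadith text 1 ...", "hadith text 2 ...",
        "hadith text 3 ...", "hadith text 4 ..."]          # placeholder corpus

embeddings = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2").encode(docs)
tfidf = TfidfVectorizer().fit_transform(docs)
extra = np.column_stack([
    [len(d.split()) for d in docs],                        # text-length feature
    np.asarray(tfidf.mean(axis=1)).ravel(),                # mean TF-IDF weight per document
])
fused = np.hstack([embeddings, extra])                     # embedding + semantic features

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fused)
print(silhouette_score(fused, labels), davies_bouldin_score(fused, labels))
```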
Optimasi Deteksi Penipuan Kartu Kredit Menggunakan Regresi Logistik dengan Particle Swarm Optimization Lopes, David Dos Santos Pinto; Hanafi , Muhammad; Nugraha, Icha Nura
Indonesian Journal Computer Science Vol. 4 No. 2 (2025): Oktober 2025
Publisher : LPPM Universitas Bina Sarana Informatika

DOI: 10.31294/ijcs.v4i2.8984

Abstract

The growing prevalence of digital transactions has led to a surge in credit card fraud, requiring advanced detection methods that balance accuracy and computational efficiency. This study proposes an optimized fraud detection system using Logistic Regression (LR) with Particle Swarm Optimization (PSO). To address the challenges of class imbalance and high-dimensional data, the framework combines the Synthetic Minority Oversampling Technique (SMOTE) for data balancing, RobustScaler for outlier-resistant normalization, and Principal Component Analysis (PCA) for dimensionality reduction. The PSO algorithm optimizes the LR regularization parameter (C), improving model generalization and detection performance. Experiments were conducted on the Credit Card dataset containing 284,807 transactions, in which fraud cases represent only 0.172% of the data, a severe class imbalance. The proposed model achieved 97.47% accuracy, 99.82% precision, 89% recall (fraud class), and a ROC-AUC score of 0.97, indicating superior performance in distinguishing fraudulent transactions. The confusion matrix showed 110 true positives (correct fraud detections) with only 13 false negatives, demonstrating robust fraud identification while minimizing false alarms. Comparative analysis across different test splits confirmed the model's consistency, with F1-scores consistently above 98.5%. These results highlight the effectiveness of PSO-based hyperparameter tuning in improving LR performance, particularly on imbalanced datasets. The integration of SMOTE and PCA ensures computational efficiency without sacrificing detection capability. The approach provides a scalable, high-precision solution for real-time fraud detection, reducing financial losses while maintaining operational efficiency.
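A rough sketch of the preprocessing-plus-classifier pipeline described above is given below. SMOTE, RobustScaler, PCA, and Logistic Regression mirror the components named in the abstract, but the synthetic data, split ratio, PCA size, and the plain grid search standing in for PSO are all assumptions.

```python
# Hedged sketch: imbalanced-data pipeline with SMOTE, RobustScaler, PCA, and
# Logistic Regression; a grid search over C stands in for the paper's PSO tuning.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import RobustScaler

# Synthetic imbalanced data as a placeholder for the Credit Card dataset.
X, y = make_classification(n_samples=5000, n_features=30, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", RobustScaler()),          # outlier-resistant normalization
    ("smote", SMOTE(random_state=0)),   # oversample the minority (fraud) class
    ("pca", PCA(n_components=10)),      # dimensionality reduction
    ("lr", LogisticRegression(max_iter=1000)),
])

# Stand-in for PSO: search the regularization strength C directly.
search = GridSearchCV(pipe, {"lr__C": [0.01, 0.1, 1, 10]}, scoring="roc_auc", cv=3)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```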
The effectiveness of using RFID and IoT in digital transformation processes in garment companies using the UTAUT model2 Sentoso, Thedjo; Kusrini, Kusrini; Hanafi, Hanafi
Gema Wiralodra Vol. 14 No. 2 (2023): gema wiralodra
Publisher : Universitas Wiralodra

DOI: 10.31943/gw.v14i2.511

Abstract

This study aims to analyze the effectiveness of using RFID and IoT in the digital transformation process of a garment company using the UTAUT2 model. The research is needed because user intentions and behavior can influence production effectiveness. A quantitative approach with a survey method was used to achieve the research objectives. The respondents were 193 employees working in the preparation area. The data collected from the questionnaire were analyzed using inferential statistics. The results show that employee acceptance of RFID and IoT in the digital transformation process received a positive response: the average value of each variable falls in the range 3.79–4.44 (on a scale of 1 to 5). In addition, Performance Expectancy, Effort Expectancy, and Price Value were found to positively influence Behavioral Intention, while Habit and Behavioral Intention positively influenced Use Behavior. No positive effect was found for Social Influence and Hedonic Motivation on Behavioral Intention, or for Facilitating Conditions on Use Behavior.
ANALYSIS OF PUBLIC OPINION ON INDONESIAN TELEVISION SHOWS USING SUPPORT VECTOR MACHINE Farasalsabila, Fidya; Utami, Ema; Hanafi, Muhammad
JURTEKSI (jurnal Teknologi dan Sistem Informasi) Vol. 10 No. 2 (2024): Maret 2024
Publisher : Lembaga Penelitian dan Pengabdian Kepada Masyarakat (LPPM) STMIK Royal Kisaran

DOI: 10.33330/jurteksi.v10i2.2935

Abstract

A great number of academics are now conducting research on sentiment analysis using supervised machine learning techniques. Such research can draw on a variety of sources, including movie reviews, Twitter posts, online product reviews, blogs, discussion forums, and other social networks. With the progress of technology, individuals can now effortlessly use social media platforms to access and share information and to express their viewpoints to the public, without constraints of distance or time. Twitter is a social media network that serves as a repository of opinions. Diverse techniques are employed to provide optimal and realistically precise detection. The analysis and discussion affirm that the Support Vector Machine (SVM) was effectively employed in this study, using public opinion data on television program reviews in Indonesia. An SVM classifier is used to examine the Twitter dataset with various parameters. The study completed the preprocessing stage with a total of 400 data points, consisting of 320 reviews from 4 television shows for training and 80 reviews for testing. The data were filtered and classified using SVM, with 200 positive and 200 negative data points for comparison. The experiment used SVM with TF-IDF to achieve the most accurate test results. The test accuracy was 80%, while the training accuracy reached 100%. Keywords: Sentiment Analysis; Support Vector Machine; Television Show Reviews; TF-IDF
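As an illustration of the TF-IDF plus SVM pipeline described above, the sketch below uses scikit-learn with a handful of toy reviews; the texts, labels, kernel, and split are assumptions, not the study's dataset or tuned parameters.

```python
# Hedged sketch: TF-IDF features fed into an SVM classifier for sentiment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

texts = ["acara ini sangat bagus", "tayangan membosankan sekali",
         "program televisi favorit saya", "kualitas acaranya buruk"]  # placeholder reviews
labels = [1, 0, 1, 0]                                                 # 1 = positive, 0 = negative

model = Pipeline([("tfidf", TfidfVectorizer()), ("svm", SVC(kernel="linear"))])
X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.5,
                                          stratify=labels, random_state=0)
model.fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))
```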
Perbandingan Naive Bayes dan Random Forest untuk Prediksi Perilaku Peserta Program Rujuk Balik Djatmiko, Widdi; Kusrini; Hanafi
JURNAL FASILKOM Vol. 13 No. 3 (2023): Jurnal FASILKOM (teknologi inFormASi dan ILmu KOMputer)
Publisher : Universitas Muhammadiyah Riau

DOI: 10.37859/jf.v13i3.6070

Abstract

The Referral Back Program (Program Rujuk Balik, PRB) is a service provided to BPJS Kesehatan members with a history of chronic diseases, including diabetes mellitus, hypertension, heart disease, asthma, Chronic Obstructive Pulmonary Disease (COPD), epilepsy, stroke, schizophrenia, and systemic lupus erythematosus. Members enrolled in the PRB have health conditions that are already stable or controlled but still require medication. Enrolled members are required to make routine monthly visits (active behavior) to the First-Level Health Facility (Fasilitas Kesehatan Tingkat Pertama, FKTP) of their choice. However, some members remain unwilling to make these routine visits (passive behavior) because they feel healthy and are not currently ill. This study aims to predict the behavior of PRB members and compare the performance of the Naive Bayes and Random Forest algorithms. The dataset is private data obtained from the BPJS Kesehatan Balikpapan branch office. The experiments varied the train/test split ratios: 70:30, 80:20, and 90:10. The final result is that Random Forest performs better than Naive Bayes, with the best scores obtained at the 70:30 split: 97.52% accuracy, 96.56% precision, and 98.71% recall.
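The comparison described above can be sketched as follows: both classifiers evaluated across 70:30, 80:20, and 90:10 splits. Synthetic data stands in for the private BPJS Kesehatan dataset, and no tuning from the study is reproduced here.

```python
# Hedged sketch: Naive Bayes vs. Random Forest across three train/test splits.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)  # placeholder data

for test_size in (0.30, 0.20, 0.10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size,
                                              stratify=y, random_state=0)
    for name, clf in [("Naive Bayes", GaussianNB()),
                      ("Random Forest", RandomForestClassifier(random_state=0))]:
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        print(f"{int((1 - test_size) * 100)}:{int(test_size * 100)} {name}: "
              f"acc={accuracy_score(y_te, pred):.3f} "
              f"prec={precision_score(y_te, pred):.3f} rec={recall_score(y_te, pred):.3f}")
```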
Towards Interpretable Intrusion Detection: A Double-Layer GRU with Feature Fusion Explained by SHAP and LIME Wijaya, Mochamad Rozikul; M. Hanafi
Informatik : Jurnal Ilmu Komputer Vol 21 No 3 (2025): Desember 2025
Publisher : Fakultas Ilmu Komputer

DOI: 10.52958/iftk.v21i3.12187

Abstract

Computer network security has become increasingly important with the growing complexity of cyberattacks. Deep learning-based Intrusion Detection Systems (IDS) represent a potential solution due to their capability to capture sequential patterns in network traffic. This study proposes a Double-Layer GRU-based IDS with Feature Fusion to enhance the representation of both numerical and categorical data in the NSL-KDD dataset. The training process employs systematic preprocessing techniques, including normalization and one-hot encoding. Experimental results demonstrate high accuracy and generalization with stable performance on both training and testing data, as well as competitive macro F1-scores for multi-class attack detection. Furthermore, interpretability aspects are explored through Explainable Artificial Intelligence (XAI) methods using SHAP and LIME. SHAP provides global insights into the contributions of important features, while LIME explains the influence of features at the local level for individual predictions. The integration of both methods not only enhances transparency and trust in the IDS but also offers deeper insights into dominant attributes in detecting attack patterns. Accordingly, this study contributes to the development of IDS that are accurate, interpretable, and applicable to modern network security.
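As an architectural illustration only (not the authors' exact network), the sketch below stacks two GRU layers over fused numeric and one-hot encoded features using Keras; the layer widths, dropout, feature count, and class grouping are assumptions.

```python
# Hedged sketch: a double-layer GRU classifier over fused preprocessed features.
import numpy as np
import tensorflow as tf

n_features = 122   # assumed: NSL-KDD numeric features plus one-hot categorical columns
n_classes = 5      # assumed grouping: normal plus four attack categories

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, n_features)),       # each record as a length-1 sequence
    tf.keras.layers.GRU(64, return_sequences=True),     # first GRU layer
    tf.keras.layers.GRU(32),                             # second GRU layer
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder fused input: normalized and one-hot encoded records,
# reshaped to (samples, timesteps, features).
X = np.random.rand(256, 1, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
```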