Articles

PERANCANGAN WEBSITE E-COMMERCE MULTI CABANG PADA PT. PASAR SWALAYAN MAJU BERSAMA MENGGUNAKAN ALGORITMA JACCARD COEFFICIENT Andy, Andy; Agus Maringan Siahaan; Satriya Miharja; Robet, Robet; Didik Aryanto
Majalah Ilmiah METHODA Vol. 14 No. 1 (2024): Majalah Ilmiah METHODA
Publisher : Universitas Methodist Indonesia

DOI: 10.46880/methoda.Vol14No1.pp25-32

Abstract

PT. Pasar Swalayan Maju Bersama is a company in the supermarket sector with three branches: Maju Bersama Glugur, Maju Bersama Merak Jingga, and Maju Bersama Marendal. In practice, however, the company has not yet made good use of marketing media, either internally or externally. On the internal side, it has not properly integrated the sales processes of its three branches. A further problem concerns the volume of transaction data held in the company's storage: data recorded every day burdens storage if it is not turned into useful knowledge for the company. Given these problems, a multi-branch system implemented as an E-Commerce website needs to be developed. This research also applies the Jaccard Coefficient algorithm so that the company's large volume of data can be processed into product recommendations for customers. The results show that the Jaccard Coefficient algorithm is capable of processing company data into knowledge in the form of product recommendations that are relevant to customers.
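As a minimal sketch of the recommendation idea described above, the Jaccard coefficient can score a customer's current cart against past transactions and recommend items from the most similar one; the transaction IDs and item names below are hypothetical, not taken from the paper:

```python
def jaccard(a, b):
    """Jaccard coefficient: |A ∩ B| / |A ∪ B| for two item sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Recommend products from the most similar past transaction (hypothetical data).
history = {
    "T1": {"rice", "sugar", "oil"},
    "T2": {"rice", "soap"},
}
cart = {"rice", "sugar"}
best = max(history, key=lambda t: jaccard(cart, history[t]))
recommendations = history[best] - cart   # items the similar customer also bought
```

Here T1 scores 2/3 against the cart and T2 scores 1/3, so the unseen item from T1 becomes the recommendation.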
Attention Augmented Deep Learning Model for Enhanced Feature Extraction in Cacao Disease Recognition Robet, Robet; Perangin Angin, Johanes Terang Kita; Siregar, Tarq Hilmar
Sinkron : jurnal dan penelitian teknik informatika Vol. 9 No. 4 (2025): Articles Research October 2025
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v9i4.15249

Abstract

Accurate cacao disease recognition is critical for safeguarding yields and reducing losses. Prior cacao studies primarily rely on handcrafted descriptors (e.g., Color Histogram, LBP, GLCM) or standard CNN/transfer-learning pipelines, often limited to ≤3 classes and a single plant organ; explicit channel-spatial attention and comprehensive multiclass evaluation remain uncommon. To the best of our knowledge, no prior work integrates Squeeze-and-Excitation (SE) and the Convolutional Block Attention Module (CBAM) on a ResNeXt-50 backbone for six-class cacao disease classification, accompanied by a standardized ablation study and t-SNE-based interpretability. We propose a six-class classifier (five diseases + healthy) built on ResNeXt-50 enhanced with SE (channel recalibration) and CBAM (channel-spatial emphasis) to highlight lesion-relevant cues. The dataset comprises labeled leaf and pod images from public sources collected under field-like conditions; preprocessing includes resizing to 224x224, normalization, and augmentation (flips, small rotations, color jitter, random resized crops). Trained with Adam and early stopping, ResNeXt-50+SE+CBAM attains 97% test accuracy and 0.97 macro-F1, surpassing a ResNeXt-50 baseline of 94% accuracy and 0.95 macro-F1 as well as SE-only and CBAM-only variants. Confusion-matrix and t-SNE analyses show fewer mix-ups among visually similar classes and clearer separability, while the ablation validates the complementary benefits of SE and CBAM. On a desktop-hosted, web-based setup, batch-1 inference at 224x224 takes 7.46 ms/image (134 FPS), demonstrating real-time capability. The findings support deployment as browser-based decision-support tools for farmers and integration into continuous field-monitoring systems.
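The SE channel-recalibration step the model builds on can be sketched in NumPy: squeeze each channel to a scalar by global average pooling, pass it through a two-layer bottleneck, and rescale the channels by the resulting sigmoid gates. The random weights below stand in for the learned FC layers; this illustrates the mechanism only, not the paper's trained network:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation channel recalibration (NumPy sketch).
    x: feature map (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    z = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    h = np.maximum(z @ w1, 0.0)          # excitation FC 1 + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(h @ w2)))  # excitation FC 2 + sigmoid -> (C,)
    return x * s[:, None, None]          # rescale each channel by its gate

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))       # toy feature map: 8 channels, 4x4
w1 = rng.standard_normal((8, 2)) * 0.1   # reduction ratio r = 4
w2 = rng.standard_normal((2, 8)) * 0.1
y = se_block(x, w1, w2)                  # same shape, channels re-weighted
```

Because every gate lies in (0, 1), the block can only attenuate channels, which is how it emphasizes lesion-relevant features relative to the rest.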
Implementasi Chatbot Otomatis Akademik Berbasis Web Menggunakan LLM dan Rule-Based System Studi Kasus: STMIK Time Alvin, Alvin; Robet, Robet; Tarigan, Feriani Astuti
JURNAL INFORMATIKA DAN KOMPUTER Vol 9, No 3 (2025): Oktober 2025
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat - Universitas Teknologi Digital Indonesia

DOI: 10.26798/jiko.v9i3.2209

Abstract

Advances in artificial intelligence (AI) have driven more efficient information services through chatbots that can interact naturally with users. A chatbot is an application of natural language processing (NLP) technology designed to understand and respond to human conversation. This study develops a web-based automated chatbot for STMIK Time academic services by integrating two main approaches: a Rule-Based System and a Large Language Model (LLM). The method covers system design using Flowise AI as the workflow-automation platform, followed by two stages of performance testing: black-box testing to assess functionality and user-experience testing to measure users' perceptions of speed, accuracy, and satisfaction. The test results show that the chatbot works well, with a response-speed score of 51.5%, an accuracy rate of 54.5%, and a user-satisfaction rate of 60.6%. These results confirm that the system can deliver academic information services effectively, although speed and responsiveness still need improvement. In conclusion, the Rule-Based System is better suited to structured conversations, while the LLM is more effective in dynamic contexts. This research is expected to serve as a basis for developing more adaptive and responsive AI-based academic services in higher education.
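The hybrid routing described above, rules first for structured questions, LLM fallback for dynamic ones, can be sketched in a few lines; the keywords, canned replies, and the `llm` callable are hypothetical stand-ins for the Flowise AI workflow, not the paper's actual configuration:

```python
# Hypothetical rule table: keyword -> canned academic-service reply.
RULES = {
    "jadwal": "Academic schedules are published on the campus portal.",
    "krs": "Course registration (KRS) opens at the start of each semester.",
}

def answer(question, llm=None):
    """Route a question: rule-based match first, else delegate to an LLM callable."""
    q = question.lower()
    for keyword, reply in RULES.items():
        if keyword in q:
            return reply, "rule"       # structured query: deterministic answer
    if llm is not None:
        return llm(question), "llm"    # dynamic query: generative fallback
    return "Sorry, please contact academic services.", "default"
```

Usage: `answer("Kapan KRS dibuka?")` hits the rule path, while an unmatched question is handed to whatever LLM client is plugged in.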
Comparative Analysis of DNA Sequence Alignment Algorithms in SARS-CoV-2 Edi, Edi; Robet, Robet; Harahap, Nurhayati
Sinkron : jurnal dan penelitian teknik informatika Vol. 9 No. 4 (2025): Articles Research October 2025
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v9i4.15323

Abstract

Sequence alignment is fundamental in bioinformatics, with the Smith-Waterman (local) and Needleman-Wunsch (global) algorithms widely applied. However, comparative analyses on highly similar viral genomes such as SARS-CoV-2 remain scarce. This study systematically evaluated both algorithms using the first 5,000 nucleotides of two SARS-CoV-2 genomes (29,903 and 29,684 nt) under four parameter configurations: standard, low gap penalty, high gap penalty, and high match reward. Performance was assessed through alignment score, sequence identity, gap distribution, execution time, and parameter sensitivity. Both algorithms produced identical sequence identity (97.80%), with 4,943 matches out of 5,054 positions. Smith-Waterman consistently yielded higher alignment scores (an advantage of 12.6–112 points), while Needleman-Wunsch was substantially faster (0.7752 vs 3.9014 s), a 5.03-fold gain in computational efficiency. These findings indicate that both methods are reliable for highly similar viral sequences, with a trade-off between scoring precision and computational speed. This study provides the first parameter-sensitive comparison for SARS-CoV-2 genomes, emphasizing how parameter tuning can influence performance outcomes. A key limitation is that the analysis was restricted to the first 5,000 nucleotides, which may not capture variability across the complete genome.
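Needleman-Wunsch, one of the two compared algorithms, reduces to a small dynamic program over a score matrix; the match/mismatch/gap values below are illustrative defaults, not the paper's four configurations:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score via Needleman-Wunsch dynamic programming."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap                       # leading gaps in b
    for j in range(1, m + 1):
        dp[0][j] = j * gap                       # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag,                 # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,   # gap in b
                           dp[i][j - 1] + gap)   # gap in a
    return dp[n][m]

score = needleman_wunsch("GATTACA", "GATCACA")   # 6 matches, 1 mismatch -> 5
```

Smith-Waterman differs only in clamping each cell at zero and taking the matrix maximum, which is why it can report higher local scores while the global variant stays cheaper to interpret end to end.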
Akurasi K-Means dengan Menggunakan Cluster dan Titik Grid Terbaik pada Pemetaan Grid Interatif K-Means Perangin Angin, Johanes Terang Kita; Rizkita, Ari; Robet, Robet; Pribadi, Octara
METHOMIKA: Jurnal Manajemen Informatika & Komputerisasi Akuntansi Vol. 9 No. 1 (2025): METHOMIKA: Jurnal Manajemen Informatika & Komputerisasi Akuntansi
Publisher : Universitas Methodist Indonesia

DOI: 10.46880/jmika.Vol9No1.pp127-129

Abstract

Traditional K-Means faces two main problems: determination of the initial centroids and poor initial clusters. Choosing the initial centroids at random is one of the main weaknesses of classical K-Means, leading to low accuracy and long computation times. Likewise, selecting a centroid for each cluster without examining each cluster's performance can also degrade the accuracy obtained. This study contributes an approach that combines good initial-centroid determination with the use of a good cluster. Good initial centroids are determined using Grid Mapping K-Means, which divides centroid selection across several grid points. The result of this research is a combination of Iterative K-Means with Grid Mapping K-Means into Iterative Grid Mapping K-Means, which obtains both good initial centroids and good clusters, as shown in the tables for the iris and abalone datasets; a comparison of the variables in iris and abalone shows their effect on which cluster is best.
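One plausible reading of grid-based seeding is to partition the bounding box of the data into grid cells and seed centroids from the densest cells rather than from random points. The sketch below follows that reading for 2-D data; the cell-selection rule and the function name are assumptions for illustration, not necessarily the paper's exact procedure:

```python
def grid_seed_centroids(points, k, g=4):
    """Seed k centroids from the k densest cells of a g x g grid (2-D sketch)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    cells = {}
    for x, y in points:
        # map each point to its grid cell (clamp the upper edge into cell g-1)
        i = min(int((x - x0) / (x1 - x0 + 1e-12) * g), g - 1)
        j = min(int((y - y0) / (y1 - y0 + 1e-12) * g), g - 1)
        cells.setdefault((i, j), []).append((x, y))
    # densest cells first; each contributes its mean point as a seed
    dense = sorted(cells.values(), key=len, reverse=True)[:k]
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in dense]

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
seeds = grid_seed_centroids(pts, k=2)   # one seed per natural cluster
```

Compared with random initialization, seeds drawn from dense cells start near the true cluster cores, which is the effect the paper attributes to grid mapping.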
A Comparative Study of Machine Learning and Deep Learning Models for Heart Disease Classification Simanjuntak, Martina Sances; Robet, Robet; Hoki, Leony
Journal of Applied Informatics and Computing Vol. 9 No. 6 (2025): December 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i6.11546

Abstract

Heart disease remains one of the leading causes of mortality worldwide, necessitating accurate early detection. This study compares the performance of several Machine Learning (ML) and Deep Learning (DL) algorithms in heart disease classification using the Heart Disease dataset of 918 samples. The methods tested were Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine (SVM), Logistic Regression, K-Nearest Neighbor (KNN), and a Deep Neural Network (DNN). Preprocessing included feature normalization, an 80:20 data split, and simple hyperparameter tuning for parameter-sensitive models. Evaluation used accuracy, precision, recall, F1-score, AUC, and confusion-matrix analysis to identify error patterns. The results showed that SVM and DNN achieved the highest accuracies, 91.3% and 92.1% respectively. However, the DNN incurs higher computational costs and a greater risk of overfitting on small datasets. These findings confirm that traditional ML models such as SVM remain highly competitive on tabular medical data.
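The evaluation metrics listed above all follow directly from the binary confusion matrix; the sketch below computes them from the four cell counts. The example counts are hypothetical (a made-up 184-case test split), not the paper's results:

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from a binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many real
    rec = tp / (tp + fn) if tp + fn else 0.0    # of real positives, how many found
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Hypothetical counts for a 20% held-out split (184 of 918 samples).
acc, prec, rec, f1 = binary_metrics(tp=95, fp=8, fn=9, tn=72)
```

Reading all four metrics together matters here: on imbalanced medical data a model can post high accuracy while recall, the clinically critical number, stays low.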
SECURE DOCUMENT NOTARIZATION: A BLOCKCHAIN-BASED DIGITAL SIGNATURE VERIFICATION SYSTEM Tio, Nicholas; Pribadi, Octara; Robet, Robet
JIKO (Jurnal Informatika dan Komputer) Vol 8, No 3 (2025)
Publisher : Universitas Khairun

DOI: 10.33387/jiko.v8i3.10811

Abstract

The increasing need for trustworthy digital document verification presents challenges in ensuring authenticity, transparency, and tamper resistance without relying on centralized authorities. This study aims to develop and evaluate a decentralized document notarization system using Ethereum and IPFS that offers secure, transparent, and cost-efficient verification. The system employs modular smart contracts deployed through a factory pattern to create user-specific verifier instances, enabling document submission, revocation, and verification using keccak-256 hashes, ECDSA signatures, and IPFS content identifiers. Methods include contract development, deployment on a local Hardhat network, performance benchmarking, and front-end integration for user interaction. Results show that verifier deployment consumes approximately 1.19 million gas (≈$85 at 20 gwei), document submission around 85 thousand gas (≈$6), and revocation about 50 thousand gas (≈$3.50). Client-side operations such as hashing and IPFS pinning occur in under 50 milliseconds, while real-world blockchain confirmations take 10–30 seconds. The findings demonstrate that decentralized notarization using Ethereum and IPFS is both technically feasible and economically viable. Future enhancements, including Layer 2 rollups, batch notarization, and privacy-preserving features such as encrypted IPFS pinning or zero-knowledge proofs, are proposed to further improve scalability, cost-efficiency, and data confidentiality.
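The submit/revoke/verify flow can be sketched off-chain as a hash registry; the dict below stands in for the smart-contract mapping, and Python's `hashlib.sha3_256` stands in for Ethereum's keccak-256 (the two differ in padding, so these digests would not match on-chain values). Signatures and IPFS pinning are omitted; this illustrates the contract logic only:

```python
import hashlib
import time

class Notary:
    """Off-chain sketch of the notarization contract's state machine."""

    def __init__(self):
        self.records = {}  # digest -> {"owner", "revoked", "ts"}

    def submit(self, document: bytes, owner: str) -> str:
        # sha3_256 stands in for keccak-256 here (see lead-in caveat)
        digest = hashlib.sha3_256(document).hexdigest()
        self.records[digest] = {"owner": owner, "revoked": False,
                                "ts": time.time()}
        return digest

    def revoke(self, digest: str, owner: str) -> None:
        rec = self.records.get(digest)
        if rec and rec["owner"] == owner:   # only the submitter may revoke
            rec["revoked"] = True

    def verify(self, document: bytes) -> bool:
        rec = self.records.get(hashlib.sha3_256(document).hexdigest())
        return bool(rec) and not rec["revoked"]
```

Because verification recomputes the hash from the presented bytes, any single-byte tampering yields a digest absent from the registry, which is the tamper-resistance property the paper relies on.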
Performance Analysis of Machine Learning Model Combination for Spaceship Titanic Classification using Voting Classifier Wirawan, Haria; Robet, Robet; Hendrik, Jackri
JIKO (Jurnal Informatika dan Komputer) Vol 8, No 3 (2025)
Publisher : Universitas Khairun

DOI: 10.33387/jiko.v8i3.10866

Abstract

The Spaceship Titanic dataset is fictional yet complex and challenging, featuring a mix of numerical and categorical features as well as missing values. This study evaluates three machine learning model scenarios for classifying passenger status as “Transported” or not. The three scenarios are linear-like models, a combination of the Top 5 Diverse models, and tree-based/ensemble models, each using a voting-classifier approach. A voting model is employed because it combines the strengths of multiple algorithms to reduce bias and variance, improving overall prediction accuracy and stability. The voting mechanism aggregates predictions from several base classifiers using two strategies: hard voting, which selects the majority class, and soft voting, which averages the predicted probabilities across models. The dataset was obtained from Kaggle and processed through data preprocessing, data splitting, model training, and evaluation. The results show that the tree-based/ensemble scenario achieved the highest accuracy of 90.38%, followed by the Top 5 Diverse combination at 87.31% and the linear-like models at 76.51%. Visualization using the confusion matrix, ROC curve, and feature-importance analysis further supports the claim that ensemble models are better at detecting complex classification patterns. These findings suggest that tree-based ensemble models provide the most effective approach for classification tasks on datasets like Spaceship Titanic.
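The two voting strategies described above can be sketched directly; the example labels and probability vectors are illustrative, not outputs of the paper's trained models:

```python
from collections import Counter

def hard_vote(predictions):
    """Hard voting: the majority class across base classifiers' labels."""
    return Counter(predictions).most_common(1)[0][0]

def soft_vote(probabilities):
    """Soft voting: average predicted probabilities per class, take the argmax."""
    n = len(probabilities)
    n_classes = len(probabilities[0])
    avg = [sum(p[c] for p in probabilities) / n for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three hypothetical base classifiers voting on one passenger.
label = hard_vote(["Transported", "Transported", "Not"])
cls = soft_vote([[0.4, 0.6], [0.7, 0.3], [0.45, 0.55]])  # averaged probs favor class 0
```

Note the two strategies can disagree: in the soft-vote example two of three models individually prefer class 1, yet the averaged probabilities pick class 0 because the dissenting model is more confident.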
Comparative Analysis of Loss Functions for Predicting Autoimmunity from Molecular Descriptors Using Deep Learning Gunawan, Candra; Robet, Robet; Hendri, Hendri
Building of Informatics, Technology and Science (BITS) Vol 7 No 3 (2025): December 2025
Publisher : Forum Kerjasama Pendidikan Tinggi

DOI: 10.47065/bits.v7i3.8581

Abstract

Drug-induced autoimmunity (DIA) presents a complex obstacle in pharmacological safety due to its rare occurrence and unpredictable manifestation, often compounded by class imbalance in clinical datasets. This study investigates the influence of three loss functions, Binary Cross-Entropy (BCE), Focal Loss, and Dice Loss, on the performance of deep learning architectures comprising Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and 2-Layer Neural Network (SimpleNN). Models were trained using numerical molecular descriptors from the publicly available DIA dataset. The architectures were chosen based on their complementary properties: MLP is suitable for high-dimensional tabular descriptor data, CNN was examined to explore whether 1D convolutions can capture localized feature interactions among correlated descriptors, and 2-Layer Neural Network served as a lightweight baseline for comparison. A stratified 5-fold cross-validation strategy was employed to ensure statistical robustness. The results demonstrate that the MLP model, optimized with Focal Loss, consistently delivered the highest classification performance, achieving average scores of 94% accuracy, 93% precision, 95% recall, 94% F1-score, and an AUC of 0.97. In contrast, CNN and SimpleNN architectures yielded less favorable outcomes under the same loss configurations. These findings highlight the importance of aligning loss function choice with model complexity in the context of imbalanced biomedical data. The insights from this work contribute to the development of more reliable computational frameworks for early-phase immunogenicity screening and support the advancement of precision pharmacovigilance strategies.
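Focal Loss, the best-performing loss above, addresses class imbalance by down-weighting easy, confidently classified examples relative to binary cross-entropy. A per-example sketch follows; the gamma and alpha defaults are the common convention from the focal-loss literature and may differ from the paper's settings:

```python
import math

def bce(y, p, eps=1e-12):
    """Binary cross-entropy for one example with true label y in {0, 1}."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def focal_loss(y, p, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss for one example.
    The (1 - pt)^gamma factor shrinks the loss of easy examples;
    with gamma=0 and alpha=1 it reduces exactly to BCE."""
    p = min(max(p, eps), 1 - eps)
    pt = p if y == 1 else 1 - p          # probability of the true class
    a = alpha if y == 1 else 1 - alpha   # class-balancing weight
    return -a * (1 - pt) ** gamma * math.log(pt)
```

For a rare-positive problem like DIA, this keeps abundant easy negatives from dominating the gradient, which is consistent with Focal Loss outperforming plain BCE in the reported results.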
COMPARISON OF DECISION TREE AND RANDOM FOREST ALGORITHMS FOR ASTHMA Lase, Wisriani; Robet, Robet; Hendri, Hendri
JURTEKSI (jurnal Teknologi dan Sistem Informasi) Vol. 12 No. 1 (2025): Desember 2025
Publisher : Lembaga Penelitian dan Pengabdian Kepada Masyarakat (LPPM) STMIK Royal Kisaran

DOI: 10.33330/jurteksi.v12i1.4192

Abstract

Asthma is a chronic respiratory disease that affects millions of people worldwide, making early detection crucial to prevent complications. This study compares the performance of the Decision Tree and Random Forest algorithms in classifying asthma from clinical symptom data. The data were processed through feature-selection and model-training stages, then evaluated using accuracy, precision, recall, and F1-score. The experimental analysis revealed that Random Forest surpassed the Decision Tree on all metrics, achieving 95.19% accuracy, 90.43% precision, 95.00% recall, and 93.00% F1-score. In contrast, the Decision Tree obtained 89.14% accuracy, 90.60% precision, 88.70% recall, and 89.70% F1-score. These results suggest that Random Forest is more robust and dependable, especially on complex and imbalanced medical datasets. Keywords: asthma detection; decision tree; random forest; machine learning.
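The two ingredients that separate the compared models, impurity-based splitting for a single tree and bootstrap aggregation for the forest, can each be sketched in a few lines; the toy labels and sample sizes are illustrative, not the paper's data:

```python
import random

def gini(labels):
    """Gini impurity of a label set: the split criterion a decision tree minimizes."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def bootstrap(rows, rng):
    """Sample with replacement: each tree in a random forest trains on one such draw."""
    return [rng.choice(rows) for _ in rows]

rng = random.Random(42)
pure, mixed = gini([1, 1, 1, 1]), gini([1, 1, 0, 0])  # 0.0 vs 0.5
sample = bootstrap(list(range(10)), rng)              # one tree's training draw
```

Averaging many trees grown on different bootstrap draws is what gives the forest its lower variance on noisy clinical features, consistent with its advantage over the single Decision Tree reported above.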