Found 32 Documents

Image Encryption using Half-Inverted Cascading Chaos Cipheration Setiadi, De Rosal Ignatius Moses; Robet, Robet; Pribadi, Octara; Widiono, Suyud; Sarker, Md Kamruzzaman
Journal of Computing Theories and Applications Vol. 1 No. 2 (2023): JCTA 1(2) 2023
Publisher : Universitas Dian Nuswantoro

DOI: 10.33633/jcta.v1i2.9388

Abstract

This research introduces an image encryption scheme that combines several permutation- and substitution-based chaotic techniques, namely the Arnold Chaotic Map, 2D-SLMM, 2D-LICM, and 1D-MLM. The proposed method, called Half-Inverted Cascading Chaos Cipheration (HIC3), is designed to increase the security and confidentiality of digital images. The main problem addressed is the degree of confusion and diffusion in the encrypted image. Extensive testing included chi-square analysis, information entropy, NPCR, UACI, adjacent-pixel correlation, key sensitivity and key space analysis, NIST randomness testing, robustness testing, and visual analysis. The results show that HIC3 effectively protects digital images from various attacks and maintains their integrity, successfully achieving its goal of increasing security in digital image encryption.
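The permutation stage of such a scheme can be illustrated with the Arnold cat map, which the abstract lists among the techniques used. A minimal pure-Python sketch (the actual HIC3 cascade combines this with 2D-SLMM, 2D-LICM, and 1D-MLM substitution steps, which are not reproduced here):

```python
def arnold_cat_map(image, iterations=1):
    """Scramble pixel positions of a square N x N image with the Arnold cat map.

    Each coordinate (x, y) moves to ((x + y) mod N, (x + 2y) mod N).
    The map is a bijection on the coordinate grid, so pixel values are
    rearranged without any loss: decryption simply inverts the map.
    """
    n = len(image)
    for _ in range(iterations):
        scrambled = [[None] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n][(x + 2 * y) % n] = image[x][y]
        image = scrambled
    return image
```

Because the map is periodic, repeatedly applying it eventually restores the original image; a cipher therefore pairs such permutation rounds with chaotic substitution of the pixel values themselves.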
Implementation of Deep Learning Model for Classification of Household Trash Image Robet, Robet; Perangin Angin, Johanes Terang Kita; Pribadi, Octara
Sinkron : jurnal dan penelitian teknik informatika Vol. 8 No. 4 (2024): Article Research Volume 8 Issue 4, October 2024
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v8i4.14198

Abstract

Household waste management is a pressing issue today: rapid urbanization, a consumptive culture, and the tendency to dispose of waste without sorting it at home keep increasing the volume of waste in landfills. Household waste therefore needs to be managed quickly and appropriately so that it does not cause major environmental, hygiene, and health problems. Although some environmental communities and local governments have tried to manage waste through recycling systems, relying on human labor in the long term is inefficient, expensive, and harmful to workers' health. Utilizing artificial intelligence to classify waste types quickly and accurately is therefore an attractive solution. This research tests several pre-trained convolutional neural network (CNN) models for this classification task. Among the models tested (AlexNet, VGG16, VGG19, ResNet50, and ResNeXt50), the pre-trained ResNeXt50 performed best with 100% accuracy and training and validation losses of 0.0414 and 0.0304, respectively. The second-best model was the pre-trained ResNet50, also with 100% accuracy but with higher training and validation losses of 0.0832 and 0.1077, respectively.
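The model comparison described above amounts to ranking candidates by accuracy and then breaking ties with the lower validation loss. A small sketch using the two best models' figures reported in the abstract (the training of the pre-trained CNNs themselves is not reproduced here):

```python
def rank_models(results):
    """Order candidate models by accuracy (descending), breaking ties
    with the lower validation loss, as in the comparison above."""
    return sorted(results, key=lambda r: (-r["accuracy"], r["val_loss"]))

# Figures reported in the abstract for the two best models.
reported = [
    {"name": "ResNet50",  "accuracy": 1.00, "val_loss": 0.1077},
    {"name": "ResNeXt50", "accuracy": 1.00, "val_loss": 0.0304},
]
```

With both models at 100% accuracy, the lower validation loss is what makes ResNeXt50 the winner.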
Akurasi K-Means dengan Menggunakan Cluster dan Titik Grid Terbaik pada Pemetaan Grid Interatif K-Means Perangin Angin, Johanes Terang Kita; Rizkita, Ari; Robet, Robet; Pribadi, Octara
METHOMIKA: Jurnal Manajemen Informatika & Komputerisasi Akuntansi Vol. 9 No. 1 (2025): METHOMIKA: Jurnal Manajemen Informatika & Komputerisasi Akuntansi
Publisher : Universitas Methodist Indonesia

DOI: 10.46880/jmika.Vol9No1.pp127-129

Abstract

Traditional K-Means faces two main problems: determination of the initial centroids and poor initial clusters. Choosing initial centroids at random is one of the main weaknesses of classical K-Means, resulting in low accuracy and long computation times. Likewise, selecting a centroid for each cluster without considering that cluster's performance can degrade the accuracy obtained. This study contributes by combining the determination of good initial centroids with the use of good clusters. Good initial centroids are obtained with Grid Mapping K-Means, which divides centroid determination across several grid points. The result is a combination of Iterative K-Means with Grid Mapping K-Means into Iterative Grid Mapping K-Means, which obtains both good initial centroids and good clusters, as shown in the result tables for the Iris and Abalone datasets; a comparison of the variables in Iris and Abalone shows how they affect the best resulting cluster.
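The grid-based initialization can be sketched as placing candidate centroids on evenly spaced grid points spanning the data range, instead of sampling random points. A minimal illustration, assuming centroids placed along the diagonal of the data's bounding box (the paper's exact grid construction and cluster-quality evaluation are not specified in the abstract):

```python
def grid_initial_centroids(points, k):
    """Place k candidate initial centroids on evenly spaced grid points
    spanning the data's bounding box, instead of the random sampling
    used by classical K-Means. Sketch only: centroids are spread along
    the bounding-box diagonal."""
    dims = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dims)]
    hi = [max(p[d] for p in points) for d in range(dims)]
    return [
        [lo[d] + (i + 1) * (hi[d] - lo[d]) / (k + 1) for d in range(dims)]
        for i in range(k)
    ]
```

Deterministic grid points remove the run-to-run variance of random initialization, which is the weakness the abstract attributes to classical K-Means.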
Integrating Hybrid Statistical and Unsupervised LSTM-Guided Feature Extraction for Breast Cancer Detection Setiadi, De Rosal Ignatius Moses; Ojugo, Arnold Adimabua; Pribadi, Octara; Kartikadarma , Etika; Setyoko, Bimo Haryo; Widiono, Suyud; Robet, Robet; Aghaunor, Tabitha Chukwudi; Ugbotu, Eferhire Valentine
Journal of Computing Theories and Applications Vol. 2 No. 4 (2025): JCTA 2(4) 2025
Publisher : Universitas Dian Nuswantoro

DOI: 10.62411/jcta.12698

Abstract

Breast cancer is the most prevalent cancer among women worldwide, requiring early and accurate diagnosis to reduce mortality. This study proposes a hybrid classification pipeline that integrates Hybrid Statistical Feature Selection (HSFS) with unsupervised LSTM-guided feature extraction for breast cancer detection using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. Initially, 20 features were selected using HSFS based on Mutual Information, Chi-square, and Pearson Correlation. To address class imbalance, the training set was balanced using the Synthetic Minority Over-sampling Technique (SMOTE). Subsequently, an LSTM encoder extracted non-linear latent features from the selected features. A fusion strategy was applied by concatenating the statistical and latent features, followed by re-selection of the top 30 features. The final classification was performed using a Support Vector Machine (SVM) with RBF kernel and evaluated using 5-fold cross-validation and a held-out test set. Experimental results showed that the proposed method achieved an average training accuracy of 98.13%, F1-score of 98.13%, and AUC-ROC of 99.55%. On the held-out test set, the model reached an accuracy of 99.30%, precision of 100%, and F1-score of 99.05%, with an AUC-ROC of 0.9973. The proposed pipeline demonstrates improved generalization and interpretability compared to existing methods such as LightGBM-PSO, DHH-GRU, and ensemble deep networks. These results highlight the effectiveness of combining statistical selection and LSTM-based latent feature encoding in a balanced classification framework.
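One of the three statistical filters in HSFS, Pearson correlation, can be sketched as ranking feature columns by the absolute correlation |r| with the label and keeping the top k (the mutual-information and chi-square scores, and the later LSTM fusion, are omitted; the helper names are illustrative):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_top_features(feature_columns, labels, k):
    """Rank feature columns by |Pearson r| with the label; return the
    indices of the k strongest features (one HSFS filter, sketched)."""
    ranked = sorted(range(len(feature_columns)),
                    key=lambda i: -abs(pearson(feature_columns[i], labels)))
    return ranked[:k]
```

In the full pipeline described above, this kind of filter score is combined with mutual information and chi-square before SMOTE balancing and LSTM encoding.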
Aplikasi Pencarian Bengkel Tambal Ban Dan Spbu Terdekat Di Kota Medan Menggunakan Metode Dijkstra Dan Haversine Berbasis Android Robet, Robet
Jurnal TIMES Vol 10 No 1 (2021): Jurnal TIMES
Publisher : STMIK TIME

DOI: 10.51351/jtm.10.1.2021633

Abstract

The means of transport most widely owned and used by the people of Medan today is the motorcycle. Although the rapid increase in the number of motorcycles has raised various economic, social, and environmental problems, motorcycles remain affordable and support business, work, and educational activities that require travel from one place to another. Despite their high mobility and reach, motorcycles have drawbacks: tires puncture easily, and because motorcycle fuel tanks are generally small, riders easily run out of fuel and are forced to stop and push their vehicles. In practice, riders who suffer a flat tire or run out of fuel usually search for a tire-repair shop or gas station (SPBU) conventionally. The problem is that the size of Medan makes it difficult to check tire-repair shops or gas stations one by one, especially at night when few of them are open. Searching for these places is already possible through Google Maps, but Google Maps cannot yet recommend the tire-repair shop or gas station nearest to the rider's position. Research is therefore needed to build an Android-based application for finding tire-repair shops and gas stations that applies the Dijkstra and Haversine methods to recommend the nearest such locations in Medan.
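The Haversine part of such an application computes great-circle distance from GPS coordinates, which can then rank candidate locations (Dijkstra would refine this over the actual road network). A minimal sketch with illustrative coordinates, not real shop locations:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS points,
    using the haversine formula with Earth radius 6371 km."""
    rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((rlat2 - rlat1) / 2) ** 2
         + cos(rlat1) * cos(rlat2) * sin((rlon2 - rlon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def nearest(position, places):
    """Recommend the place with the smallest straight-line distance
    from the rider's (lat, lon) position."""
    return min(places, key=lambda p: haversine_km(*position, p["lat"], p["lon"]))
```

Haversine gives a fast straight-line ranking; a road-network shortest path via Dijkstra can then confirm which candidate is actually closest to reach.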
PERANCANGAN WEBSITE E-COMMERCE MULTI CABANG PADA PT. PASAR SWALAYAN MAJU BERSAMA MENGGUNAKAN ALGORITMA JACCARD COEFFICIENT Andy, Andy; Agus Maringan Siahaan; Satriya Miharja; Robet, Robet; Didik Aryanto
Majalah Ilmiah METHODA Vol. 14 No. 1 (2024): Majalah Ilmiah METHODA
Publisher : Universitas Methodist Indonesia

DOI: 10.46880/methoda.Vol14No1.pp25-32

Abstract

PT. Pasar Swalayan Maju Bersama is a company in the supermarket sector with three branches: Maju Bersama Glugur, Maju Bersama Merak Jingga, and Maju Bersama Marendal. In practice, however, the company has not made good use of marketing media, either internally or externally. Internally, it has not been able to properly integrate the sales processes of its three branches. A further problem concerns the volume of transaction data in the company's storage: data recorded every day will burden storage if it is not turned into useful knowledge for the company. Given these problems, a multi-branch system implemented as an E-Commerce website needs to be developed. This research also implements the Jaccard Coefficient algorithm to process the company's data into product recommendations for customers. The results show that the Jaccard Coefficient algorithm is capable of processing company data into knowledge in the form of product recommendations that are relevant to customers.
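The Jaccard Coefficient measures the overlap of two item sets, |A ∩ B| / |A ∪ B|, and can drive a simple recommender: find the baskets most similar to a customer's and suggest their remaining items. A sketch (the paper's exact recommendation rule is not specified in the abstract, so the `recommend` logic here is an assumption):

```python
def jaccard(a, b):
    """Jaccard coefficient of two sets: |A intersect B| / |A union B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(customer_items, other_baskets, top_n=2):
    """Suggest items from the historical baskets most similar to the
    customer's current basket (illustrative recommendation rule)."""
    ranked = sorted(other_baskets, key=lambda b: -jaccard(customer_items, b))
    suggestions = []
    for basket in ranked:
        for item in sorted(basket - customer_items):
            if item not in suggestions:
                suggestions.append(item)
    return suggestions[:top_n]
```

Because Jaccard only needs set intersections and unions, it scales well over the large transaction logs the abstract describes.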
Attention Augmented Deep Learning Model for Enhanced Feature Extraction in Cacao Disease Recognition Robet, Robet; Perangin Angin, Johanes Terang Kita; Siregar, Tarq Hilmar
Sinkron : jurnal dan penelitian teknik informatika Vol. 9 No. 4 (2025): Articles Research October 2025
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v9i4.15249

Abstract

Accurate cacao disease recognition is critical for safeguarding yields and reducing losses. Prior cacao studies primarily rely on handcrafted descriptors (e.g., Color Histogram, LBP, GLCM) or standard CNN/transfer-learning pipelines, often limited to ≤ 3 classes and a single plant organ; explicit channel-spatial attention and comprehensive multiclass evaluation remain uncommon. To the best of our knowledge, no prior work integrates Squeeze-and-Excitation (SE) and the Convolutional Block Attention Module (CBAM) on a ResNeXt50 backbone for six-class cacao disease classification, accompanied by a standardized ablation study and t-SNE-based interpretability. We propose a six-class classifier (five diseases + healthy) built on ResNeXt50 enhanced with SE (channel recalibration) and CBAM (channel-spatial emphasis) to highlight lesion-relevant cues. The dataset comprises labeled leaf and pod images from public sources collected under field-like conditions; preprocessing includes resizing to 224x224, normalization, and augmentation (flips, small rotations, color jitter, random resized crops). Trained with Adam and early stopping, ResNeXt50+SE+CBAM attains 97% test accuracy and 0.97 macro-F1, surpassing a ResNeXt50 baseline of 94% accuracy and 0.95 macro-F1 as well as the SE-only and CBAM-only variants. Confusion-matrix and t-SNE analyses show fewer mix-ups among visually similar classes and clearer separability, while the ablation validates the complementary benefits of SE and CBAM. On a desktop-hosted, web-based setup, batch-1 inference at 224x224 takes 7.46 ms/image (134 FPS), demonstrating real-time capability. The findings support deployment as browser-based decision-support tools for farmers and integration into continuous field-monitoring systems.
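The SE ("squeeze-and-excitation") operation described above pools each channel to a single descriptor, passes the descriptors through a small gating network, and rescales the channels accordingly. A minimal pure-Python sketch with illustrative (untrained) weights, not the paper's implementation:

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def squeeze(channel):
    """Global average pooling over one HxW feature map."""
    return sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))

def excite(descriptors, w1, w2):
    """Two-layer bottleneck: ReLU hidden layer, then one sigmoid gate
    per channel. w1 and w2 are illustrative weight matrices."""
    hidden = [max(0.0, sum(w * d for w, d in zip(row, descriptors))) for row in w1]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]

def se_block(channels, w1, w2):
    """Rescale each channel by its gate, emphasizing informative channels."""
    gates = excite([squeeze(c) for c in channels], w1, w2)
    return [[[g * v for v in row] for row in ch] for g, ch in zip(gates, channels)]
```

CBAM extends this idea with an additional spatial attention map; combined, they let the backbone focus on lesion-relevant channels and regions.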
Implementasi Chatbot Otomatis Akademik Berbasis Web Menggunakan LLM dan Rule-Based System Studi Kasus: STMIK Time Alvin, Alvin; Robet, Robet; Tarigan, Feriani Astuti
JURNAL INFORMATIKA DAN KOMPUTER Vol 9, No 3 (2025): Oktober 2025
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat - Universitas Teknologi Digital Indonesia

DOI: 10.26798/jiko.v9i3.2209

Abstract

The development of artificial intelligence (AI) technology has driven more efficient information services through chatbots that can interact naturally with users. A chatbot is an application of natural language processing (NLP) designed to understand and respond to human conversation. This research aims to develop a web-based automatic chatbot for STMIK Time academic services by integrating two main approaches: a Rule-Based System and a Large Language Model (LLM). The method covers system design using Flowise AI as the workflow-automation platform, followed by two stages of testing: black-box testing to assess functionality and user-experience testing to measure perceived speed, accuracy, and satisfaction. The tests show that the chatbot works well, with a response-speed score of 51.5%, an accuracy score of 54.5%, and a user-satisfaction score of 60.6%. These results confirm that the system can deliver academic information services effectively, although speed and responsiveness still need improvement. In conclusion, the Rule-Based System is better suited to structured conversations, whereas the LLM is more effective in dynamic contexts. This research is expected to serve as a basis for developing more adaptive and responsive AI-based academic services in higher education.
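The hybrid routing described above can be sketched as: answer structured questions from a rule table and hand everything else to the LLM backend. The rules and the `call_llm` stub below are illustrative stand-ins (the deployed system uses a Flowise AI workflow, not reproduced here):

```python
# Illustrative rules, not the deployed rule base.
RULES = {
    "jadwal kuliah": "Class schedules are published on the academic portal.",
    "biaya kuliah": "Tuition information is available at the administration office.",
}

def call_llm(question):
    """Stub for the LLM backend; the real system routes this through
    a Flowise AI workflow to a Large Language Model."""
    return f"[LLM] generated answer for: {question}"

def answer(question):
    """Rule-based first for structured queries; LLM fallback for the
    dynamic, unmatched ones."""
    q = question.lower()
    for pattern, response in RULES.items():
        if pattern in q:
            return response
    return call_llm(question)
```

This split matches the abstract's conclusion: deterministic rules handle structured conversations cheaply, while the LLM covers open-ended context.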
Comparative Analysis of DNA Sequence Alignment Algorithms in SARS-CoV-2 Edi, Edi; Robet, Robet; Harahap, Nurhayati
Sinkron : jurnal dan penelitian teknik informatika Vol. 9 No. 4 (2025): Articles Research October 2025
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v9i4.15323

Abstract

Sequence alignment is fundamental in bioinformatics, with the Smith-Waterman (local) and Needleman-Wunsch (global) algorithms widely applied. However, comparative analyses on highly similar viral genomes such as SARS-CoV-2 remain scarce. This study systematically evaluated both algorithms using the first 5,000 nucleotides of two SARS-CoV-2 genomes (29,903 and 29,684 nt) under four parameter configurations: standard, low gap penalty, high gap penalty, and high match reward. Performance was assessed through alignment score, sequence identity, gap distribution, execution time, and parameter sensitivity. Both algorithms produced identical sequence identity (97.80%), with 4,943 matches out of 5,054 positions. Smith-Waterman consistently yielded higher alignment scores (an advantage of 12.6-112 points), while Needleman-Wunsch was substantially faster (0.7752 vs 3.9014 s), a 5.03-fold gain in computational efficiency. These findings indicate that both methods are reliable for highly similar viral sequences, with a trade-off between scoring precision and computational speed. This study provides the first parameter-sensitive comparison for SARS-CoV-2 genomes, emphasizing how parameter tuning can influence performance outcomes. A key limitation is that the analysis was restricted to the first 5,000 nucleotides, which may not capture variability across the complete genome.
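The global variant can be sketched as the standard Needleman-Wunsch dynamic program over a score matrix. The match/mismatch/gap values below are illustrative defaults, not the paper's four parameter configurations:

```python
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global-alignment score of sequences a and b via the
    Needleman-Wunsch dynamic program (O(len(a) * len(b)) time)."""
    # First row: aligning a prefix of b against an empty a costs only gaps.
    rows = [[j * gap for j in range(len(b) + 1)]]
    for i in range(1, len(a) + 1):
        row = [i * gap]  # first column: gaps against an empty b
        for j in range(1, len(b) + 1):
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = rows[i - 1][j] + gap      # gap in b
            left = row[j - 1] + gap        # gap in a
            row.append(max(diag, up, left))
        rows.append(row)
    return rows[-1][-1]
```

Smith-Waterman differs only in clamping each cell at zero and taking the matrix maximum, which is what lets it find the best local region at some extra bookkeeping cost; the gap and match parameters are exactly the knobs the study's four configurations vary.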