Contact Name
Hairani
Contact Email
matrik@universitasbumigora.ac.id
Phone
+6285933083240
Journal Mail Official
matrik@universitasbumigora.ac.id
Editorial Address
Jl. Ismail Marzuki-Cilinaya-Cakranegara-Mataram 83127
Location
Kota Mataram,
Nusa Tenggara Barat
INDONESIA
MATRIK : Jurnal Manajemen, Teknik Informatika, dan Rekayasa Komputer
Published by Universitas Bumigora
ISSN : 1858-4144     EISSN : 2476-9843     DOI : 10.30812/matrik
Core Subject : Science
MATRIK is a scientific journal of Universitas Bumigora Mataram (formerly STMIK Bumigora Mataram), managed under the Institute for Research and Community Service (LPPM). The journal aims to provide a publication venue for lecturers, researchers, and practitioners both inside and outside Universitas Bumigora Mataram. MATRIK is published twice a year, in the even period (May) and the odd period (November).
Articles 420 Documents
Accuracy of K-Nearest Neighbors Algorithm Classification For Archiving Research Publications Muhamad Nur Gunawan; Titi Farhanah; Siti Ummi Masruroh; Ahmad Mukhlis Jundulloh; Nafdik Zaydan Raushanfikar; Rona Nisa Sofia Amriza
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 23 No. 3 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v23i3.3915

Abstract

The Archives and Research Publication Information System plays an important role in managing academic research and scientific publications efficiently. With the increasing volume of research and publications produced each year by university researchers, such a system is essential for organizing and processing these materials. However, managing large amounts of data poses challenges, including the need to accurately classify a researcher's field of study. To overcome these challenges, this research focuses on implementing the K-Nearest Neighbors classification algorithm in the Archives and Research Publications Information System application. The research aims to improve the accuracy of the classification system and facilitate better decision-making in the management of academic research. The method is systematic, involving data acquisition, pre-processing, algorithm implementation, and evaluation. The results show that integrating Chi-Square feature selection significantly improves K-Nearest Neighbors performance, achieving 86% precision, 84.3% recall, 89.2% F1 score, and 93.3% accuracy. This research contributes to increasing the efficiency of the Archives and Research Publication Information System in managing research and academic publications.
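The classification step described above can be sketched in miniature. The following is an illustrative pure-Python K-Nearest Neighbors majority vote on invented two-feature vectors, not the paper's dataset or its Chi-Square selection pipeline:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Invented 2-D "document feature" vectors labelled by research field.
train_X = [(1, 0), (2, 1), (8, 9), (9, 8)]
train_y = ["networking", "networking", "data-mining", "data-mining"]

print(knn_predict(train_X, train_y, (8, 8)))  # nearest neighbours are data-mining
```

In the paper, Chi-Square feature selection would first rank and prune the input features before a vote like this is taken.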
K-Means Optimization Algorithm to Improve Cluster Quality on Sparse Data Yully Sofyah Waode; Anang Kurnia; Yenni Angraini
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 23 No. 3 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v23i3.3936

Abstract

The aim of this research is to cluster sparse data using various K-Means optimization algorithms. The sparse data used in this research came from Citampi Stories game reviews on the Google Play Store. The methods are Density-Based Spatial Clustering of Applications with Noise K-Means (DB-Kmeans), Particle Swarm Optimization K-Means (PSO-Kmeans), and Robust Sparse K-Means Clustering (RSKC), evaluated using the silhouette score. Clustering sparse data presents a challenge, as sparsity can complicate the analysis process and lead to suboptimal or non-representative results. To address this challenge, the research divided the data based on the number of terms in three different scenarios to reduce sparsity. The results show that DB-Kmeans has the potential to enhance clustering quality across most data scenarios. Additionally, the research found that dividing data based on the number of terms can effectively mitigate sparsity, significantly influencing the optimization of topic formation within each cluster. The conclusion is that this approach is effective in enhancing the quality of clustering for sparse data, providing more diverse and easily interpretable information. The results could be valuable for developers seeking to understand user preferences and enhance game quality.
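The silhouette score used to compare DB-Kmeans, PSO-Kmeans, and RSKC measures, for each point, how close it is to its own cluster (a) versus the nearest other cluster (b). A self-contained sketch on invented toy points (not the review data):

```python
import math

def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point."""
    scores = []
    for i, p in enumerate(points):
        # a: mean distance to the other points in the same cluster
        same = [math.dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same)
        # b: mean distance to the nearest foreign cluster
        others = {}
        for j, q in enumerate(points):
            if labels[j] != labels[i]:
                others.setdefault(labels[j], []).append(math.dist(p, q))
        b = min(sum(d) / len(d) for d in others.values())
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(round(silhouette(pts, [0, 0, 1, 1]), 3))  # well-separated clusters score near 1
```

Scores close to 1 indicate compact, well-separated clusters; values near 0 or below indicate overlap, which is the failure mode sparsity tends to produce.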
Characterizing Hardware Utilization on Edge Devices when Inferring Compressed Deep Learning Models Ahmad Naufal Labiib Nabhaan; Rakandhiya Daanii Rachmanto; Arief Setyanto
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 24 No. 1 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v24i1.3938

Abstract

Implementing edge AI involves running AI algorithms near the sensors. Deep Learning (DL) models have successfully tackled image classification tasks with remarkable performance. However, their demand for large computing resources hinders implementation on edge devices. Compressing the model is essential to allow DL models to run on edge devices. Post-training quantization (PTQ) is a compression technique that reduces the bit representation of the model's weight parameters. This study looks at the impact of memory allocation on the latency of compressed DL models on the Raspberry Pi 4 Model B (RPi4B) and NVIDIA Jetson Nano (J. Nano). It aims to understand hardware utilization of the central processing unit (CPU), graphics processing unit (GPU), and memory. The study uses a quantitative method that controls memory allocation; measures warm-up time, latency, and CPU and GPU utilization; and compares the inference speed of DL models on the RPi4B and J. Nano. We observe the correlation between hardware utilization and the various DL inference latencies. According to our experiments, smaller memory allocation leads to higher latency on both the RPi4B and J. Nano. CPU utilization on the RPi4B increases along with memory allocation; the opposite holds on the J. Nano, since the GPU carries out the main computation on that device. Regarding computation, a smaller DL model size and smaller bit representation lead to faster inference (lower latency), while a larger bit representation of the same DL model leads to higher latency.
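Post-training quantization as described, shrinking the bit representation of weight parameters, can be illustrated with a single-scale symmetric int8 scheme. This is a simplified sketch on invented weights, not the authors' toolchain:

```python
def quantize_int8(weights):
    """PTQ sketch: map float weights to int8 codes with one shared scale."""
    scale = max(abs(w) for w in weights) / 127  # largest weight maps to +/-127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights used at inference time."""
    return [qi * scale for qi in q]

w = [0.51, -1.27, 0.02, 0.89]          # invented float32-style weights
q, s = quantize_int8(w)
print(q)                               # 8-bit codes: 4x smaller than float32
print(dequantize(q, s))                # approximate reconstruction
```

Each weight now occupies one byte instead of four, which is the size/latency win the study measures; the reconstruction error (bounded by half the scale) is the accuracy cost.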
Deep Learning Model Compression Techniques Performance on Edge Devices Rakandhiya Daanii Rachmanto; Ahmad Naufal Labiib Nabhaan; Arief Setyanto
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 23 No. 3 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v23i3.3961

Abstract

Artificial intelligence at the edge can help solve complex tasks faced by sectors such as automotive, healthcare, and surveillance. However, challenged by the limited computational power of edge devices, artificial intelligence models are forced to adapt. Many model compression approaches have been developed and quantified over the years to tackle this problem. However, few have considered the overhead of on-device model compression, even though compression can take a considerable amount of time. With this added metric, we provide a more complete view of the efficiency of model compression on the edge. The objective of this research is to identify the benefit of compression methods and their tradeoff between size and latency reduction versus accuracy loss, as well as compression time on edge devices. In this work, a quantitative method is used to analyze and rank three common model compression techniques, post-training quantization, unstructured pruning, and knowledge distillation, on the basis of accuracy, latency, model size, and time-to-compress overhead. We concluded that knowledge distillation is the best, with a potential of up to 11.4x model size reduction and 78.67% latency speed-up, with moderate loss of accuracy and compression time.
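Knowledge distillation, the best-ranked technique here, trains a small student network to match a teacher's temperature-softened output distribution. A minimal sketch of that soft-target loss, with invented logits and an illustrative temperature (not the paper's models):

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T spreads probability mass."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))

teacher = [8.0, 2.0, 1.0]
aligned = [7.5, 2.2, 0.9]   # student that tracks the teacher
opposed = [1.0, 6.0, 2.0]   # student that disagrees with the teacher
print(distillation_loss(aligned, teacher) < distillation_loss(opposed, teacher))
```

Minimizing this loss (usually blended with the ordinary hard-label loss) pushes the compact student toward the teacher's behavior, which is how the large size reduction can come with only moderate accuracy loss.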
Dynamic Weighted Particle Swarm Optimization - Support Vector Machine Optimization in Recursive Feature Elimination Feature Selection: Optimization in Recursive Feature Elimination Irma Binti Sya'idah; Sugiyarto Surono; Goh Khang Wen
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 23 No. 3 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v23i3.3963

Abstract

Feature selection is a crucial step in data preprocessing to enhance machine learning efficiency, reduce computational complexity, and improve classification accuracy. The main challenge in feature selection for classification is identifying the most relevant and informative subset to enhance prediction accuracy. Previous studies often produced suboptimal subsets, leading to poor model performance and low accuracy. This research aims to enhance classification accuracy by combining Recursive Feature Elimination (RFE) with the Dynamic Weighted Particle Swarm Optimization (DWPSO) and Support Vector Machine (SVM) algorithms. The method uses 12 datasets from the University of California, Irvine (UCI) repository, where features are selected via RFE and applied to the DWPSO-SVM algorithm. RFE iteratively removes the weakest features, constructing a model with the most relevant features to enhance accuracy. The findings indicate that DWPSO-SVM with RFE significantly improves classification accuracy. For example, accuracy on the Breast Cancer dataset increased from 58% to 76%, and on the Heart dataset from 80% to 97%. The highest accuracy achieved was 100% on the Iris dataset. The conclusion is that RFE in DWPSO-SVM offers consistent and balanced results in True Positive Rate (TPR) and True Negative Rate (TNR), providing reliable and accurate predictions for various applications.
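The RFE loop, repeatedly dropping the weakest feature until a target subset size remains, can be sketched as follows. For brevity, the SVM-based feature ranking is replaced here with a simple absolute-correlation score on invented data, so this illustrates only the elimination loop, not the paper's DWPSO-SVM:

```python
def feature_scores(X, y):
    """Score each feature by |Pearson correlation| with the target
    (a stand-in for SVM weight magnitudes used in real RFE)."""
    n = len(X)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        sx = sum((a - mx) ** 2 for a in col) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        scores.append(abs(cov / (sx * sy)) if sx and sy else 0.0)
    return scores

def rfe(X, y, n_keep):
    """Recursively drop the lowest-scoring feature until n_keep remain."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        sub = [[row[j] for j in active] for row in X]
        scores = feature_scores(sub, y)
        active.pop(scores.index(min(scores)))
    return active

X = [[1, 5, 0], [2, 3, 0], [3, 8, 1], [4, 1, 1]]
y = [0, 0, 1, 1]
print(rfe(X, y, 1))  # feature 2 equals the label exactly, so it survives
```

In the paper, the surviving subset is then fed to DWPSO-SVM, whose swarm search tunes the classifier on the reduced feature space.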
Variation of Distributed Power Control Algorithm in Co-Tier Femtocell Network Fatur Rahman Harahap; Anggun Fitrian Isnawati; Khoirun Ni'amah
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 24 No. 1 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v24i1.3992

Abstract

The wireless communication network has seen rapid growth, especially with the widespread use of smartphones, but resources are increasingly limited, especially indoors. Femtocell, a spectrum-efficient small cellular network solution, faces challenges in distributed power control (DPC) when deployed with distributed users, impacting power levels and causing interference in the main network. The aim of this research is to optimize user power consumption in co-tier femtocell networks through user power treatment. This study proposes Distributed Power Control (DPC) variations, namely Distributed Constrained Power Control (DCPC), Half Distributed Constrained Power Control (HDCPC), and Generalized Distributed Constrained Power Control (GDCPC), in a co-tier femtocell network. The research examines scenarios where user power converges but exceeds the maximum threshold or remains semi-feasible, considering factors such as the number of users, distance, channel usage, maximum power values, non-negative power vectors, Signal-to-Interference-plus-Noise Ratio (SINR), and link gain matrix values. In DPC, distance and channel utilization affect the feasibility conditions: feasible, semi-feasible, and non-feasible. The results show that HDCPC is more effective than DCPC in semi-feasible conditions due to its efficient power usage and similar SINR. HDCPC is also easier to implement than GDCPC, as it does not require user deactivation when the maximum power limit is exceeded. DPC variations can shift the power and SINR conditions from non-convergence to convergence at or below the maximum power level. We concluded that the best-performing DPC variation is HDCPC.
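The constrained-DPC idea, where each user scales its power toward a target SINR but caps it at a maximum, can be sketched with the classic distributed power-control iteration. The link gains, noise, and target below are illustrative values, not the paper's scenario:

```python
def dcpc_step(p, gains, noise, target_sinr, p_max):
    """One DCPC iteration: each user scales its power toward the SINR
    target using only its own measured SINR, capped at p_max."""
    new_p = []
    for i, pi in enumerate(p):
        interference = sum(gains[i][j] * p[j]
                           for j in range(len(p)) if j != i) + noise
        sinr = gains[i][i] * pi / interference
        new_p.append(min(p_max, pi * target_sinr / sinr))
    return new_p

gains = [[1.0, 0.1], [0.1, 1.0]]   # illustrative link-gain matrix
p = [0.5, 0.5]                     # initial transmit powers
for _ in range(50):
    p = dcpc_step(p, gains, noise=0.01, target_sinr=5.0, p_max=1.0)
print([round(x, 4) for x in p])    # converges below p_max: a feasible case
```

When the fixed point would exceed `p_max`, the cap makes users sit at the limit without reaching the target SINR (the semi-feasible case the paper studies); HDCPC and GDCPC differ in how they treat exactly those capped users.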
New Method for Identification and Response to Infectious Disease Patterns Based on Comprehensive Health Service Data Desi Vinsensia; Siskawati Amri; Jonhariono Sihotang; Hengki Tamando Sihotang
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 23 No. 3 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v23i3.4000

Abstract

Infectious diseases continue to pose a major threat to global public health and require early detection and effective response strategies. Despite advances in information technology and data analysis, the full potential of health data in identifying disease patterns and trends remains underutilised. This study aims to propose a comprehensive new mathematical model that utilises health data to identify infectious disease patterns and trends, exploring the potential of data-driven approaches in addressing the public health challenges associated with infectious diseases. The research methods used are exploratory data collection and analytical model development. The research produced mathematical models and algorithms that consider period, time, patterns, and trends of dangerous diseases, together with statistical analysis and recommendations. Data visualisation and in-depth analysis were conducted to improve the ability to respond to infectious disease threats and to provide better decision-making solutions for improving outbreak response and preparedness in addressing public health challenges. This research contributes to health practitioners and decision-makers.
Implementation of The Extreme Gradient Boosting Algorithm with Hyperparameter Tuning in Celiac Disease Classification Roudlotul Jannah Alfirdausy; Nurissaidah Ulinnuha; Wika Dianita Utami
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 24 No. 1 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v24i1.4031

Abstract

Celiac Disease (CeD) is an autoimmune disorder triggered by gluten consumption and involves the immune system and HLA in the intestine. The global incidence ranges from 0.5% to 1%, with only 30% of cases correctly diagnosed. Diagnosis remains challenging, requiring complex tests such as blood tests, small bowel biopsy, and elimination of gluten from the diet. Therefore, a faster and more efficient alternative is needed. Extreme Gradient Boosting (XGBoost), an ensemble machine learning technique that utilizes decision trees, was used to aid in the classification of Celiac disease. The aim of this study was to classify patients into six classes, namely potential, atypical, silent, typical, latent, and no disease, based on attributes such as blood test results, clinical symptoms, and medical history. The method employs 5-fold cross-validation to optimize the parameters max depth, n estimators, gamma, and learning rate. Experiments were conducted 96 times to find the best combination of parameters. The results show an improvement of 0.45% over the accuracy obtained with the default XGBoost parameters (98.19%). The best model was obtained with a max depth of 3, n estimators of 100, gamma of 0, and a learning rate of 0.3 and 0.5, yielding an accuracy of 98.64%, a sensitivity of 98.43%, and a specificity of 99.72%. This research shows that tuning the XGBoost parameters for Celiac
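The tuning protocol, a grid over max depth, n estimators, gamma, and learning rate evaluated with 5-fold cross-validation, can be sketched as follows. The grid values here are hypothetical, chosen only so that the product is 96 combinations as in the paper (the abstract reports only the best values, not the full grid):

```python
from itertools import product

def kfold_indices(n, k=5):
    """Split range(n) into k contiguous folds for cross-validation."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

# Hypothetical grid: 4 * 2 * 3 * 4 = 96 runs, matching the reported count.
grid = {
    "max_depth": [3, 5, 7, 9],
    "n_estimators": [100, 200],
    "gamma": [0, 1, 2],
    "learning_rate": [0.1, 0.3, 0.5, 0.7],
}
combos = list(product(*grid.values()))
print(len(combos))            # each combination would be scored by 5-fold CV
print(kfold_indices(10, 5))   # example fold layout for 10 samples
```

In the actual study, each of the 96 combinations would be fitted with XGBoost on four folds and scored on the held-out fold, averaging the five scores to pick the winner.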
Integration of Deep Learning and Autoregressive Models for Marine Data Prediction Mukhlis Mukhlis; Puput Yuniar Maulidia; Achmad Mujib; Adi Muhajirin; Alpi Surya Perdana
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 24 No. 1 (2024)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v24i1.4032

Abstract

Climate change and human activities significantly affect the dynamics of the marine environment, making accurate predictions essential for resource management and disaster mitigation. Deep learning models such as Long Short-Term Memory excel at capturing non-linear temporal patterns, while autoregressive models handle linear trends to improve prediction accuracy. This study aims to predict sea surface temperature, height, and salinity using deep learning, compared with Moving Average and Autoregressive Integrated Moving Average methods. The research methods include spatial gap analysis, temporal variability modeling, and oceanographic parameter prediction. The relationship between parameters is analyzed using the Pearson correlation method. The dataset is divided into 80% training and 20% test data, with prediction results compared between Long Short-Term Memory, Moving Average, and Autoregressive models. The results show that Long Short-Term Memory performs best, with a Root Mean Squared Error of 0.1096 and a Mean Absolute Error of 0.0982 for salinity at 13 sample points. In contrast, Autoregressive models produce a Root Mean Squared Error of 0.193 for salinity, 0.055 for sea surface height, and 2.504 for sea surface temperature, with a correlation coefficient of 0.6 between temperature and sea surface height. In conclusion, the Long Short-Term Memory model excels in predicting salinity because it is able to capture complex non-linear patterns, while Autoregressive models are more suitable for linear data trends and explain the relationship between parameters, although their accuracy is lower in salinity prediction. This approach
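The evaluation metrics used to compare the models, Root Mean Squared Error, Mean Absolute Error, and the Pearson correlation between parameters, are standard and can be computed as follows (the values below are invented toy numbers, not the study's data):

```python
import math

def rmse(actual, pred):
    """Root Mean Squared Error: penalizes large errors quadratically."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    """Mean Absolute Error: average magnitude of the errors."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def pearson(x, y):
    """Pearson correlation coefficient between two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

actual = [34.1, 34.3, 34.2, 34.5]   # invented salinity observations
pred = [34.0, 34.4, 34.1, 34.6]     # invented model predictions
print(round(rmse(actual, pred), 4), round(mae(actual, pred), 4))
```

RMSE and MAE on the held-out 20% test split are what separate the LSTM (0.1096 / 0.0982 for salinity) from the autoregressive baselines in the study.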
Identify the Condition of Corn Plants Using Gray Level Co-occurrence Matrix and Backpropagation Abd Mizwar A. Rahim; Theopilus Bayu Sasongko
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 24 No. 2 (2025)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v24i2.4035

Abstract

This research aims to increase the accuracy of identifying the condition of corn plants based on leaf features using the GLCM and Backpropagation ANN methods. The GLCM method is used to extract features from corn leaf images, while the Backpropagation ANN classifies the condition of corn plants based on these features. The classification was carried out on a dataset of corn leaves in four different conditions, namely healthy, leaf spot, leaf blight, and leaf rust. Leaf features were extracted using the GLCM method; the data were then normalized, the dataset balanced, and the Backpropagation ANN model trained to classify the condition of the corn plants. After training, the model was evaluated using a confusion matrix. The results show that the method can produce high accuracy in identifying the condition of corn plants, with an accuracy of 99%. This shows that GLCM with Backpropagation ANN can be a good alternative for identifying the condition of corn plants. This research makes it easier to accurately identify the condition of corn plants.
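The GLCM feature-extraction step counts how often pairs of grey levels co-occur at a fixed pixel offset, then derives texture statistics from those counts. A sketch on an invented 4-level image with one Haralick statistic, contrast (the paper's exact offsets and full feature set are not specified in the abstract):

```python
def glcm(image, levels, dx=1, dy=0):
    """Grey-level co-occurrence counts for pixel pairs at offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """Haralick contrast: co-occurrence weighted by squared level difference."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1],
       [1, 2, 2],
       [2, 2, 3]]          # invented 4-level "leaf" patch
g = glcm(img, levels=4)
print(contrast(g))
```

Statistics like this one (alongside energy, homogeneity, and correlation in typical GLCM pipelines) form the feature vector that the Backpropagation ANN then classifies.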

Page 6 of 42 | Total Record : 420