Contact Name
Mesran
Contact Email
mesran.skom.mkom@gmail.com
Phone
+6282161108110
Journal Mail Official
mib.stmikbd@gmail.com
Editorial Address
Jalan Sisingamangaraja No. 338, Medan, Indonesia
Location
Kota Medan,
Sumatera Utara
INDONESIA
JURNAL MEDIA INFORMATIKA BUDIDARMA
ISSN : 2614-5278     EISSN : 2548-8368     DOI : http://dx.doi.org/10.30865/mib.v3i1.1060
Decision Support System, Expert System, Informatics Technique, Information System, Cryptography, Networking, Security, Computer Science, Image Processing, Artificial Intelligence, Steganography, etc. (related to informatics and computer science)
Articles: 1,182 Documents
Analyzing the Sentiment of the 2024 Election Sirekap Application Using the Naïve Bayes Algorithm Muhammad, Isa Ali; Rakhmawati, Desty; Wijaya, Anugerah Bagus
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7773

Abstract

The 2024 general election, which covered both the presidential race and the election of legislative members, is one of the most recent elections in Indonesia. Alongside technological developments, an application called Sirekap emerged to recapitulate the vote-count results. The app holds only a one-star rating on the Play Store, but reading every user review to judge its quality would take considerable time. Sentiment analysis therefore offers an alternative way to obtain an overview of user reviews and support better decision-making; the method used for sentiment analysis in this study is the Naïve Bayes algorithm. This research aims to identify and categorize user sentiment and to evaluate the quality of the app based on the reviews provided on the Play Store. It contributes an efficient method for analyzing user reviews of the Sirekap app, which can help app developers and other stakeholders make better decisions about app development and improvement. In addition, the results confirm that the app's one-star rating is accurate, with evaluation metrics such as precision, recall, and F1-score each reaching 1.00.
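The abstract does not include the authors' pipeline; as a rough illustration of the general approach (Naïve Bayes over vectorized review text, with precision, recall, and F1 reported), here is a minimal scikit-learn sketch. The file name and column names are assumptions, not the study's actual data.

```python
# Minimal sketch of Naive Bayes sentiment classification of app reviews.
# File path, column names, and preprocessing are illustrative assumptions,
# not the authors' actual pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

df = pd.read_csv("sirekap_reviews.csv")          # assumed file with labeled reviews
X_train, X_test, y_train, y_test = train_test_split(
    df["review"], df["sentiment"], test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer()                   # turn review text into TF-IDF features
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

model = MultinomialNB()                          # Naive Bayes classifier
model.fit(X_train_vec, y_train)

# Precision, recall, and F1-score per class, the metrics cited in the abstract
print(classification_report(y_test, model.predict(X_test_vec)))
```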
Application of an Artificial Neural Network to Predict Error in the Design of an Infusion Drip Monitoring Application Astutik, Liya Yuni; Syafii, Imam
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7724

Abstract

Decision-making in error prediction during the design of infusion drip monitoring applications plays a crucial role. Designing such an application is necessary because manual monitoring by medical staff is prone to errors and inaccuracies. Therefore, the need for accurate predictions in both planning and error management must be investigated further. This research discusses the benefits of the Artificial Neural Network (ANN) methodology in addressing error values in infusion drip monitoring applications during the design process. ANN is chosen for its ability to handle data complexity and non-linear patterns in infusion drip rates. Errors in infusion dosage can be fatal, ranging from patient instability to severe complications. Designing an infusion drip monitoring application automates the process and ensures accuracy, reducing the workload of medical staff and enhancing patient safety. The application also allows more consistent, real-time monitoring, enabling quicker medical intervention when issues arise. The ANN methodology includes both forward propagation and backpropagation, employing a binary sigmoid activation function with a learning rate of 0.03 and a maximum of 1000 epochs. The research results indicate that the model-building procedure consists of several stages: (1) determining the input based on infusion drip rate readings; (2) splitting the data into training and testing datasets; (3) normalizing the data; (4) building the forward propagation and backpropagation algorithm by determining the number of hidden neurons, the optimal input, and the model weights; (5) denormalizing the data; and (6) testing the model's accuracy. The ANN simulation revealed that the best network structure used a 3-40-1 configuration (3 input variables, 40 neurons in the hidden layer, and 1 output). The results achieved an average error prediction accuracy of 98.6%.
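As a hedged illustration of the network described above (3-40-1 structure, binary sigmoid activation, learning rate 0.03, up to 1000 epochs, with normalization and denormalization), here is a minimal scikit-learn sketch; the synthetic data stands in for the drip-rate readings and is not the authors' dataset or code.

```python
# Sketch of a 3-40-1 backpropagation network with sigmoid activation,
# learning rate 0.03, and up to 1000 epochs, as outlined in the abstract.
# The synthetic input below is an assumption for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 3)                                 # 3 input variables (e.g., drip-rate readings)
y = X.sum(axis=1) + np.random.normal(0, 0.05, 500)         # synthetic target for the sketch

# Normalize inputs and target to [0, 1]
x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X_norm = x_scaler.fit_transform(X)
y_norm = y_scaler.fit_transform(y.reshape(-1, 1)).ravel()

X_tr, X_te, y_tr, y_te = train_test_split(X_norm, y_norm, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(40,),   # 3-40-1 structure
                     activation="logistic",      # binary sigmoid
                     solver="sgd",
                     learning_rate_init=0.03,
                     max_iter=1000,
                     random_state=0)
model.fit(X_tr, y_tr)

# Denormalize predictions before computing the error
y_pred = y_scaler.inverse_transform(model.predict(X_te).reshape(-1, 1)).ravel()
y_true = y_scaler.inverse_transform(y_te.reshape(-1, 1)).ravel()
print("Mean absolute error:", np.mean(np.abs(y_true - y_pred)))
```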
Improving Fake News Detection Accuracy with a Lexicon-Based and LSTM Approach through Text Preprocessing and Model Training Prastyo, Edwin Hari Agus; Faisal, M
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7847

Abstract

Hoax news is an issue troubling the global community, including Indonesia. The spread of hoax news can cause various negative impacts, such as social division and public unrest, and can even endanger lives. Hoaxes have become an epidemic in Indonesia, with 11,357 hoax issues identified by the Ministry of Communication and Information from August 2018 to March 2023. The combined Lexicon-Based and LSTM approach improves the accuracy of hoax news detection. Combining lexicon filters with a pre-trained LSTM enables the model to identify hoax keywords and classify news with an accurate final score. Experimental results show that using the Adam optimizer produces high accuracy, achieving a precision of 1.0, recall of 1.0, F1-score of 1.0, and accuracy of 0.99. The model distinguishes hoax from non-hoax news nearly perfectly, demonstrating the effectiveness of the combined techniques and the right optimizer. However, there are some drawbacks to consider, such as reliance on a lexicon that may be incomplete and the potential for overfitting of the LSTM model. The results of this study provide insight into the importance of combined techniques in fake news detection, as well as the need for parameter adjustments and optimization strategies to minimize these drawbacks.
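As an illustration of the kind of combination described above (a keyword lexicon alongside an LSTM classifier trained with the Adam optimizer), here is a minimal sketch; the toy texts, the tiny lexicon, and the averaging rule for combining the two scores are assumptions, not the authors' implementation.

```python
# Sketch of combining a simple hoax-keyword lexicon with an LSTM classifier
# trained with the Adam optimizer. The texts, labels, and lexicon below are
# toy placeholders, not the authors' data or full pipeline.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

texts = ["breaking secret cure revealed", "government publishes official report"]  # placeholder news texts
labels = np.array([1, 0])                                    # 1 = hoax, 0 = non-hoax
hoax_lexicon = {"secret", "cure", "revealed"}                # assumed lexicon entries

tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(texts)
X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=50)

model = Sequential([
    Embedding(input_dim=5000, output_dim=64),
    LSTM(64),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=5, verbose=0)

def lexicon_score(text):
    # Fraction of words that appear in the hoax lexicon
    words = text.lower().split()
    return sum(w in hoax_lexicon for w in words) / max(len(words), 1)

def combined_score(text):
    # Simple average of lexicon score and LSTM probability (one possible way to combine them)
    seq = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=50)
    return 0.5 * lexicon_score(text) + 0.5 * float(model.predict(seq, verbose=0)[0, 0])

print(combined_score("secret cure revealed by anonymous source"))
```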
Improving Naïve Bayes Accuracy with Chi-Square and SMOTE for High-Dimensional and Imbalanced Flood Data Rivaldo, Vito Junivan; Siswa, Taghfirul Azhima Yoga; Pranoto, Wawan Joko
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7886

Abstract

Floods are among the natural disasters that most frequently occur in Indonesia. The city of Samarinda is affected by floods every year, resulting in significant losses. The data used in this study come from the Regional Disaster Management Agency (BPBD) and the Meteorology, Climatology, and Geophysics Agency (BMKG) for the years 2021-2023 in Samarinda, comprising 11 attributes and 1095 records. Previous data mining studies related to floods have been conducted; however, issues arise with high-dimensional data and data imbalance. High dimensionality leads to overfitting and reduced accuracy, while imbalanced data causes overfitting to the majority class and inaccurate representation. This study aims to improve the accuracy of the Naive Bayes algorithm in predicting high-dimensional and imbalanced flood data. The approach uses the Chi-Square feature selection technique and oversampling with the Synthetic Minority Over-sampling Technique (SMOTE). Chi-Square is used to find the optimal features for predicting floods and to enhance the accuracy of the Naive Bayes algorithm on high-dimensional and imbalanced flood data. The validation method is 10-fold cross-validation, and a confusion matrix is used to calculate accuracy values. The results show that Chi-Square identifies the four best features: average humidity (rh_avg), rainfall (rr), maximum wind direction (ddd_x), and most frequent wind direction (ddd_car). The Naive Bayes algorithm with SMOTE achieved an accuracy of 71.58%; however, after applying Chi-Square feature selection, the accuracy dropped to 60.82%. This decline is attributed to the reduced number of minority classes after feature selection. Therefore, Chi-Square feature selection is not sufficiently effective in improving the accuracy of Naive Bayes on this high-dimensional data.
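A minimal sketch of the pipeline described above (Chi-Square selection of the four best features, SMOTE oversampling, Naive Bayes, 10-fold cross-validation); the file and column names are placeholders, not the actual BPBD/BMKG data.

```python
# Sketch of the Chi-Square + SMOTE + Naive Bayes setup described in the
# abstract, evaluated with 10-fold cross-validation. File and column names
# are placeholders; the real study used BPBD/BMKG data with 11 attributes.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score, StratifiedKFold
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline          # pipeline that supports resampling steps

df = pd.read_csv("flood_weather.csv")           # assumed dataset path
X, y = df.drop(columns=["flood"]), df["flood"]  # "flood" label column is an assumption

pipeline = Pipeline([
    ("scale", MinMaxScaler()),                  # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=4)),         # keep the four best features
    ("smote", SMOTE(random_state=42)),          # oversample the minority class (training folds only)
    ("nb", GaussianNB()),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring="accuracy")
print("Mean 10-fold accuracy:", scores.mean())
```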
Sentiment Analysis of 2024 Presidential Candidates on Social Media X Using Naive Bayes and SMOTE Sunata, Muhamad Hafidz Ardian; Irwiensyah, Faldy; Hasan, Firman Noor
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7708

Abstract

In the era of digital advancement, the use of social media has surged, enabling individuals to express their viewpoints openly. This research focuses on social media platform X as the primary avenue for users to express their opinions, particularly on political matters, within the framework of the presidential election. Sentiment analysis techniques, specifically the Naïve Bayes algorithm and the Synthetic Minority Over-sampling Technique (SMOTE), are the central focus of inquiry to infer people's inclinations toward the presidential candidates. Despite numerous previous studies, deficiencies remain in terms of precision and data imbalance. This study aims to enhance the effectiveness of sentiment analysis by integrating the Naïve Bayes approach with SMOTE. By examining tweets on social media X from December 12, 2023, to March 31, 2024, the data were categorized into positive and negative sentiments. The findings reveal that applying SMOTE raised accuracy to 88% for the Ganjar-Mahfud dataset, whereas accuracy without SMOTE was approximately 69% for the Anies-Imin dataset. Of 1589 tweets conveying positive sentiments, approximately 27.7% were directed towards Anies-Imin, 28.7% towards Prabowo-Gibran, and 43.5% towards Ganjar-Mahfud. Most negative sentiments were aimed at Anies-Imin (41.5%) and Prabowo-Gibran (40.8%).
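As a rough sketch of the with/without-SMOTE comparison described above, assuming TF-IDF features and placeholder file and column names rather than the study's actual tweet data:

```python
# Sketch comparing Naive Bayes with and without SMOTE on labeled tweets,
# mirroring the with/without comparison in the abstract. Dataset path and
# column names ("tweet", "sentiment") are illustrative assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

df = pd.read_csv("candidate_tweets.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["tweet"], df["sentiment"], test_size=0.2, stratify=df["sentiment"], random_state=42)

vec = TfidfVectorizer()
X_tr, X_te = vec.fit_transform(X_train), vec.transform(X_test)

# Baseline: Naive Bayes on the imbalanced training data
baseline = MultinomialNB().fit(X_tr, y_train)
print("Without SMOTE:", accuracy_score(y_test, baseline.predict(X_te)))

# With SMOTE: balance the training data before fitting (test data untouched)
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr, y_train)
balanced = MultinomialNB().fit(X_bal, y_bal)
print("With SMOTE:", accuracy_score(y_test, balanced.predict(X_te)))
```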
Optimizing Sentiment Analysis of Working Hours Impact on Generation Z’s Mental Health Using Backpropagation Farsya, Nabila Zibriza; Luthfiarta, Ardytha; Maharani, Zahra Nabila; Ganiswari, Syuhra Putri
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7827

Abstract

The impact of working hours, Generation Z, and mental health are topics frequently discussed on social media such as X (formerly Twitter). Sentiment analysis of these topics is needed to understand public opinion about them, and it can also inform decision-making in related research. This research therefore aims to improve the accuracy of the model produced by previous sentiment analysis research on the impact of working hours on Gen Z's mental health. Its contribution is a robust Backpropagation Neural Network model that uses SMOTETomek to achieve higher accuracy. The research compared two oversampling techniques for data balancing: SMOTE and SMOTETomek. The results show that this research outperformed the baseline, whose best accuracy was 91% using SVM, by achieving a best accuracy of 93.01% with SMOTETomek. For comparison, SMOTETomek outperformed SMOTE: the best accuracy with SMOTETomek was 93.01%, while the best accuracy with SMOTE was 92.26%. This indicates that, for the Indonesian-text sentiment analysis in this research, SMOTETomek has a better effect than SMOTE.
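A hedged sketch of the comparison described above, pairing a backpropagation (multilayer perceptron) classifier with SMOTE and SMOTETomek; the TF-IDF features, network size, and file and column names are assumptions, not the authors' configuration.

```python
# Sketch of the SMOTE vs SMOTETomek comparison with a backpropagation
# (multilayer perceptron) classifier. The TF-IDF features, column names,
# and network size are assumptions for illustration only.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTETomek

df = pd.read_csv("working_hours_tweets.csv")              # assumed labeled dataset
X_text, y = df["text"], df["label"]
X_train, X_test, y_train, y_test = train_test_split(X_text, y, test_size=0.2,
                                                    stratify=y, random_state=42)

vec = TfidfVectorizer(max_features=3000)
X_tr, X_te = vec.fit_transform(X_train).toarray(), vec.transform(X_test).toarray()

for name, sampler in [("SMOTE", SMOTE(random_state=42)),
                      ("SMOTETomek", SMOTETomek(random_state=42))]:
    X_bal, y_bal = sampler.fit_resample(X_tr, y_train)    # balance training data only
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=42)
    mlp.fit(X_bal, y_bal)
    print(name, "accuracy:", accuracy_score(y_test, mlp.predict(X_te)))
```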
Customer Segmentation Using Fuzzy C-Means and FP-Growth Based on the LRFM Model for Product Recommendations Rahmah, Astriana; Afdal, M
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7737

Abstract

Bazmart Pelalawan is part of the National Zakat Agency (BAZNAS) program in Pelalawan Regency and has implemented strategies to retain customers. However, these strategies have not yet succeeded in fully capturing customer characteristics, resulting in a decline in customer trust and willingness to shop again. Additionally, Bazmart lacks proper guidelines for offering products that meet customer needs. This research aims to enhance product recommendations by integrating LRFM analysis into data mining techniques. The parameters considered include customers' LRFM values, customer segmentation, and products frequently purchased together over a year of transaction data. The Fuzzy C-Means and FP-Growth algorithms were used for segmentation and association analysis. The segmentation results identified two customer clusters with a Davies-Bouldin Index (DBI) of 0.628, indicating good cluster quality. In the association analysis, a minimum support (minsup) of 30% and a minimum confidence (minconf) of 70% were used, resulting in 8 rules for cluster 1 and 17 rules for cluster 2. From the two sets of association patterns, the strongest rule was obtained, namely Drinks and Snacks and Bread, with a support value of 0.426 and a confidence value of 0.926, resulting in a value of 0.394. These rules provide insights that Bazmart Pelalawan can use to develop more effective and targeted direct marketing strategies for each customer cluster. This research is thus expected to help Bazmart Pelalawan better understand customer characteristics and improve customer loyalty through more targeted product recommendations.
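As an illustration of the two-stage approach described above (Fuzzy C-Means segmentation of LRFM values, then FP-Growth association rules with 30% minimum support and 70% minimum confidence), here is a minimal sketch using scikit-fuzzy and mlxtend; the file names, LRFM columns, and one-hot basket encoding are assumptions.

```python
# Sketch of LRFM-based segmentation with Fuzzy C-Means followed by FP-Growth
# association mining per cluster (min support 0.3, min confidence 0.7).
# The LRFM table, basket encoding, and file names are assumptions.
import numpy as np
import pandas as pd
import skfuzzy as fuzz
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import davies_bouldin_score
from mlxtend.frequent_patterns import fpgrowth, association_rules

lrfm = pd.read_csv("lrfm.csv")                      # assumed columns: L, R, F, M per customer
X = MinMaxScaler().fit_transform(lrfm[["L", "R", "F", "M"]])

# Fuzzy C-Means with 2 clusters; skfuzzy expects a features x samples array
cntr, u, *_ = fuzz.cluster.cmeans(X.T, c=2, m=2.0, error=1e-5, maxiter=1000, init=None)
labels = np.argmax(u, axis=0)                       # hard assignment from fuzzy memberships
print("Davies-Bouldin Index:", davies_bouldin_score(X, labels))

# One-hot basket matrix (True/False per product per transaction), assumed prepared elsewhere
baskets = pd.read_csv("baskets_cluster1.csv").astype(bool)
frequent = fpgrowth(baskets, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```

In practice the basket mining step would be repeated separately for each customer cluster found by Fuzzy C-Means, which is how the abstract arrives at different rule counts per cluster.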
Analysis of the SMOTE Method in Random Forest-Based Heart Disease Classification Yulianto, Satria Pradana Rizki; Fanani, Ahmad Zainul; Affandy, Affandy; Aziz, Mochammad Ilham
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.7712

Abstract

Cardiovascular disease is the number one cause of death globally. It is caused by impaired function of the heart and blood vessels. At present, many predictive tools use machine learning as a basis, including for heart disease in particular. There are many machine learning methods for predicting heart disease, as well as many parameters to tune in search of the highest accuracy. This study aims to find the best methods and parameters for the classification of heart disease.
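A minimal sketch of the combination named in the title (Random Forest with SMOTE oversampling); the dataset path and the "target" label column are assumptions in the style of the common UCI heart dataset, not necessarily the data used in the article.

```python
# Sketch of heart-disease classification with Random Forest and SMOTE,
# the combination named in the title. Dataset path and label column are
# assumptions (a UCI-style heart dataset layout), not the authors' data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

df = pd.read_csv("heart.csv")                           # assumed dataset
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr, y_tr)   # balance the training data
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```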
Determinants of the Incidence of Hypertension at RSUD TK II Putri Hijau Medan in 2023 Simatupang, Devina Oktora; Wandra, Toni; Tarigan, Frida Lina; Sitorus, Mido Ester J; Nababan, Donal
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.8264

Abstract

The purpose of this study was to determine the determinants of hypertension at TK II Putri Hijau Hospital in 2023. This research used a cross-sectional study design. The sample size was 192, obtained by systematic sampling. The independent variables consisted of a history of hypertension, high salt consumption, high fat consumption, low consumption of vegetables and fruit, lack of physical activity, smoking habits, and obesity, while the dependent variable was the incidence of hypertension. Data analysis was carried out using univariate, bivariate, and multivariate methods. The results showed a significant relationship between history of hypertension, high salt consumption, high fat consumption, low consumption of vegetables and fruit, lack of physical activity, and smoking habits and the incidence of hypertension (p value < 0.05), while the obesity variable was not significantly related to hypertension. Multivariately, five independent variables were found to influence the incidence of hypertension, including high salt consumption (p value = 0.000; OR = 24.2), high fat consumption (p value = 0.000; OR = 47.7), low consumption of vegetables and fruit (p value = 0.000; OR = 14.1), and lack of physical activity (p value = 0.000; OR = 9.2). Thus, the dominant variable associated with the incidence of hypertension is high fat consumption, which carries a 48 times greater risk of developing hypertension.
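The abstract reports adjusted odds ratios from a multivariate analysis without naming the estimator; such ORs are most commonly obtained with binary logistic regression, so the sketch below assumes that model. The data frame and variable names are placeholders, not the study's data.

```python
# Sketch of estimating adjusted odds ratios with binary logistic regression,
# the usual way OR values like those in the abstract are obtained. The data,
# column names, and the choice of logistic regression are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hypertension_survey.csv")     # assumed 0/1-coded survey data
predictors = ["high_salt", "high_fat", "low_fruit_veg", "low_activity", "smoking"]
X = sm.add_constant(df[predictors])             # add intercept term
y = df["hypertension"]                          # 1 = hypertensive, 0 = not

model = sm.Logit(y, X).fit(disp=False)          # multivariate logistic regression
odds_ratios = np.exp(model.params)              # exponentiated coefficients = adjusted ORs
print(pd.DataFrame({"OR": odds_ratios, "p_value": model.pvalues}))
```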
Recovering Missing Data in a Dataset Using the K-Nearest Neighbor Imputation Data Mining Algorithm Bangun, Budianto; Karim, Abdul Karim
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 3 (2024): Juli 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i3.8014

Abstract

One of the main hopes when collecting data is to produce a complete dataset. In research, incomplete data affects the results obtained, because the research process cannot be carried out optimally. A dataset is a collection of information stored over a long period that grows into a large body of data. Missing values in a dataset are an important problem that must be handled in research; therefore, data recovery is needed. Data mining is a process carried out in computing research in which previously collected data is processed, whether collected by the researcher (primary data) or already gathered in a dataset (secondary data). Recovery is the process of restoring data that is lost or cannot be found. The K-Nearest Neighbor Imputation (KNNI) algorithm is a supervised learning approach that aims to discover new data patterns by relating existing data patterns to new data; it identifies objects based on specific information, namely their closest distance to the object.
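As a brief illustration of K-Nearest Neighbor imputation (not the article's own code), here is a minimal sketch using scikit-learn's KNNImputer on a toy array with missing values:

```python
# Sketch of K-Nearest Neighbor imputation of missing values using
# scikit-learn's KNNImputer. The small array below is a toy example,
# not the dataset used in the article.
import numpy as np
from sklearn.impute import KNNImputer

data = np.array([
    [1.0, 2.0, np.nan],
    [3.0, np.nan, 6.0],
    [7.0, 8.0, 9.0],
    [4.0, 5.0, 6.0],
])

imputer = KNNImputer(n_neighbors=2)      # fill each gap from the 2 nearest complete-enough rows
completed = imputer.fit_transform(data)  # returns the dataset with missing values filled in
print(completed)
```

Each missing value is replaced by the mean of that feature over the nearest rows, which is the core idea of KNNI described in the abstract.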