Articles

Found 6 Documents

PREDIKSI POTENSIAL GEMPA BUMI INDONESIA MENGGUNAKAN METODE RANDOM FOREST DAN FEATURE SELECTION Tantyoko, Henri; Sari, Dian Kartika; Wijaya, Andreas Rony
IDEALIS : InDonEsiA journaL Information System Vol 6 No 2 (2023): Jurnal IDEALIS Juli 2023
Publisher : Universitas Budi Luhur

DOI: 10.36080/idealis.v6i2.3036

Abstract

An earthquake is a natural event that occurs when energy is released suddenly within the Earth's crust, producing vibrations and shaking at the surface. Earthquakes are among the natural disasters that can cause extensive physical damage, significant economic impact, and loss of human life. Their causes include tectonic activity, the movement of tectonic plates, and deformation of the Earth's crust. To reduce the number of casualties, it is necessary to predict when an earthquake will occur in a given region. One way to make such predictions is with a machine learning method, Random Forest (RF), which builds multiple decision trees and combines them by voting to determine the final prediction. A good model is one that produces as little error as possible; therefore, the authors apply a feature selection scheme to handle strongly correlated features. Prediction using RF with feature selection achieves an F1 score of 92.23%, which is 5.02% better than without feature selection. The RF + feature selection method also clearly outperforms other traditional machine learning methods such as SVM, Naïve Bayes, and Decision Tree.
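As a rough, hypothetical illustration of the approach the abstract describes (not the authors' actual pipeline), the sketch below drops one feature from each strongly correlated pair before training a Random Forest with scikit-learn; the file name, column names, and 0.9 correlation threshold are assumptions.

```python
# Hypothetical sketch: Random Forest with correlation-based feature selection.
# File name, column names, and threshold are assumptions, not the paper's setup.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("earthquake_catalog.csv")          # assumed data file with numeric features
X, y = df.drop(columns=["label"]), df["label"]      # assumed target column name

# Drop one feature from every pair whose absolute correlation exceeds 0.9
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
X_sel = X.drop(columns=to_drop)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, model.predict(X_te), average="weighted"))
```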
Comparison of the Word2vec Skipgram Model Method Linkaja Application Review using Bidirectional LSTM Algorithm and Support Vector Machine Ayuningtyas, Puji; Tantyoko, Henri
JUSTIN (Jurnal Sistem dan Teknologi Informasi) Vol 12, No 1 (2024)
Publisher : Jurusan Informatika Universitas Tanjungpura

DOI: 10.26418/justin.v12i1.72530

Abstract

Word embedding is a text-processing stage that converts each word into a vector representation. Word2Vec is a type of word embedding frequently used in natural language processing research, and choosing the right classification algorithm can improve its performance on text classification tasks. This research compares the Bidirectional LSTM deep learning algorithm with the Support Vector Machine (SVM) machine learning algorithm. Data were collected by crawling reviews of the LinkAja application (via its application ID) on the Google Play Store, yielding a dataset of 35,560 rows. The data were labeled into two target classes: positive (score 1) and negative (score 0). The vectorization stage employs Word2Vec with the skip-gram architecture, using four parameters: vector size, window, min count, and sg. The Bidirectional LSTM architecture uses a sequential model consisting of three neural network layers: embedding, bidirectional, and dense. Meanwhile, the SVM uses the Radial Basis Function (RBF) kernel. In the final testing stage, the Bidirectional LSTM (BiLSTM) achieved an accuracy of 0.9505, higher than the SVM's accuracy of 0.93.
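A minimal sketch of the kind of pipeline the abstract describes, assuming gensim for the skip-gram Word2Vec embedding and Keras for the embedding, bidirectional, and dense layers; the token lists, vector size, and layer sizes below are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch: skip-gram Word2Vec embeddings feeding a three-layer
# BiLSTM classifier (embedding -> bidirectional -> dense).
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import Sequential
from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

sentences = [["aplikasi", "bagus"], ["transaksi", "gagal", "terus"]]  # assumed tokenized reviews
labels = np.array([1, 0])                                             # 1 = positive, 0 = negative

# sg=1 selects the skip-gram architecture; vector_size, window, and min_count
# mirror the parameters named in the abstract (values here are assumptions)
w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)

vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}          # index 0 reserved for padding
emb_matrix = np.zeros((len(vocab) + 1, 100))
for w, i in vocab.items():
    emb_matrix[i] = w2v.wv[w]

max_len = 20
X = np.array([([vocab.get(w, 0) for w in s] + [0] * max_len)[:max_len] for s in sentences])

model = Sequential([
    Embedding(len(vocab) + 1, 100, embeddings_initializer=Constant(emb_matrix), trainable=False),
    Bidirectional(LSTM(64)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)
```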
Classification of Real and Fake Images Using Error Level Analysis Technique and MobileNetV2 Architecture Baihaqi, Muhamad Nur; Sugiharto, Aris; Tantyoko, Henri
Jurnal Masyarakat Informatika Vol 16, No 1 (2025): May 2025
Publisher : Department of Informatics, Universitas Diponegoro

DOI: 10.14710/jmasif.16.1.73283

Abstract

Advancements in technology have made image forgery increasingly difficult to detect, raising the risk of misinformation on social media. To address this issue, Error Level Analysis (ELA) feature extraction can be utilized to detect error level variations in lossy-formatted images such as JPEG. This study evaluates the contribution of ELA features in classifying authentic and forged images using the MobileNetV2 model. Two scenarios were tested using the CASIA 2.0 dataset: without ELA and with ELA. Fine-tuning was performed to adapt the model to the new problem. Experimental results show that incorporating ELA improves model accuracy to 93.1%, compared to only 76.41% in the scenario without ELA. Validation using k-fold cross-validation yielded a high average f1-score of 96.83%, confirming the effectiveness of ELA in enhancing the classification of authentic and forged images.
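The sketch below illustrates one common way to compute ELA features (resave as JPEG, take the difference, amplify) and attach a binary head to MobileNetV2; the quality level, brightness scaling, and input size are assumptions rather than the paper's exact settings.

```python
# Hypothetical sketch: ELA preprocessing plus a MobileNetV2 binary classifier.
from PIL import Image, ImageChops, ImageEnhance
import numpy as np
import tensorflow as tf

def ela_image(path, quality=90, size=(224, 224)):
    """Resave an image as JPEG at lower quality and amplify pixel-wise differences."""
    original = Image.open(path).convert("RGB")
    original.save("tmp_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("tmp_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    diff = ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)
    return np.asarray(diff.resize(size), dtype=np.float32) / 255.0

# MobileNetV2 backbone with a binary head (authentic vs. forged);
# freeze the backbone first, then unfreeze top layers to fine-tune
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False
model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```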
A Comparative Study of Machine Learning Models for Short-Term Load Forecasting Vianita, Etna; Tantyoko, Henri
Jurnal Masyarakat Informatika Vol 16, No 1 (2025): May 2025
Publisher : Department of Informatics, Universitas Diponegoro

DOI: 10.14710/jmasif.16.1.73130

Abstract

Short-Term Load Forecasting (STLF) is a critical task in power system operations, enabling efficient energy management and planning. This study presents a comparative analysis of five machine learning models, namely XGBoost, Random Forest, Multi-Layer Perceptron (MLP), Support Vector Regression (SVR), and LightGBM, using real-world electricity demand data collected over a four-month period. Two modeling approaches were explored: one using only time-based features (hour, day of the week, month), and another incorporating historical lag features (lag_1, lag_2, lag_3) to capture temporal patterns. The results show that MLP with lag features achieved the best performance (RMSE: 57.63, MAE: 34.54, MAPE: 0.22), highlighting its ability to model nonlinear and sequential dependencies. In contrast, SVR and LightGBM experienced performance degradation when lag features were added, suggesting sensitivity to feature dimensionality and data volume. These findings emphasize the importance of model-feature alignment and temporal context in improving forecasting accuracy. Future work could explore the integration of external variables such as weather and holidays, as well as the application of advanced deep learning architectures like LSTM or hybrid models to further enhance robustness and generalizability.
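A minimal sketch of the time-based plus lag-feature setup with one of the compared models (an MLP regressor in scikit-learn); the file name, column names, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch: lag-feature construction and an MLP regressor for STLF.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error)

df = pd.read_csv("load_data.csv", parse_dates=["timestamp"])   # assumed file and columns

# Time-based features: hour, day of the week, month
df["hour"] = df["timestamp"].dt.hour
df["dayofweek"] = df["timestamp"].dt.dayofweek
df["month"] = df["timestamp"].dt.month

# Historical lag features capturing short-term temporal dependence
for k in (1, 2, 3):
    df[f"lag_{k}"] = df["load"].shift(k)
df = df.dropna()

features = ["hour", "dayofweek", "month", "lag_1", "lag_2", "lag_3"]
split = int(len(df) * 0.8)                                      # chronological train/test split
X_tr, X_te = df[features][:split], df[features][split:]
y_tr, y_te = df["load"][:split], df["load"][split:]

mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42).fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)),
      "MAE:", mean_absolute_error(y_te, pred),
      "MAPE:", mean_absolute_percentage_error(y_te, pred))
```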
An Efficient Bidirectional Gated Recurrent Unit Approach for Student Study Duration Modeling and Timely Graduation Forecasting Purnama, Satriawan Rasyid; Tantyoko, Henri; Vianita, Etna
Jurnal Masyarakat Informatika Vol 16, No 2 (2025): Issue in Progress
Publisher : Department of Informatics, Universitas Diponegoro

DOI: 10.14710/jmasif.16.2.73275

Abstract

Delays in student graduation remain a persistent challenge in higher education, with approximately 28% of students requiring more than four years to complete their studies, exceeding the standard duration. This study addresses the issue by proposing a predictive model to estimate students’ graduation year using a Bidirectional Gated Recurrent Unit (BiGRU) neural network. The model is trained on a combination of academic and financial indicators, including Grade Point (GP) scores from the first to the fifth semester, cumulative Grade Point Average (GPA), and the single tuition fee tier (UKT). The integration of these features allows the model to learn temporal patterns in students’ academic progression and financial capacity. Empirical analysis reveals that students in the UKT 8 group consistently demonstrate superior academic performance, as evidenced by their higher average GPA across semesters, compared to students in lower UKT groups. The BiGRU model achieves a Mean Absolute Percentage Error (MAPE) of 9.5%, indicating high predictive accuracy. These findings highlight the potential of deep learning models, particularly BiGRU, in forecasting academic outcomes. Furthermore, the insights generated from this model can serve as a valuable tool for universities in formulating targeted academic interventions and policies aimed at promoting timely graduation and reducing dropout rates.
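As a hypothetical sketch of a BiGRU regressor over per-semester grade sequences combined with static indicators, the Keras snippet below uses randomly generated dummy data; the shapes, layer sizes, and target definition are assumptions, not the study's dataset or configuration.

```python
# Hypothetical sketch: BiGRU over a 5-semester GP sequence plus static features.
import numpy as np
import tensorflow as tf

n_students, n_semesters = 500, 5
X_seq = np.random.rand(n_students, n_semesters, 1).astype("float32") * 4.0   # GP per semester (dummy)
X_static = np.random.rand(n_students, 2).astype("float32")                   # e.g. GPA and UKT tier (dummy)
y = np.random.uniform(3.5, 6.0, size=(n_students,)).astype("float32")        # study duration in years (dummy)

seq_in = tf.keras.Input(shape=(n_semesters, 1))
static_in = tf.keras.Input(shape=(2,))
h = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32))(seq_in)           # BiGRU over the semester sequence
h = tf.keras.layers.Concatenate()([h, static_in])
out = tf.keras.layers.Dense(1)(h)

model = tf.keras.Model([seq_in, static_in], out)
model.compile(optimizer="adam", loss="mae",
              metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])      # MAPE, as reported in the abstract
model.fit([X_seq, X_static], y, epochs=2, verbose=0)
```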
Handling Imbalance Data using Hybrid Sampling SMOTE-ENN in Lung Cancer Classification Latief, Muhammad Abdul; Nabila, Luthfi Rakan; Miftakhurrahman, Wildan; Ma'rufatullah, Saihun; Tantyoko, Henri
International Journal of Engineering and Computer Science Applications (IJECSA) Vol. 3 No. 1 (2024): March 2024
Publisher : Universitas Bumigora Mataram-Lombok

DOI: 10.30812/ijecsa.v3i1.3758

Abstract

Classification is one of the problems typically addressed with machine learning. When the classes in the data are imbalanced, machine learning models tend to favor the majority class, producing high accuracy on the majority class but low accuracy on the minority class. A small difference in class sizes is usually tolerable, but when the gap becomes too large, model performance suffers. This imbalance problem is evident in the lung cancer data used here, which contains 283 positive samples and only 38 negative samples. Therefore, this research uses a hybrid sampling technique, combining the Synthetic Minority Over-sampling Technique (SMOTE) with Edited Nearest Neighbors (ENN) and Random Forest, to balance the data of lung cancer patients affected by class imbalance. The method applies SMOTE-ENN preprocessing to balance the data and uses Random Forest as the classifier to predict lung cancer, evaluated with 10-fold cross-validation. The results show that SMOTE-ENN with Random Forest gives the best performance on all metrics used, compared to SMOTE alone and to no oversampling. In conclusion, using the SMOTE-ENN hybrid sampling technique with the Random Forest model significantly improves the model's ability to identify and classify the data.
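A minimal sketch of a SMOTE-ENN plus Random Forest pipeline evaluated with 10-fold cross-validation, using imbalanced-learn and scikit-learn; the file name and label column are assumptions, and the feature columns are assumed to be numerically encoded. Placing the resampler inside the pipeline ensures SMOTE-ENN is fitted only on the training folds.

```python
# Hypothetical sketch: SMOTE-ENN resampling + Random Forest with 10-fold CV.
import pandas as pd
from imblearn.combine import SMOTEENN
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

df = pd.read_csv("lung_cancer.csv")                        # assumed file; features assumed numeric
X, y = df.drop(columns=["LUNG_CANCER"]), df["LUNG_CANCER"] # assumed label column name

# Resampling inside the pipeline so it is applied only to the training folds
pipe = Pipeline([
    ("resample", SMOTEENN(random_state=42)),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=42)),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="f1_macro")
print("Mean macro F1:", scores.mean())
```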