Articles

Found 9 Documents

Klasifikasi Halaman SEO Berbasis Machine Learning Melalui Mutual Information dan Random Forest Feature Importance Nuradilla, Siti; Sadik, Kusman; Suhaeni, Cici; Soleh, Agus M
MIND (Multimedia Artificial Intelligent Networking Database) Journal Vol 10, No 1 (2025)
Publisher : Institut Teknologi Nasional Bandung

DOI: 10.26760/mindjournal.v10i1.114-129

Abstract

The SEO optimization process involves many interrelated factors, making it challenging for SEO teams to identify which pages need further improvement. This study develops a machine learning-based model that is both accurate in classifying web pages and efficient in selecting the most informative features. Feature selection is performed using Mutual Information (MI) and Random Forest Feature Importance (RFFI) to identify the key factors for SEO optimization, followed by modeling with Random Forest and a Weighted Voting Ensemble (WVE). The models are evaluated using Accuracy, Precision, Recall, and ROC AUC. Results indicate that the Random Forest model with 20 features selected via RFFI delivers the best performance, achieving a ROC AUC of 75.87%, Accuracy of 77.74%, Precision of 60.51%, and Recall of 71.29%. The model effectively distinguishes between pages that require SEO optimization and those that do not.

Keywords: Feature Importance, Random Forest, SEO, Variable Selection, WVE
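
The selection-then-classification pipeline summarized above can be illustrated with a minimal scikit-learn sketch. The synthetic data, feature count, and hyperparameters below are placeholder assumptions; the paper's actual SEO features and its Weighted Voting Ensemble are not reproduced.

```python
# Sketch: rank features with Mutual Information and Random Forest Feature Importance,
# keep the top 20 by RFFI, refit Random Forest, and report Accuracy and ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the SEO page features (assumption, not the paper's data)
X, y = make_classification(n_samples=1000, n_features=40, n_informative=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

# Filter ranking: Mutual Information between each feature and the class label
mi_scores = mutual_info_classif(X_tr, y_tr, random_state=42)
top_mi = np.argsort(mi_scores)[::-1][:20]
print("Top-20 features by MI:", top_mi)

# Embedded ranking: impurity-based Random Forest Feature Importance (RFFI)
rf_full = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)
top_rffi = np.argsort(rf_full.feature_importances_)[::-1][:20]

# Refit on the 20 RFFI-selected features (the best setup reported in the abstract)
rf_sel = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr[:, top_rffi], y_tr)
proba = rf_sel.predict_proba(X_te[:, top_rffi])[:, 1]
print("Accuracy:", accuracy_score(y_te, rf_sel.predict(X_te[:, top_rffi])))
print("ROC AUC :", roc_auc_score(y_te, proba))
```
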
Exploring a Large Language Model on the ChatGPT Platform for Indonesian Text Preprocessing Tasks Suhaeni, Cici; Kamila, Sabrina Adnin; Fahira, Fani; Yusran, Muhammad; Alfa Dito, Gerry
Indonesian Journal of Statistics and Applications Vol 9 No 1 (2025)
Publisher : Departemen Statistika, IPB University dengan Forum Perguruan Tinggi Statistika (FORSTAT)

DOI: 10.29244/ijsa.v9i1p100-116

Abstract

Preprocessing is a crucial step in Natural Language Processing, especially for informal text in languages like Indonesian, which contains complex morphology, slang, abbreviations, and non-standard expressions. Traditional rule-based tools such as regex, IndoNLP, and Sastrawi are commonly used but often fall short in handling noisy, user-generated text. This study explores the capability of a Large Language Model, ChatGPT-o3, in performing Indonesian text preprocessing tasks, namely text cleaning, normalization, stopword removal, and stemming/lemmatization, and compares it with conventional rule-based approaches. Two datasets were used: a small example set of five manually constructed sentences and a real-world set of 100 tweets about the Indonesian “Makan Bergizi Gratis” program; both preprocessing approaches were applied to each and evaluated. Results show that ChatGPT-o3 performs equally well in text cleaning and significantly better in normalization. However, rule-based methods such as IndoNLP and Sastrawi still outperform ChatGPT-o3 in stopword removal and stemming. These findings indicate that while ChatGPT-o3 demonstrates strong contextual understanding and linguistic flexibility, it may underperform in rigid, token-based operations without fine-tuning. This study provides initial insights into using Large Language Models as an alternative preprocessing engine for Indonesian text and highlights the need for hybrid approaches or improved prompt design in future applications.
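
For reference, the rule-based side of this comparison can be sketched with regex cleaning plus Sastrawi stopword removal and stemming. This is a minimal sketch assuming the PySastrawi package (pip install Sastrawi) and a toy sentence rather than the study's tweet data; the ChatGPT-o3 side (prompting the model to perform the same steps) is not shown.

```python
# Sketch of a rule-based Indonesian preprocessing pipeline: regex cleaning,
# Sastrawi stopword removal, and Sastrawi stemming.
import re
from Sastrawi.Stemmer.StemmerFactory import StemmerFactory
from Sastrawi.StopWordRemover.StopWordRemoverFactory import StopWordRemoverFactory

stemmer = StemmerFactory().create_stemmer()
stopword_remover = StopWordRemoverFactory().create_stop_word_remover()

def preprocess(text: str) -> str:
    text = text.lower()
    text = re.sub(r"http\S+|@\w+|#\w+", " ", text)  # strip URLs, mentions, hashtags
    text = re.sub(r"[^a-z\s]", " ", text)           # keep alphabetic tokens only
    text = stopword_remover.remove(text)            # rule-based stopword removal
    return stemmer.stem(text)                       # Sastrawi stemming

# Toy example (not from the study's dataset)
print(preprocess("Program Makan Bergizi Gratis sangat membantu! https://t.co/xyz"))
```
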
Pemodelan Topik pada Komentar YouTube Arra: Komparasi LDA dan K-Means Menggunakan Fitur Leksikal dan Semantik Nuradilla, Siti; Kamila, Sabrina Adnin; Zahra, Latifah; Suhaeni, Cici; Sartono, Bagus
Jurnal Informatika: Jurnal Pengembangan IT Vol 10, No 3 (2025)
Publisher : Politeknik Harapan Bersama

DOI: 10.30591/jpit.v10i3.8763

Abstract

YouTube has become a platform for sharing content, including positive material and stereotypes that often trigger debate. One noteworthy phenomenon is the videos of Arra, a toddler known for remarkable communication skills. This uniqueness has drawn significant attention and sparked debate about the mismatch between the child's age and cognitive development. The diverse comments on Arra's videos reflect sharply differing perspectives among netizens, making manual analysis highly challenging. It is therefore important to examine the topics discussed by netizens to understand the dominant issues emerging in these discussions. Through this approach, the public can gain insights, and parents may receive valuable input regarding child-rearing practices. The main objective of this study is to explore the effectiveness of two topic modeling methods, combined with different text representations, in identifying key topics within the comments by comparing the coherence of the resulting models. Topic modeling is applied using two primary approaches: Latent Dirichlet Allocation (LDA) and K-Means clustering. The study involves data collection through comment crawling, followed by text preprocessing and text representation using TF-IDF and GloVe embeddings. LDA and K-Means are then used to identify the dominant topics appearing in the comments. The results show that LDA with TF-IDF achieved the highest coherence score of 0.662, although the resulting topics were still difficult to interpret due to overlap. Meanwhile, K-Means with GloVe 100D yielded a slightly lower coherence score of 0.6538 but produced more interpretable topics. K-Means with GloVe 100D is therefore considered the more balanced approach in terms of both coherence and topic readability.
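
A minimal sketch of the two modeling branches, run on a tiny illustrative corpus: LDA fitted on a TF-IDF matrix and K-Means clustering over the same document vectors. The study's crawled comments, GloVe 100D embeddings, and coherence scoring (e.g. with gensim's CoherenceModel) are not reproduced here.

```python
# Sketch: LDA topics from TF-IDF vectors vs. K-Means clusters over the same vectors.
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy Indonesian-style comments standing in for the crawled YouTube data
docs = [
    "anak pintar sekali bicaranya",
    "orang tua harus mendampingi anak",
    "video ini viral karena lucu",
    "konten youtube sekarang banyak debat",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Branch 1: LDA topics from the TF-IDF matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for k, comp in enumerate(lda.components_):
    top_words = [terms[i] for i in comp.argsort()[::-1][:3]]
    print(f"LDA topic {k}: {top_words}")

# Branch 2: K-Means clusters over the same document vectors
# (the study would use GloVe 100D document embeddings here instead of TF-IDF)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("K-Means cluster labels:", km.labels_)
```
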
Sentiment Classification on the 2024 Indonesian Presidential Candidate Dataset Using Deep Learning Approaches Suhaeni, Cici; Wijayanto, Hari; Kurnia, Anang
Indonesian Journal of Statistics and Applications Vol 8 No 2 (2024)
Publisher : Departemen Statistika, IPB University dengan Forum Perguruan Tinggi Statistika (FORSTAT)

DOI: 10.29244/ijsa.v8i2p83-94

Abstract

This study aims to compare the performance of three deep learning models (LSTM, BiLSTM, and GRU) on sentiment classification for the 2024 Indonesian Presidential Candidate dataset, focusing specifically on the case of Prabowo Subianto. The dataset comprises posts from the social media platform X sourced from Kaggle, and the analysis investigates the effectiveness of different recurrent neural network architectures in identifying public sentiment. The models were evaluated on accuracy and F1 score. The results show that BiLSTM outperformed both LSTM and GRU on all metrics, achieving a testing accuracy of 80.70% and an F1 score of 86.86%, compared to LSTM and GRU, which both achieved a testing accuracy of 72.56% and an F1 score of approximately 84%. The stronger performance of BiLSTM is attributed to its ability to capture bidirectional context within the text, allowing it to model complex sentiment patterns more effectively. LSTM and GRU displayed similar performance, making BiLSTM the best model for this dataset. These results indicate that BiLSTM is especially well suited for analyzing public sentiment towards political figures such as Prabowo Subianto, offering significant insights into public discussion surrounding the 2024 Indonesian Presidential Election. The study recommends exploring transformer-based models such as BERT or GPT variants to further improve sentiment classification accuracy in this domain.
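
A minimal sketch of the BiLSTM architecture the abstract reports as best, written in Keras. The vocabulary size, sequence length, layer widths, and the random dummy data are illustrative assumptions, not the authors' configuration or dataset.

```python
# Sketch: a small BiLSTM binary sentiment classifier in Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 10_000, 50  # illustrative vocabulary size and sequence length

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),        # token ids -> dense word vectors
    layers.Bidirectional(layers.LSTM(64)),    # reads the sequence in both directions
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # positive/negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random padded token sequences stand in for the preprocessed X (Twitter) posts.
X = np.random.randint(0, VOCAB_SIZE, size=(64, MAX_LEN))
y = np.random.randint(0, 2, size=(64,))
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
model.summary()
```

Swapping `layers.Bidirectional(layers.LSTM(64))` for `layers.LSTM(64)` or `layers.GRU(64)` gives the two unidirectional baselines the study compares against.
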
Effect of Feature Normalization and Distance Metrics on K-Nearest Neighbors Performance for Diabetes Disease Classification Yusran, Muhammad; Sadik, Kusman; Soleh, Agus M; Suhaeni, Cici
Journal of Mathematics, Computations and Statistics Vol. 8 No. 2 (October 2025)
Publisher : Jurusan Matematika FMIPA UNM

DOI: 10.35580/jmathcos.v8i2.8012

Abstract

Diabetes is a global health issue with a steadily increasing prevalence each year. Early detection of the disease is an important step in preventing severe complications. The K-Nearest Neighbors (KNN) algorithm is often used in disease classification, but its performance is strongly influenced by the choice of normalization method and distance metric. This study evaluates the effect of various normalization methods and distance metrics on the performance of the KNN algorithm for diabetes classification. Three normalization methods were employed: z-score normalization, min-max scaling, and median absolute deviation (MAD). In addition, seven distance metrics were assessed: Euclidean, Manhattan, Chebyshev, Canberra, Hassanat, Lorentzian, and Clark. The dataset used is the Pima Indians Diabetes dataset, which consists of 768 observations and 8 features. The data were split into 80% training and 20% test sets, and 5-fold cross-validation was used to determine the optimal k value. The results show that the MAD-Canberra combination produces the highest overall accuracy, recall, and F1-score of 87.32%, 82.33%, and 81.94%, respectively. The highest precision was obtained by the Baseline-Hassanat combination at 86.96%, while the lowest performance was observed for the Z-Score-Chebyshev combination with an F1-score of 58.02%. These results highlight that no single combination universally outperforms the others, underscoring the need for empirical evaluation. Nonetheless, combining MAD normalization with metrics such as Canberra or Hassanat can serve as a strong starting point for developing KNN-based classification systems, especially in medical contexts that are sensitive to misclassification.
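
The normalization-by-metric grid described above can be sketched as follows. Synthetic data stands in for the Pima Indians Diabetes set, a fixed k replaces the cross-validated one, and the MAD scaler is a hand-rolled assumption of the paper's formulation; the Hassanat, Lorentzian, and Clark distances are not built into scikit-learn and would require custom metric callables, so they are omitted here.

```python
# Sketch: cross normalization methods with KNN distance metrics and compare F1-scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler, StandardScaler

def mad_scale(train, test):
    """Scale by median and median absolute deviation (assumed MAD formulation)."""
    med = np.median(train, axis=0)
    mad = np.median(np.abs(train - med), axis=0) + 1e-9
    return (train - med) / mad, (test - med) / mad

# Synthetic stand-in with the same shape as Pima (768 observations, 8 features)
X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

scalers = {"zscore": StandardScaler(), "minmax": MinMaxScaler(), "mad": None}
for name, scaler in scalers.items():
    if scaler is None:
        tr, te = mad_scale(X_tr, X_te)
    else:
        tr, te = scaler.fit_transform(X_tr), scaler.transform(X_te)
    for metric in ["euclidean", "manhattan", "chebyshev", "canberra"]:
        knn = KNeighborsClassifier(n_neighbors=11, metric=metric).fit(tr, y_tr)
        print(name, metric, round(f1_score(y_te, knn.predict(te)), 3))
```
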
Analysis and Optimization of Rainfall Prediction in Makassar City Using Artificial Neural Networks Based on Data Augmentation, Regularization, and Bayesian Optimization Abdullah, Adib Roisilmi; Sadik, Kusman; Suhaeni, Cici; Saleh, Agus Muhammad
Journal of Mathematics, Computations and Statistics Vol. 8 No. 2 (October 2025)
Publisher : Jurusan Matematika FMIPA UNM

DOI: 10.35580/jmathcos.v8i2.8304

Abstract

This study develops a robust and efficient rainfall prediction model using an Artificial Neural Network (ANN), enhanced through integrated data augmentation, regularization, and Bayesian optimization. We used a dataset of 118 monthly rainfall records from Makassar City, spanning 2014–2022, sourced from the Meteorological, Climatological, and Geophysical Agency (BMKG). To capture the inherent temporal patterns, lag features (lag-1, lag-3, and lag-6 rainfall values) were constructed as input variables. Min-Max normalization was then applied to all features to ensure input consistency and support the ANN's learning process. An initial manual grid search identified the most effective baseline ANN architecture, with four hidden layers ([128, 32, 16, 64] neurons), a tanh activation function, and a learning rate of 0.01. While the baseline ANN achieved an initial RMSE of 0.1608, further experiments examined a fully integrated approach that combined data augmentation (to address data limitations and enhance generalization), regularization (to mitigate overfitting), and Bayesian optimization (for efficient hyperparameter tuning), and demonstrated improved generalization and model stability. The integrated model yielded an RMSE of 0.1861, an MSE of 0.0346, and an MAE of 0.1359. These findings underscore that integrated optimization strategies are important for developing more robust and reliable ANN-based rainfall prediction models, particularly for climate-based time series forecasting.
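
A minimal sketch of the baseline pipeline described above: lag-1/3/6 features from a monthly rainfall series, Min-Max scaling, and a tanh ANN with [128, 32, 16, 64] hidden layers and a learning rate of 0.01. A synthetic series replaces the BMKG records, and the augmentation, regularization, and Bayesian optimization steps are left out.

```python
# Sketch: lag features + Min-Max scaling + the baseline ANN architecture from the abstract.
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
rain = pd.Series(rng.gamma(2.0, 80.0, size=118), name="rainfall")  # dummy monthly series

# Lagged rainfall values as inputs, current month as target
df = pd.DataFrame({"lag1": rain.shift(1), "lag3": rain.shift(3),
                   "lag6": rain.shift(6), "y": rain}).dropna()
X = MinMaxScaler().fit_transform(df[["lag1", "lag3", "lag6"]])
y = MinMaxScaler().fit_transform(df[["y"]])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=50, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, RMSE] on the (scaled) training data
```
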
Comparison of LASSO, Ridge, and Elastic Net Regularization with Balanced Bagging Classifier Nisrina Az-Zahra, Putri; Sadik, Kusman; Suhaeni, Cici; Mohamad Soleh, Agus
Parameter: Jurnal Matematika, Statistika dan Terapannya Vol 4 No 2 (2025)
Publisher : Jurusan Matematika FMIPA Universitas Pattimura

DOI: 10.30598/parameterv4i2pp287-296

Abstract

Predicting Drug-Induced Autoimmunity (DIA) is crucial in pharmaceutical safety assessment, as early identification of compounds with autoimmune risk can prevent adverse drug reactions and improve patient outcomes. Classification analysis often faces challenges when the number of predictor variables exceeds the number of observations or when high correlations among predictors lead to multicollinearity and overfitting. Regularization methods such as Ridge Regression, the Least Absolute Shrinkage and Selection Operator (LASSO), and Elastic Net help stabilize parameter estimation and improve model interpretability. This study builds a binary classification model to predict the risk of DIA using 196 molecular descriptors derived from chemical compound structures. To address class imbalance in the response variable, the Balanced Bagging Classifier (BBC) is combined with the regularized logistic regression models. Elastic Net + BBC outperforms the other models with the highest accuracy (0.825), followed closely by LASSO + BBC and Ridge + BBC (both 0.816). This integration not only improves classification accuracy but also enhances generalization and the reliable detection of minority-class instances, supporting the early identification of autoimmune risk in drug discovery.
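
The combination of regularized logistic regression with a Balanced Bagging Classifier can be sketched as below. This assumes imbalanced-learn 0.10 or later (where the constructor argument is `estimator`) and synthetic imbalanced data in place of the 196 molecular descriptors; the regularization strengths are left at library defaults rather than the study's tuned values.

```python
# Sketch: Ridge/LASSO/Elastic Net logistic regression wrapped in a Balanced Bagging Classifier.
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced stand-in for the DIA descriptor data
X, y = make_classification(n_samples=500, n_features=196, n_informative=30,
                           weights=[0.8, 0.2], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

base_models = {
    "ridge": LogisticRegression(penalty="l2", solver="saga", max_iter=5000),
    "lasso": LogisticRegression(penalty="l1", solver="saga", max_iter=5000),
    "elasticnet": LogisticRegression(penalty="elasticnet", l1_ratio=0.5,
                                     solver="saga", max_iter=5000),
}
for name, base in base_models.items():
    # BBC resamples each bootstrap sample to a balanced class ratio before fitting
    clf = BalancedBaggingClassifier(estimator=base, n_estimators=10, random_state=1)
    clf.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, clf.predict(X_te)), 3))
```
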
Evaluating Random Forest and XGBoost for Bank Customer Churn Prediction on Imbalanced Data Using SMOTE and SMOTE-ENN Andespa, Reyuli; Sadik, Kusman; Suhaeni, Cici; Soleh, Agus M
MEDIA STATISTIKA Vol 18, No 1 (2025)
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/medstat.18.1.25-36

Abstract

The banking industry faces significant challenges in retaining customers, as churn can critically affect both revenue and reputation. This study introduces a robust churn prediction framework by comparing the performance of XGBoost and Random Forest algorithms under imbalanced data conditions. The novelty of this research lies in integrating the SMOTE and SMOTE-ENN techniques with machine learning algorithms to enhance model performance and reliability on highly imbalanced datasets. Unlike conventional approaches that rely solely on oversampling or undersampling, this study demonstrates that the hybrid combination of XGBoost and SMOTE provides superior predictive accuracy, stability, and efficiency. Hyperparameter optimization using GridSearchCV was conducted to identify the most effective parameter configurations for both algorithms. Model performance was evaluated using the F1-Score and Area Under the Curve (AUC). The results indicate that XGBoost with SMOTE achieved the best performance, with an F1-Score of 0.8730 and an AUC of 0.9828, showing an optimal balance between precision and recall. Feature importance analysis identified Months_Inactive_12_mon, Total_Trans_Amt, and Total_Relationship_Count as the most influential predictors. Overall, this approach outperforms traditional resampling and modeling techniques, providing practical insights for data-driven customer retention strategies in the banking industry.
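
The best-performing combination reported above (XGBoost with SMOTE) can be sketched as follows, on synthetic imbalanced data. The GridSearchCV tuning, the SMOTE-ENN variant, and the bank churn features themselves are not reproduced, and the hyperparameters shown are illustrative assumptions.

```python
# Sketch: oversample the minority class with SMOTE, train XGBoost, report F1 and AUC.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic imbalanced stand-in for the churn dataset
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.85, 0.15],
                           random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

# Oversample only the training split so the test set keeps its original imbalance
X_res, y_res = SMOTE(random_state=7).fit_resample(X_tr, y_tr)

model = XGBClassifier(n_estimators=300, learning_rate=0.1, eval_metric="logloss",
                      random_state=7)
model.fit(X_res, y_res)

print("F1 :", round(f1_score(y_te, model.predict(X_te)), 4))
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 4))
```
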
Evaluating Fasttext and Glove Embeddings for Sentiment Analysis of AI-Generated Ghibli-Style Images Sentana Putra, I Gusti Ngurah; Yusran, Muhammad; Sari, Jefita Resti; Suhaeni, Cici; Sartono, Bagus; Dito, Gerry Alfa
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam


Abstract

The development of text-to-image generation technology based on artificial intelligence has triggered mixed public reactions, especially when applied to iconic visual styles such as Studio Ghibli. This research aims to evaluate public sentiment towards the phenomenon of Ghibli-style AI images by comparing two static word embedding methods, namely FastText and GloVe, on three classification algorithms: Logistic Regression, Random Forest, and Convolutional Neural Network (CNN). Data in the form of Indonesian tweets were collected from Twitter using hashtags such as #ghibli, #ghiblistyle, and #hayaomiyazaki during the period 25 March to 25 April 2025. Each tweet was manually labelled with positive or negative sentiment, then preprocessed and represented using pre-trained FastText and GloVe embeddings. Evaluation was conducted using accuracy, precision, recall, and F1-score metrics, both macro and weighted. Results showed that FastText consistently performed the best on most models, especially in terms of precision and overall accuracy, thanks to its ability to handle sub-word information and spelling variations in social media texts. The combination of CNN with FastText yielded the highest performance with a macro F1-score of 76.56% and accuracy of 84.69%. However, GloVe still showed competitive performance in recall on the Logistic Regression model, making it relevant for contexts that prioritise sentiment detection coverage. This study emphasizes the importance of selecting embeddings and models that are appropriate to the characteristics of the data and the purpose of the analysis in informal social media-based sentiment classification.
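
A minimal sketch of the static-embedding pipeline described above: average each tweet's word vectors and train a classifier on the pooled representation. It assumes gensim's downloadable "glove-twitter-100" vectors as a stand-in for the study's pre-trained Indonesian FastText/GloVe embeddings, English toy sentences, and Logistic Regression only; the Random Forest and CNN branches are analogous.

```python
# Sketch: mean-pooled static word embeddings feeding a Logistic Regression classifier.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

wv = api.load("glove-twitter-100")  # downloads the pre-trained vectors on first use

def embed(text: str) -> np.ndarray:
    """Mean-pool the vectors of in-vocabulary tokens; zeros if none are found."""
    vecs = [wv[t] for t in text.lower().split() if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

# Toy labelled examples (illustrative, not the study's annotated Indonesian tweets)
texts = ["love the ghibli style art", "this ai image ruins the original artist work"]
labels = [1, 0]  # 1 = positive, 0 = negative

X = np.vstack([embed(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```
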