Articles

Found 13 Documents
Penentuan Upah Minimum Kota Berdasarkan Tingkat Inflasi Menggunakan Backpropagation Neural Network (BPNN) Yohannes, Ervin; Mahmudy, Wayan Firdaus; Rahmi, Asyrofa
Jurnal Teknologi Informasi dan Ilmu Komputer Vol 2, No 1 (2015)
Publisher : Fakultas Ilmu Komputer

Abstract

The City Minimum Wage (Upah Minimum Kota, UMK) is a standard for employee wages or salaries applied in companies, whether state-owned (BUMN), privately owned (BUMS), or other large-scale enterprises. Many diverse factors influence the UMK; one of them is the average expenditure inflation, for which eight categories are used. This paper describes the use of a Backpropagation Neural Network (BPNN) to predict the UMK. In the experiments, the data are split into a training set and a test set, where the training set is used to find the optimal number of iterations, number of hidden layers, and learning rate. Testing on the training data showed that the optimal number of iterations is 80, the optimal number of hidden layers is one, and the optimal learning rate is 0.8. These values are considered optimal because they yield the smallest average MSE compared with the other settings. Running the test data with the optimal number of iterations, hidden layers, and learning rate produced an MSE of 0.07280534710552478.
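A minimal sketch of the kind of BPNN setup the abstract describes, using scikit-learn's MLPRegressor as a stand-in backpropagation network. The eight-column feature matrix, the UMK target, and the hidden-layer width of 8 units are placeholders and assumptions; only the single hidden layer, learning rate 0.8, and 80 iterations come from the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((100, 8))   # 8 expenditure-inflation categories (dummy data)
y = rng.random(100)        # normalized UMK values (dummy data)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# One hidden layer, learning rate 0.8, 80 iterations, as reported optimal in the abstract;
# the 8 hidden units are an illustrative guess.
model = MLPRegressor(hidden_layer_sizes=(8,), solver="sgd",
                     learning_rate_init=0.8, max_iter=80, random_state=0)
model.fit(X_train, y_train)
print("Test MSE:", mean_squared_error(y_test, model.predict(X_test)))
```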
Building Segmentation of Satellite Image based on Area and Perimeter using Region Growing Ervin Yohannes; Fitri Utaminingrum
Indonesian Journal of Electrical Engineering and Computer Science Vol 3, No 3: September 2016
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v3.i3.pp579-585

Abstract

A building can be recognized by its shape, color, and texture, and can be detected with many methods. Region growing is a simple segmentation method because it only requires a seed point. Before segmentation, the image is preprocessed with sharpening and with binarization using the Otsu method: sharpening clarifies the image, and the Otsu method converts it into values of 0 and 1. The next step is post-processing, which consists of segmentation using region growing followed by opening and closing operations. The last step is building detection, in which the detected buildings are marked. In this research, we present region growing for building segmentation using both area and perimeter as important variables in the region growing step. Regions with an area greater than 10 and a perimeter greater than 50 capture most of the buildings.
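A minimal sketch of the pipeline outlined above (Otsu binarization, opening and closing, region analysis, then filtering by area and perimeter), assuming scikit-image as the toolkit. The input file name is a placeholder, and connected-component labelling stands in for the seed-based region growing step described in the paper.

```python
import numpy as np
from skimage import io, color, filters, morphology, measure

# Load a satellite image (file name is illustrative) and convert to grayscale.
image = color.rgb2gray(io.imread("satellite.png"))

binary = image > filters.threshold_otsu(image)             # Otsu: image becomes 0/1
cleaned = morphology.closing(morphology.opening(binary))   # opening then closing

# Label connected regions (stand-in for seed-based region growing) and keep
# those with area > 10 and perimeter > 50, the thresholds named in the abstract.
labels = measure.label(cleaned)
buildings = [r for r in measure.regionprops(labels)
             if r.area > 10 and r.perimeter > 50]
print(f"{len(buildings)} candidate building regions")
```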
Traffic Sign Recognition Using Detector-Based Deep Learning Method Mulyono, Alfito; Ervin Yohannes
Indonesian Journal of Engineering and Technology (INAJET) Vol. 7 No. 1 (2024): September 2024
Publisher : Fakultas Teknik Universitas Negeri Surabaya

DOI: 10.26740/inajet.v7n1.p1-6

Abstract

Traffic is an integral part of urban life and a key element in the transportation system. Traffic safety is a major concern for preventing accidents and ensuring safe mobility, yet road accidents continue to increase, partly because of people's limited knowledge of traffic rules. One way to address this problem is to improve that knowledge, for example through automatic recognition of traffic signs. The application of artificial intelligence, especially object detection with detector-based deep learning methods, has proven efficient for detecting objects in real time. In this research, object recognition is performed using SSD (Single Shot MultiBox Detector), where the model is trained and its performance tested on Indonesian traffic signs. The results show mAP 50 and mAP 50-95 values of 89.66% and 65.49%, respectively.
Keywords: Deep Learning, SSD, Traffic Signs.
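A hedged sketch of running a detector-based model of the kind the abstract names, using torchvision's COCO-pretrained SSD300-VGG16 for inference only. The paper fine-tunes SSD on Indonesian traffic signs, so the pretrained weights, the 0.5 score threshold, and the file name here are stand-in assumptions.

```python
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# COCO-pretrained SSD as a placeholder for the fine-tuned traffic-sign detector.
weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()

img = convert_image_dtype(read_image("traffic.jpg"), torch.float)  # illustrative file name
with torch.no_grad():
    detections = model([img])[0]            # dict with boxes, labels, scores

keep = detections["scores"] > 0.5           # keep confident detections (assumed threshold)
print(detections["boxes"][keep], detections["labels"][keep])
```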
Clustering of Human Hand on Depth Image using DBSCAN Method Yohannes, Ervin; Utaminingrum, Fitri; Shih, Timothy K.
Journal of Information Technology and Computer Science Vol. 4 No. 2: September 2019
Publisher : Faculty of Computer Science (FILKOM) Brawijaya University

DOI: 10.25126/jitecs.201942133

Abstract

In recent years, depth images have become a popular research topic in image processing, especially in the clustering field. Depth images can be captured by depth cameras such as the Kinect, Intel RealSense, and Leap Motion. Many objects and methods can be studied in clustering problems. One popular object is the human hand, since it has many functions and is an important part of the human body for daily routines. Clustering methods have also been developed for various goals and are often combined with other methods. One clustering method is Density-Based Spatial Clustering of Applications with Noise (DBSCAN), an automatic clustering method defined by a minimum number of points and an epsilon value. Defining the epsilon in DBSCAN is important because the result depends on it. We look for the best epsilon for clustering human hands in depth images, evaluating epsilon values from 5 to 100, and each epsilon is tested at three distances to obtain accurate results.
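A minimal sketch of the epsilon sweep described above, using scikit-learn's DBSCAN. The (x, y, depth) point cloud and the min_samples value are dummy assumptions; the epsilon range of 5 to 100 follows the abstract.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
depth_points = rng.random((500, 3)) * 200        # (x, y, depth) of a hand region (dummy data)

for eps in range(5, 101, 5):                     # sweep epsilon from 5 to 100
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(depth_points)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # -1 marks noise points
    print(f"eps={eps:3d} -> {n_clusters} clusters")
```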
Explainable Artificial Intelligence (XAI) for Identification of Using Obesity Factors Hybrid Artificial Neural Network Approach and SHapley Additive exPlanations Esti, Esti Yogiyanti; Yuni Yamasari; Ervin Yohannes
JIEET (Journal of Information Engineering and Educational Technology) Vol. 9 No. 1 (2025)
Publisher : Universitas Negeri Surabaya

DOI: 10.26740/jieet.v9n1.p19-27

Abstract

This study aims to develop and evaluate an obesity classification model using an Artificial Neural Network (ANN) combined with Explainable Artificial Intelligence (XAI) techniques based on SHAP (SHapley Additive exPlanations). The model was trained and tested using two different optimizers, Adaptive Moment Estimation (Adam) and Stochastic Gradient Descent (SGD), across multiple train-test ratios and epoch variations. The experimental results indicate that the Adam optimizer consistently outperformed SGD in terms of accuracy, loss value, and stability of evaluation metrics. The best performance was achieved with a 90:10 train-test ratio at 100 epochs, yielding an accuracy of 94.74%, a loss of 0.1899, and precision, recall, and F1-score of 0.95. To improve interpretability, SHAP was applied to identify the most influential features in the classification process. The analysis revealed that features such as Weight, Height, Gender, and Age contribute significantly to the model's predictions. Based on the SHAP interpretation, feature selection was conducted using the top nine features with the highest SHAP values. Retraining the ANN with these selected features improved performance, achieving 98.56% accuracy, a loss of 0.0638, and precision, recall, and F1-score of 0.99. These findings demonstrate that integrating XAI with ANN not only enhances transparency and interpretability but also boosts classification performance and computational efficiency. This approach shows strong potential for supporting decision-making in healthcare, particularly for early detection and intervention in cases related to obesity.
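A hedged sketch of pairing a small Keras ANN with SHAP in the spirit of the abstract. The feature matrix, labels, layer sizes, and the use of KernelExplainer are illustrative assumptions; the Adam optimizer, 100 epochs, and the selection of the top nine features follow the abstract.

```python
import numpy as np
import shap
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((200, 16)).astype("float32")     # 16 hypothetical obesity-related features (dummy)
y = rng.integers(0, 2, 200)                     # binary obesity label (dummy)

# Small ANN; the architecture is an assumption, the Adam optimizer follows the abstract.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=100, verbose=0)

# Explain predictions with SHAP and rank features by mean |SHAP value|.
explainer = shap.KernelExplainer(model.predict, X[:25])
shap_values = explainer.shap_values(X[:10])
importance = np.abs(np.asarray(shap_values)).reshape(-1, 16).mean(axis=0)
print("Top nine feature indices:", np.argsort(importance)[::-1][:9])
```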
Semantic Segmentation Using the U-Net Architecture on Monocular Datasets Ahmad Fikri Hanafi; Ervin Yohannes
Journal of Informatics and Computer Science (JINACS) Vol. 7 No. 01 (2025)
Publisher : Universitas Negeri Surabaya

DOI: 10.26740/jinacs.v7n01.p37-42

Abstract

This study implements a deep learning model based on the U-Net architecture with a ResNet50 backbone pre-trained on ImageNet to solve the task of semantic segmentation on monocular images. The Cityscapes dataset is used as the main benchmark because it provides high-quality, high-resolution data that is widely recognized in urban image segmentation research. Experiments were conducted to evaluate the model's performance with varying learning rate values, aiming to understand the model's sensitivity to training parameters. The results show that a learning rate of 1e-4 yields optimal performance, achieving a Mean Intersection over Union (Mean IoU) of 86.59% and pixel accuracy of 97.63%. Visualization of the segmentation predictions demonstrates the model's ability to accurately recognize urban objects and structures, especially under varying lighting conditions and background complexity. These findings confirm the effectiveness of U-Net in image segmentation tasks, as well as the importance of hyperparameter selection and dataset quality in achieving high model performance in the monocular image domain.
Keywords: Convolutional Neural Network, Deep Learning, U-Net, Encoder-Decoder, Semantic Segmentation
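A minimal sketch of the architecture described above, assuming the segmentation_models_pytorch library (the paper does not name its framework): a U-Net decoder with an ImageNet-pretrained ResNet50 encoder, optimized with Adam at the reported learning rate of 1e-4. The 19 output classes correspond to the standard Cityscapes evaluation set, which is an assumption here.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet50 encoder pre-trained on ImageNet, as in the abstract.
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=3,
    classes=19,                 # standard Cityscapes evaluation classes (assumed)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # best learning rate per the abstract
loss_fn = torch.nn.CrossEntropyLoss()

dummy = torch.randn(1, 3, 256, 512)      # downscaled Cityscapes-like input (dims divisible by 32)
logits = model(dummy)                    # per-pixel class scores: (1, 19, 256, 512)
print(logits.shape)
```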
Pre-Trained Convolutional Neural Network Benchmark For Multi-Class Weather Modeling Ramadhany, Sinta Dhea; Yohannes, Ervin
Journal of Informatics and Computer Science (JINACS) Article In Press(1)
Publisher : Universitas Negeri Surabaya

Abstract

Weather forecasting plays a crucial role in reducing the risks of extreme events that threaten human safety, economic stability, and the environment. Traditional forecasting methods relying on manual observation have developed into modern approaches using satellite, radar, and computational models; however, prediction accuracy remains limited due to the complexity of atmospheric systems and data constraints. In this context, deep learning, particularly Convolutional Neural Networks (CNNs), provides significant potential for automatic weather classification from digital imagery. This study evaluates and compares the performance of four pre-trained CNN architectures (VGG16, ResNet50, AlexNet, and InceptionV3) on the Kaggle “Multi-class Weather Dataset,” which contains 860 images categorized into four classes: Cloudy, Shine, Rain, and Sunrise. The methodology involves data augmentation, fine-tuning, and systematic experimentation with various hyperparameters and data split ratios to enhance model generalization. The evaluation metrics applied include accuracy, precision, recall, and F1-score. Experimental results reveal that InceptionV3 outperforms the other models, achieving up to 98% training accuracy and 96% validation accuracy due to its effective multi-scale feature extraction and regularization. ResNet50 delivers balanced results with validation accuracy up to 94%, while AlexNet records relatively high detection counts but lower overall performance. In contrast, VGG16 yields the lowest accuracy among the tested models. These findings highlight InceptionV3 as the most robust architecture for weather image classification and emphasize the importance of model selection in balancing prediction accuracy and computational efficiency. The study contributes a foundation for the development of deep learning-based weather recognition systems that can support early warning applications and disaster risk reduction.
Keywords: Convolutional Neural Network, Weather Classification, ResNet50, VGG16, AlexNet, InceptionV3
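A hedged sketch of fine-tuning the best-performing backbone from the benchmark (InceptionV3) with torchvision for the four weather classes. The learning rate, auxiliary-loss weight, and dummy batch are illustrative assumptions, and data loading and augmentation are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained InceptionV3 with the classifier heads replaced for 4 weather classes.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)                      # Cloudy, Shine, Rain, Sunrise
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 4)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)          # assumed learning rate
criterion = nn.CrossEntropyLoss()

model.train()
x = torch.randn(2, 3, 299, 299)          # InceptionV3 expects 299x299 inputs (dummy batch)
y = torch.tensor([0, 3])                 # dummy class labels
out, aux = model(x)                      # main and auxiliary logits in training mode
loss = criterion(out, y) + 0.4 * criterion(aux, y)   # 0.4 aux weight is an assumption
loss.backward()
optimizer.step()
```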
A COMPARATIVE STUDY OF SUPERVISED FEATURE SELECTION METHODS FOR PREDICTING UANG KULIAH TUNGGAL (UKT) GROUPS Putri, Windy Chikita Cornia; Yustanti, Wiyli; Yohannes, Ervin
J-Icon : Jurnal Komputer dan Informatika Vol 13 No 2 (2025): October 2025
Publisher : Universitas Nusa Cendana

DOI: 10.35508/jicon.v13i2.23893

Abstract

The manual classification of Uang Kuliah Tunggal (UKT) groups at Indonesian public universities is laborious, subjective, and error-prone, especially given the explosion of socio-economic data captured via online admission portals. In this study, we evaluate five feature selection techniques (Chi-Square filter, Random Forest importance, Recursive Feature Elimination, LASSO embedded selection, and Exploratory Factor Analysis) on a dataset of 9,369 applicants described by 53 socio-economic variables. Six classifiers (Decision Tree, Random Forest, SVM-RBF, K-Nearest Neighbor, and Naïve Bayes) were tuned via stratified 5-fold cross-validation within an 80:20 train-test split. Performance was measured by accuracy, macro-F1, and training time, and differences in weighted-average accuracy across feature-selection scenarios were assessed using the Friedman test (χ² = 15.06, p = 0.010). Results show that reducing to 13 features via LASSO (weighted-average accuracy 0.730) or Chi-Square (0.678) significantly outperforms both the full-feature baseline (0.624) and the EFA baseline (0.303), while cutting computational costs by over 40%. We conclude that supervised feature selection, particularly LASSO and Chi-Square, enables simpler, faster, and more transparent UKT prediction without sacrificing accuracy. The novelty of this study lies in comparing five feature-selection methods within a standardized preprocessing pipeline on real UKT data from UNESA, resulting in a 13-feature subset aligned with the current UKT policy. This finding is ready to be integrated into an automated UKT verification system to enhance decision accuracy and efficiency.
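A minimal sketch of two of the compared selectors (the Chi-Square filter and a LASSO-style embedded selector) reduced to 13 features, as the abstract reports. The socio-economic matrix, the labels, and the use of L1-penalized logistic regression as the embedded estimator are dummy assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, SelectFromModel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((9369, 53))           # 53 socio-economic variables (dummy, non-negative for chi2)
y = rng.integers(1, 8, 9369)         # UKT group label (dummy)

# Chi-Square filter: keep the 13 features with the highest chi2 statistic.
chi_selected = SelectKBest(chi2, k=13).fit(X, y)
print("Chi-Square picks:", np.flatnonzero(chi_selected.get_support()))

# LASSO-style embedded selection via an L1-penalized logistic regression (assumed estimator).
lasso = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    max_features=13, threshold=-np.inf,
).fit(X, y)
print("LASSO picks:", np.flatnonzero(lasso.get_support()))
```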
Analysis of the Application of Machine Learning Algorithms for Classification of Toddler Nutritional Status Based on Anthropometric Data Yamasari, Yuni; Yogiyanti, Esti; Yohannes, Ervin
Indonesian Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol. 7 No. 4 (2025): November
Publisher : Jurusan Teknik Elektromedik, Politeknik Kesehatan Kemenkes Surabaya, Indonesia

DOI: 10.35882/ijeeemi.v7i4.110

Abstract

The rapid advancement of technology requires appropriate strategies to achieve accurate and optimal results. Among these, machine learning has become one of the most widely applied technologies across various domains, including healthcare, due to its ability to process large volumes of data and produce reliable predictions. One critical health problem that can benefit from these approaches is malnutrition among toddlers, which continues to pose challenges to growth, development, and long-term well-being. This analysis aims to identify the most effective and efficient algorithms for classifying the nutritional status of toddlers based on anthropometric data. The review is grounded in relevant journal articles aligned with the research topic, which serve as the primary sources for synthesis. The selected studies underwent four stages (identification, selection, evaluation, and analysis) to ensure both credibility and reliability. The analysis focuses on three main aspects: dataset characteristics, algorithms applied, and outcomes reported. Based on algorithm usage, three implementation strategies were identified: single model, multi-model, and model combination. The overall findings reveal that studies utilizing datasets with fewer than 500 records can effectively apply algorithms such as Random Forest, Decision Tree, and Naïve Bayes Classifier, which consistently achieve accuracy rates above 90%. For datasets exceeding 10,000 records, the XGBoost algorithm is recommended due to its scalability and efficiency in handling large-scale data. For datasets ranging between 500 and 10,000 records, hybrid approaches such as the C4.5 algorithm combined with Particle Swarm Optimization are preferable, with previous studies demonstrating an accuracy of 94.49%. This review highlights that algorithm selection should be adjusted according to dataset size and clinical needs. The findings provide valuable insights to support researchers, practitioners, and policymakers in developing accurate and effective solutions for toddler nutrition assessment.
Hybrid Autoencoder Architectures with LSTM and GRU Layers for Bitcoin Price Prediction Yamasari, Yuni; Nafisah, Nurun; Yohannes, Ervin
Indonesian Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol. 7 No. 4 (2025): November
Publisher : Jurusan Teknik Elektromedik, Politeknik Kesehatan Kemenkes Surabaya, Indonesia

DOI: 10.35882/ijeeemi.v7i4.132

Abstract

The high volatility of cryptocurrency markets, particularly Bitcoin, poses significant challenges for accurate price forecasting. To address this issue, this study evaluates the performance of four autoencoder-based deep learning architectures: AE-LSTM, AE-GRU, AE-LSTM-GRU, and AE-GRU-LSTM. The models were developed and tested using a univariate approach, where only the closing price was used as input, and two different window sizes (30 and 60) were applied to analyse the effect of historical sequence length on prediction accuracy. Several parameter configurations, including the number of epochs, dropout rate, and learning rate, were explored to determine the optimal model performance. The dataset comprises Bitcoin’s daily closing prices from 2018 to 2025, encompassing diverse market phases, including both bullish and bearish trends. Model performance was assessed using four evaluation metrics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R²), and Mean Absolute Percentage Error (MAPE). The results indicate that the AE-LSTM-GRU model consistently achieved the best overall performance across all configurations. For a window size of 30, it achieved an RMSE of 1.53067 and a MAPE of 1.98%, while for a window size of 60, the best performance recorded was an RMSE of 1.55217 and a MAPE of 2.09%. The hybrid structure, combining LSTM’s capability to capture long-term dependencies with GRU’s efficiency in information decoding, demonstrated strong robustness in modelling highly volatile time series. This study contributes to financial time series forecasting by presenting hybrid autoencoder architectures that strike a balance between predictive accuracy and computational efficiency, providing practical insights for researchers and practitioners in financial technology and cryptocurrency analytics.
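A hedged sketch of the AE-LSTM-GRU idea: an LSTM encoder compresses a 30-step window of closing prices into a latent vector, and a GRU decoder reads it back out before a dense forecasting head. Layer widths, the dropout rate, and the forecast head are assumptions; only the univariate closing-price input and the window size of 30 come from the abstract.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

window = 30                                      # shorter of the two window sizes in the abstract
model = models.Sequential([
    layers.Input(shape=(window, 1)),             # univariate closing-price window
    layers.LSTM(64, return_sequences=False),     # LSTM encoder -> latent vector
    layers.RepeatVector(window),                 # expand latent vector back to sequence length
    layers.GRU(64, return_sequences=False),      # GRU decoder summarises the sequence
    layers.Dropout(0.2),                         # dropout rate is an assumption
    layers.Dense(1),                             # next-day closing price
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

X = np.random.rand(256, window, 1)               # dummy scaled price windows
y = np.random.rand(256, 1)                       # dummy next-day targets
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```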