Articles

Found 3 Documents
Journal : Journal of Informatics and Computer Science (JINACS)

Semantic Segmentation Using the U-Net Architecture on Monocular Datasets
Ahmad Fikri Hanafi; Ervin Yohannes
Journal of Informatics and Computer Science (JINACS) Vol. 7 No. 01 (2025)
Publisher : Universitas Negeri Surabaya

DOI: 10.26740/jinacs.v7n01.p37-42

Abstract

This study implements a deep learning model based on the U-Net architecture with a ResNet50 backbone pre-trained on ImageNet to solve the task of semantic segmentation on monocular images. The Cityscapes dataset is used as the main benchmark because it provides high-resolution, high-quality data that is widely recognized in urban image segmentation research. Experiments were conducted to evaluate the model's performance with varying learning rate values, aiming to understand the model's sensitivity to training parameters. The results show that a learning rate of 1e-4 yields optimal performance, achieving a Mean Intersection over Union (Mean IoU) of 86.59% and pixel accuracy of 97.63%. Visualization of the segmentation predictions demonstrates the model's ability to accurately recognize urban objects and structures, especially under varying lighting conditions and background complexity. These findings confirm the effectiveness of U-Net in image segmentation tasks, as well as the importance of hyperparameter selection and dataset quality in achieving high model performance in the monocular image domain.

Keywords— Convolutional Neural Network, Deep Learning, U-Net, Encoder-Decoder, Semantic Segmentation
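The two reported metrics are straightforward to compute from a predicted label map and its ground truth. A minimal NumPy sketch (function names and the toy arrays are illustrative, not taken from the paper):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == gt).mean())

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union, averaged over classes that occur
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                     # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes.
gt   = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(pixel_accuracy(pred, gt))           # 0.75 (3 of 4 pixels correct)
print(mean_iou(pred, gt, 2))              # mean of 1/2 and 2/3 ≈ 0.583
```

In practice `pred` would be the per-pixel argmax over the U-Net's class logits, and both metrics would be accumulated over the whole Cityscapes validation set.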
Pre-Trained Convolutional Neural Network Benchmark For Multi-Class Weather Modeling
Ramadhany, Sinta Dhea; Yohannes, Ervin
Journal of Informatics and Computer Science (JINACS) Vol. 7 No. 02 (2025)
Publisher : Universitas Negeri Surabaya


Abstract

Weather forecasting plays a crucial role in reducing the risks of extreme events that threaten human safety, economic stability, and the environment. Traditional forecasting methods relying on manual observation have developed into modern approaches using satellite, radar, and computational models; however, prediction accuracy remains limited due to the complexity of atmospheric systems and data constraints. In this context, deep learning, particularly Convolutional Neural Networks (CNNs), offers significant potential for automatic weather classification from digital imagery. This study evaluates and compares the performance of four pre-trained CNN architectures (VGG16, ResNet50, AlexNet, and InceptionV3) on the Kaggle “Multi-class Weather Dataset,” which contains 860 images categorized into four classes: Cloudy, Shine, Rain, and Sunrise. The methodology involves data augmentation, fine-tuning, and systematic experimentation with various hyperparameters and data split ratios to enhance model generalization. The evaluation metrics applied include accuracy, precision, recall, and F1-score. Experimental results reveal that InceptionV3 outperforms the other models, achieving up to 98% training accuracy and 96% validation accuracy thanks to its effective multi-scale feature extraction and regularization. ResNet50 delivers balanced results with validation accuracy up to 94%, while AlexNet records relatively high detection counts but lower overall performance. In contrast, VGG16 yields the lowest accuracy among the tested models. These findings highlight InceptionV3 as the most robust architecture for weather image classification and emphasize the importance of model selection in balancing prediction accuracy and computational efficiency. The study serves as a foundation for developing deep learning-based weather recognition systems that can support early warning applications and disaster risk reduction.
Keywords— Convolutional Neural Network, Weather Classification, ResNet50, VGG16, AlexNet, InceptionV3
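Accuracy, precision, recall, and F1-score for a four-class problem like this all derive from the confusion matrix. A small sketch in plain Python (the matrix counts below are made up for illustration, not results from the study):

```python
def per_class_metrics(cm, labels):
    """Precision, recall, and F1 per class from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    n = len(labels)
    metrics = {}
    for i, name in enumerate(labels):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp   # predicted class i, but wrong
        fn = sum(cm[i][c] for c in range(n)) - tp   # true class i, but missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics[name] = (precision, recall, f1)
    return metrics

labels = ["Cloudy", "Shine", "Rain", "Sunrise"]
cm = [  # hypothetical counts on a held-out split
    [28, 1, 1, 0],
    [2, 24, 0, 1],
    [1, 0, 20, 0],
    [0, 1, 0, 35],
]
accuracy = sum(cm[i][i] for i in range(4)) / sum(map(sum, cm))
print(round(accuracy, 3))                 # 0.939
for name, (p, r, f1) in per_class_metrics(cm, labels).items():
    print(name, round(p, 3), round(r, 3), round(f1, 3))
```

With 860 images split four ways, per-class counts are small, so macro-averaging these per-class scores (as libraries such as scikit-learn do) gives a fairer picture than accuracy alone.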
Comparative Analysis of Traditional Machine Learning Models (SVM, KNN, and Linear Regression) for KSE 100 Stock Price Forecasting
Febriansyah, Aldin; Ervin Yohannes
Journal of Informatics and Computer Science (JINACS) Vol. 7 No. 02 (2025)
Publisher : Universitas Negeri Surabaya


Abstract

The erratic volatility of stock prices presents a significant challenge for analysts and investors when making informed investment decisions. Although the Efficient Market Hypothesis suggests that price prediction is theoretically impossible, numerous studies indicate that predictive models can yield high-quality results. This research compares the effectiveness of three traditional machine learning algorithms, Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Linear Regression (LR), in forecasting the daily stock prices of the KSE 100 Index from the Pakistan Stock Exchange (PSX). The study utilized 3,221 daily closing prices recorded between February 22, 2008, and February 23, 2021. The models were implemented in Python and optimized through hyperparameter tuning using GridSearchCV. To ensure robust evaluation, multiple distinct data-splitting schemes were employed, including a chronological split reserving 2020 as the test period and proportional splits of 80:20, 75:25, and 70:30. Performance was measured using MSE, RMSE, MAE, MAPE, and accuracy. The findings reveal that Linear Regression (LR) consistently delivered the best and most stable performance across all testing schemes. LR achieved its highest accuracy of 97.9% and lowest error (MSE 0.000404) in the 70:30 split, while maintaining 97.3% accuracy on the 2020 test data. In contrast, KNN was the most sensitive model, with accuracy dropping to 92.2% in the 30% test scheme. These results underscore that LR is the most accurate and dependable option for stock price time-series prediction among these traditional models, proving that simpler models can remain highly competitive.

Keywords— Stock Price Forecasting, Machine Learning, Linear Regression (LR), Support Vector Machine (SVM), K-Nearest Neighbors (KNN)
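The four error metrics used to score the forecasts can be sketched in a few lines of plain Python (the toy closing prices are illustrative, not KSE 100 data):

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE, and MAPE (in percent) for a forecast vs. actuals."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    return {
        "MSE": mse,
        "RMSE": math.sqrt(mse),
        "MAE": sum(abs(e) for e in errors) / n,
        "MAPE": 100.0 * sum(abs(e) / abs(t) for e, t in zip(errors, y_true)) / n,
    }

# Toy daily closing prices vs. one-step-ahead forecasts.
y_true = [100.0, 102.0, 101.0]
y_pred = [99.0, 103.0, 101.0]
m = regression_metrics(y_true, y_pred)
print(round(m["MSE"], 4), round(m["RMSE"], 4), round(m["MAE"], 4))
```

The abstract does not define "accuracy" for this regression task; a common convention is 100% minus MAPE, but that is an assumption, not something the paper states.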