Ardi Pujiyanta
Program Studi Teknik Informatika Fakultas Teknologi Industri Universitas Ahmad Dahlan Yogyakarta Jl. Prof. Dr. Soepomo, S.H., Warungboto, Janturan, Yogyakarta 55164 Telp : (0274) 563515 Ext. 3208

Published : 46 Documents
Articles

Found 3 Documents
Journal : International Journal of Advances in Intelligent Informatics

Resource allocation model for grid computing environment
Ardi Pujiyanta; Lukito Edi Nugroho; Widyawan Widyawan
International Journal of Advances in Intelligent Informatics Vol 6, No 2 (2020): July 2020
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v6i2.496

Abstract

Grid computing is a collection of heterogeneous resources that are highly dynamic and unpredictable. It is typically used to solve scientific or technical problems that require large numbers of processor cycles or access to substantial amounts of data. Various resource allocation strategies have been used to make resource use more productive and thereby improve the performance of the distributed environment. A user submits a job together with a predetermined time limit for running it. The scheduler then prioritizes the job according to the request and the scheduling policy and places it in the waiting queue. When a resource is released, the scheduler selects a job from the waiting queue with a specific algorithm. A request is rejected if the required resources are not available, and the user can re-submit a new request with modified parameters until available resources are found. As a consequence, idle gaps between jobs grow, resource utilization falls, and waiting time increases. An effective scheduling policy is therefore required to improve resource utilization and reduce waiting times. In this paper, the FCFS-LRH method is proposed: incoming jobs are sorted by arrival time, execution time, and the number of resources needed; after sorting, each job is placed in a logical view and is dispatched to the actual resource only when it executes. Experimental results show that the proposed model increases resource utilization by 1.34% and reduces waiting time by 20.47% compared with existing approaches. This finding could be usefully applied to resource allocation management in cloud systems.
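As a rough sketch only (not the authors' implementation, whose details are in the paper), the sorting step the abstract describes, ordering the waiting queue by arrival time, then execution time, then resource demand, could look like this; the `Job` fields are assumed names:

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    arrival_time: int    # when the job enters the queue
    execution_time: int  # requested run length
    resources: int       # number of resources requested

def sort_jobs(jobs):
    """Order jobs by arrival time, then execution time,
    then resource demand, as the abstract describes."""
    return sorted(jobs, key=lambda j: (j.arrival_time,
                                       j.execution_time,
                                       j.resources))

jobs = [
    Job("B", 0, 5, 2),
    Job("A", 0, 3, 1),
    Job("C", 2, 1, 4),
]
ordered = sort_jobs(jobs)
print([j.job_id for j in ordered])  # ['A', 'B', 'C']
```

Jobs A and B tie on arrival time, so the shorter execution time breaks the tie; the sorted queue would then feed the logical view before dispatch to real resources.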
Job scheduling reservations on cloud resources
Pujiyanta, Ardi; Noviyanto, Fiftin
International Journal of Advances in Intelligent Informatics Vol 10, No 3 (2024): August 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i3.1421

Abstract

Current work on cloud computing focuses largely on research problems, and one of the main problems in the cloud is job allocation: jobs are dynamically allocated to server processors, and all virtualized cloud hardware is available to users on demand and can be scaled dynamically. Resource scheduling is critical in cloud research because of its large execution times and resource costs. Differences in the scheduling criteria and parameters used lead to several categories of resource scheduling algorithms. The goal of resource scheduling is to identify the right resources on which to schedule workloads in a timely manner and to improve the effectiveness of resource utilization, in other words, to minimize workload completion time. Mapping the right workloads to resources results in good scheduling. Another goal is to identify adequate and appropriate workloads, so that scheduling of multiple workloads can meet various QoS requirements in cloud computing. The aim of this research is to determine the waiting time, idle time, and makespan on cloud resources. The proposed method sorts jobs by arrival time, taking those with the least workload first, and places them in a virtual view before scheduling them on cloud resources. Experimental results show that the proposed method has an idle time of 25.3%, compared with 43.1% for FCFS and 31.5% for backfilling. The average makespan reduction is 16.73% relative to FCFS and 12.87% relative to backfilling, and the average waiting time (AWT) decreases by 13.3% relative to FCFS and 12.03% relative to backfilling. The results of this research can be applied to cloud rentals with flexible times.
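The three metrics this abstract evaluates (waiting time, idle time, makespan) can be illustrated with a minimal single-resource FCFS baseline; this is a generic textbook sketch for intuition, not the paper's reservation method:

```python
def fcfs_metrics(jobs):
    """jobs: list of (arrival, burst) pairs.
    Runs them first-come-first-served on one resource and
    returns (average waiting time, total idle time, makespan)."""
    t = 0          # current clock
    idle = 0       # time the resource sits unused between jobs
    waits = []     # per-job waiting time (start - arrival)
    for arrival, burst in sorted(jobs):
        if t < arrival:          # resource idles until the job arrives
            idle += arrival - t
            t = arrival
        waits.append(t - arrival)
        t += burst               # job runs to completion
    return sum(waits) / len(waits), idle, t

avg_wait, idle, makespan = fcfs_metrics([(0, 3), (1, 2), (6, 1)])
print(avg_wait, idle, makespan)  # 0.666..., 1, 7
```

Here the third job arrives at t=6 while the resource frees up at t=5, producing one unit of idle time; reservation-style schedulers such as the one proposed aim to shrink exactly these idle gaps.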
Synergistic preprocessing approaches for improved time series analysis
Pranolo, Andri; Pujiyanta, Ardi; Supriyanto, Supriyanto
International Journal of Advances in Intelligent Informatics Vol 12, No 1 (2026): February 2026
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v12i1.2321

Abstract

This paper systematically evaluates the performance of an LSTM baseline model together with four smoothing augmentation methods (Kalman, Laplace, Moving Average, and Savitzky-Golay) under two normalization strategies (Min-Max and Z-Score) for multivariate time-series forecasting. Experiments were conducted on six publicly available datasets (electricity consumption, energy consumption, sensor data, household energy, Indian electricity, and Brazilian temperature), and model performance was compared using three metrics: MAPE, RMSE, and R². Results indicate that Laplace smoothing achieved the best performance on five datasets, effectively reducing errors while maintaining high fit quality and demonstrating its advantage in handling highly volatile and noisy time-series data. In some instances, however, Laplace smoothing, like the Moving Average and Savitzky-Golay methods, can produce an "over-smoothing" effect that causes forecasts to lose sensitivity to spike fluctuations. The choice of normalization strategy is equally critical: Min-Max is more suitable for data with stable distributions, while Z-Score offers greater advantages for data with large numerical ranges and significant volatility. Notably, on temperature datasets with small sample sizes and high volatility, complex smoothing methods actually degraded performance, making the baseline LSTM with Z-Score normalization the optimal choice; overall, however, the LSTM-Laplace model with Min-Max normalization achieves the best performance among the models. The study concludes that improving prediction performance relies not only on model architecture but also on matching the data's scale, distribution characteristics, and preprocessing strategy.
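The two normalization strategies compared in the abstract, plus the simplest of the four smoothing augmentations (a moving average), can be sketched as follows; this is a generic illustration of the standard formulas, not the authors' pipeline:

```python
import numpy as np

def min_max(x):
    """Min-Max: rescale to [0, 1]; suited to stably distributed series."""
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Z-Score: center and scale by the standard deviation; suited to
    series with large numerical ranges and high volatility."""
    return (x - x.mean()) / x.std()

def moving_average(x, w=3):
    """Moving-average smoothing over a window of w points; larger w
    smooths more but risks the 'over-smoothing' the abstract notes."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

x = np.array([1.0, 2.0, 4.0, 8.0])
print(min_max(x))         # endpoints map to 0.0 and 1.0
print(z_score(x).mean())  # ~0 after centering
print(moving_average(x))  # len(x) - w + 1 smoothed values
```

Each transform would be fitted on the training split only and applied to the test split, so that forecast-time data does not leak into the scaling statistics.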