Articles

Found 2 Documents

Bayesian IGARCH Modeling of Jakarta Composite Index Volatility Using Hamiltonian Monte Carlo Algorithm
Maulana, Eka Dani; Sumarminingsih, Eni; Nurjannah; Astuti, Ani Budi; Astutik, Suci
Science and Technology Indonesia Vol. 11 No. 1 (2026): January
Publisher : Research Center of Inorganic Materials and Coordination Complexes, FMIPA Universitas Sriwijaya

DOI: 10.26554/sti.2026.11.1.261-279

Abstract

Volatility in financial data, especially in stock market indices such as the Jakarta Composite Index (JCI), is commonly modeled with Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models. Following the ratification of the revised Armed Forces Law in March 2025, the JCI exhibited increasing and persistent volatility. Such persistence calls for a time series model that captures it explicitly, namely the Integrated Generalized Autoregressive Conditional Heteroskedasticity (IGARCH) model. Parameter estimation for IGARCH models generally relies on Maximum Likelihood Estimation (MLE), which has limitations in handling parameter uncertainty. A Bayesian approach can address parameter uncertainty through Markov Chain Monte Carlo (MCMC) methods; among these, Hamiltonian Monte Carlo (HMC) is more efficient than Metropolis-Hastings and Gibbs sampling, particularly in exploring complex posterior distributions. This study uses daily JCI closing prices, observed from April 3, 2023, to April 9, 2025, as the main observation variable, and aims to construct a volatility model for the JCI using a Bayesian IGARCH model estimated with the HMC algorithm; the analysis is restricted to the IGARCH(1,1) specification. The fitted model captures the JCI's volatility structure well, and its point forecasts are stable. However, the credible intervals reveal the level of uncertainty, so the volatility of the JCI may decrease or increase.
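The IGARCH(1,1) recursion underlying the abstract can be sketched in a few lines. This is a minimal illustration of the conditional-variance recursion under the unit-persistence constraint α + β = 1, not the paper's Bayesian HMC estimation (which would place priors on ω and α and sample the posterior); the parameter values, the synthetic return series, and the sample-variance initialization are assumptions chosen only for demonstration.

```python
import numpy as np

def igarch11_variance(returns, omega, alpha):
    """Conditional variance recursion for IGARCH(1,1):
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    where the IGARCH constraint alpha + beta = 1 fixes beta, so
    shocks to volatility are fully persistent (they never die out)."""
    beta = 1.0 - alpha
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # a common (assumed) initialization
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=500)      # stand-in for JCI log returns
s2 = igarch11_variance(r, omega=1e-6, alpha=0.1)
```

In a Bayesian treatment, this recursion would sit inside the likelihood, with HMC exploring the joint posterior of (ω, α) subject to ω > 0 and 0 < α < 1.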
Block Bootstrap for Spatiotemporal Data in Generalized Space Time Autoregressive (GSTAR)
Sumarminingsih, Eni; Fitriani, Rahma; Darmanto; Maulana, Eka Dani; Aulia, Natasha; Ruszardi, Luzar Dwain
Science and Technology Indonesia Vol. 11 No. 2 (2026): April
Publisher : Research Center of Inorganic Materials and Coordination Complexes, FMIPA Universitas Sriwijaya

DOI: 10.26554/sti.2026.11.2.701-731

Abstract

The Generalized Space-Time Autoregressive (GSTAR) model can be used for data with both spatial and temporal dependence and is widely applied to phenomena such as rainfall, temperature, and inflation. GSTAR assumes normally distributed, non-autocorrelated errors; when the normality assumption is not met, inference on the parameters cannot be made. One solution to this problem is bootstrapping, but bootstrap methods for spatiotemporal data in the GSTAR model have not yet been developed. This study therefore develops such methods by adapting two bootstrap schemes for time series data: the non-overlapping block bootstrap (NBB) and the moving block bootstrap (MBB). A series of simulations evaluates the performance of the block bootstrap as the number of observations, the block length, and the number of bootstrap replications are varied, and the methods' effectiveness is then tested on rainfall data from Malang Regency. Simulation results show that both resampling schemes satisfy the asymptotic condition: the bias decreases monotonically as the sample size (T) and the block length increase. MBB consistently produces lower bias than NBB because its more intensive use of overlapping data effectively reduces boundary effects. Although inference on the autoregressive parameters can be accurate, inference on the spatial autoregressive parameters is less satisfactory, indicating the limitations of temporal blocks in capturing complex spatial dependencies. Increasing the number of replications beyond B = 100 does not significantly improve the precision of the variance estimate, indicating computational efficiency at that threshold. The t-test results confirm that there is no statistically significant difference in performance between NBB and MBB. Nevertheless, MBB is recommended for practical applications due to its higher information density and better estimation stability.
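The moving block bootstrap described above can be sketched as follows. This is a generic one-dimensional illustration of MBB resampling, not the paper's spatiotemporal adaptation for GSTAR residuals; the toy series, block length, and replicate count are assumptions for demonstration. Blocks are drawn with replacement from all overlapping windows (NBB would instead draw only from the non-overlapping partition), concatenated, and trimmed to the original length, so short-range temporal dependence is preserved within each block.

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    """Return one MBB replicate of a 1-D time series.

    Starting indices are sampled uniformly over all overlapping
    windows of length block_len; the chosen blocks are concatenated
    and trimmed back to the original series length."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(42)
x = np.sin(np.arange(120) / 5) + rng.normal(0.0, 0.1, 120)  # toy series
boot = moving_block_bootstrap(x, block_len=10, rng=rng)
```

In a simulation study like the one described, this resampling step would be repeated B times (e.g. B = 100) and the estimator recomputed on each replicate to approximate its sampling distribution.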