Contact Name
-
Contact Email
-
Phone
-
Journal Mail Official
-
Editorial Address
-
Location
Kota Bogor,
Jawa Barat
INDONESIA
FORUM STATISTIKA DAN KOMPUTASI
ISSN : 0853-8115     EISSN : -     DOI : -
Core Subject : Education
Forum Statistika dan Komputasi (ISSN: 0853-8115) publishes scientific papers in the area of statistical science and its applications. It is issued twice a year. The papers should be research papers on, but not limited to, the following topics: experimental design and analysis, survey methods and analysis, operations research, data mining, statistical modeling, computational statistics, time series and econometrics, and statistics education.
Arjuna Subject : -
Articles 119 Documents
MODELLING OF FORECASTING MONTHLY INFLATION BY USING VARIMA AND GSTARIMA MODELS Andi Setiawan; Muhammad Nur Aidi; I Made Sumertajaya
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

Model parameters may differ due to both time and location factors. The general GSTAR model can be used to model inflation at several locations through a GSTARIMA model when the time series contains autoregressive, differencing, and moving average components. This study examines whether the GSTARIMA model, which incorporates location effects, performs better than the VARIMA model, which ignores them. The aim of this study is to build two models of inflation for six provincial capitals in Java: a VARIMA model and a GSTARIMA model with inverse distance weighting. Dummy variables were used to overcome normality and white noise problems. The best forecast of monthly inflation in the provincial capitals on Java Island is given by the GSTAR(1;1) model with inverse distance weighting, which has the smallest RMSE value of 0.9199.
Keywords: GSTARIMA, Inverse Distance, RMSE, VARIMA
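As a rough illustration of the mechanics behind GSTAR(1;1) forecasting with inverse distance weighting, the sketch below builds a row-normalized inverse distance weight matrix and computes a one-step-ahead forecast; the coordinates, coefficients, and simulated inflation series are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical coordinates (km) of six locations and a toy monthly inflation series (T x 6).
coords = np.array([[0, 0], [50, 10], [120, 5], [200, 30], [260, 15], [320, 40]], dtype=float)
rng = np.random.default_rng(0)
z = rng.normal(0.4, 0.3, size=(60, 6))

# Inverse distance spatial weights: zero diagonal, rows normalized to sum to 1.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = np.where(d > 0, 1.0 / np.where(d == 0, np.inf, d), 0.0)
W = W / W.sum(axis=1, keepdims=True)

# GSTAR(1;1): z_t = Phi0 @ z_{t-1} + Phi1 @ (W @ z_{t-1}) + e_t,
# with Phi0, Phi1 diagonal (location-specific own-lag and spatial-lag coefficients).
phi0 = np.diag(np.full(6, 0.5))   # assumed own-lag coefficients
phi1 = np.diag(np.full(6, 0.3))   # assumed spatial-lag coefficients

z_last = z[-1]
forecast = phi0 @ z_last + phi1 @ (W @ z_last)   # one-step-ahead forecast
print(forecast)
```

In practice the diagonal coefficient matrices would be estimated from the data rather than fixed, and the model order would be chosen from the autocorrelation structure of the series.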
ALTERNATIVE SEMIPARAMETRIC ESTIMATION FOR NON-NORMALITY IN CENSORED REGRESSION MODEL WITH LARGE NUMBER OF ZERO OBSERVATION Andres Purmalino; Asep Saefuddin; Hari Wijayanto
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

A large number of zero observations on the response variable is often found in household demand models in the socio-economic field. This affects the method used to estimate the model parameters: ordinary least squares estimators of linear models become biased and inconsistent. One model that overcomes this is the censored regression model, also known as the Tobit model. However, under non-normality the Tobit estimators are inconsistent. An alternative estimator is censored least absolute deviations (CLAD), which is consistent and asymptotically normal for a wide class of distributions. This study focuses on the application of the Tobit and CLAD estimators to LPG demand. The data used are LPG expenditures in rural areas of West Java province, in which zero observations make up 39 percent of the sample. The results show that the CLAD and Tobit estimators are both consistent, but as the sample size increases the CLAD estimator performs better than the Tobit estimator.
Keywords: Zero observation, CLAD, Tobit, Consistent estimator, LPG demand
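A minimal sketch of the CLAD idea described above: minimize the sum of absolute deviations between the observed response and the censored prediction max(0, x'b). The simulated data and optimizer settings are illustrative assumptions, not the LPG expenditure data used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 500, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([-0.5, 1.0, 0.8])
y_star = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors
y = np.maximum(y_star, 0.0)                              # left-censoring at zero

def clad_loss(beta):
    # CLAD objective: sum_i |y_i - max(0, x_i' beta)|
    return np.abs(y - np.maximum(X @ beta, 0.0)).sum()

fit = minimize(clad_loss, x0=np.zeros(p + 1), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
print("CLAD estimates:", fit.x)
```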
NONLINEAR PRINCIPAL COMPONENT ANALYSIS AND PRINCIPAL COMPONENT ANALYSIS WITH SUCCESSIVE INTERVAL IN K-MEANS CLUSTER ANALYSIS Arista Marlince Tamonob; Asep Saefuddin; Aji Hamim Wigena
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

K-means cluster analysis is a clustering method for continuous variables based on Euclidean distance, which treats the observation variables as uncorrelated with each other. Correlated categorical data can be handled either by nonlinear principal component analysis or by converting the categorical data into numerical data with the method of successive intervals and then applying principal component analysis. Comparing the ratio of within-cluster to between-cluster variance for K-means clustering of poverty data from East Nusa Tenggara Province shows that principal component analysis with successive intervals yields a smaller variance ratio than nonlinear principal component analysis. The variables that affect cluster formation are toilet, fuel, and job.
Keywords: K-Means Cluster Analysis, Nonlinear Principal Component Analysis, Principal Component Analysis, Successive Interval
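The sketch below, using made-up ordinal data, follows the second route described above: score each category by the normal quantile of the midpoint of its cumulative proportion band (a simplified variant of the method of successive intervals), apply PCA, run K-means, and compute the within-to-between cluster variance ratio used for comparison.

```python
import numpy as np
from scipy.stats import norm
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
ordinal = rng.integers(1, 5, size=(300, 6))   # toy ordinal data, categories 1..4

def successive_interval(col):
    # Score each category by the normal quantile of the midpoint of its
    # cumulative proportion band (a simplified successive-interval scoring).
    cats, counts = np.unique(col, return_counts=True)
    props = counts / counts.sum()
    mid = np.cumsum(props) - props / 2.0
    score_map = dict(zip(cats, norm.ppf(mid)))
    return np.vectorize(score_map.get)(col)

numeric = np.column_stack([successive_interval(ordinal[:, j]) for j in range(ordinal.shape[1])])

scores = PCA(n_components=2).fit_transform(numeric)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)

# Ratio of within-cluster to between-cluster variance (smaller is better).
overall = scores.mean(axis=0)
within = sum(((scores[km.labels_ == k] - scores[km.labels_ == k].mean(axis=0)) ** 2).sum()
             for k in range(3))
between = sum((km.labels_ == k).sum() * ((scores[km.labels_ == k].mean(axis=0) - overall) ** 2).sum()
              for k in range(3))
print("within/between variance ratio:", within / between)
```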
A SIMULATION STUDY OF LOGARITHMIC TRANSFORMATION MODEL IN SPATIAL EMPIRICAL BEST LINEAR UNBIASED PREDICTION (SEBLUP) METHOD OF SMALL AREA ESTIMATION Hazan Azhari Zainuddin; Khairil Anwar Notodiputro; Kusman Sadik
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

Many studies have been developed to improve the quality of estimates in small area estimation (SAE). The standard method, known as EBLUP (Empirical Best Linear Unbiased Predictor), has been extended by incorporating spatial effects into the model; this modification is known as SEBLUP (Spatial EBLUP) because it incorporates the spatial correlations that exist among the small areas. The variables of concern usually have a large variance and tend to have a non-symmetric distribution, and therefore tend to show a nonlinear relationship with the concomitant variables. The results show that the SEBLUP method with a logarithmic transformation produces a better estimator than the other methods.
Keywords: EBLUP, SAE, SEBLUP
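For context, the area-level EBLUP combines a direct estimate with a regression-synthetic estimate through a shrinkage factor gamma_i = sigma_v^2 / (sigma_v^2 + D_i). The sketch below shows that combination on log-transformed direct estimates with simulated areas; it omits the spatial correlation term that distinguishes SEBLUP, and the moment estimator of sigma_v^2 is a simplification.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 30                                         # number of small areas
x = np.column_stack([np.ones(m), rng.normal(size=m)])
beta_true = np.array([2.0, 0.5])
v = rng.normal(0, 0.3, size=m)                 # area random effects
D = rng.uniform(0.05, 0.2, size=m)             # known sampling variances (log scale)
y = x @ beta_true + v + rng.normal(0, np.sqrt(D))   # direct estimates on the log scale

# Fit the linking model by OLS and estimate sigma_v^2 with a simple moment estimator.
beta_hat, *_ = np.linalg.lstsq(x, y, rcond=None)
resid = y - x @ beta_hat
sigma_v2 = max(0.0, (resid @ resid - D.sum()) / (m - x.shape[1]))

# EBLUP: shrink the direct estimate toward the synthetic estimate x'beta.
gamma = sigma_v2 / (sigma_v2 + D)
theta_log = gamma * y + (1 - gamma) * (x @ beta_hat)
theta = np.exp(theta_log)                      # back-transform to the original scale
print(theta[:5])
```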
SURVIVAL ANALYSIS WITH EXTENDED COX MODEL ABOUT DURABILITY DEBTOR EFFORTS ON CREDIT RISK Iwan Kurniawan; Anang Kurnia; Bagus Sartono
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

Survival analysis was applied to data on motorcycle financing credit that went bad after the credit started, with sixteen covariates considered. The model used is the Cox proportional hazards model, which relies on the proportional hazards assumption. The extended Cox model is selected to improve the Cox proportional hazards model when one or more covariates do not meet the proportional hazards assumption. The extended Cox model is an extension of the Cox model that involves time-dependent variables: covariates that do not satisfy the proportional hazards assumption are interacted with an appropriate function of time to obtain time-dependent covariates, so the model contains both time-independent and time-dependent covariates. The parameters of these covariates are estimated using the partial maximum likelihood method. The likelihood ratio test is used to determine whether the extended Cox model is a suitable model for the data in a particular case. The results indicate that the extended Cox model with an appropriate function of time provides the best model.
Keywords: Credit Risk, Survival Analysis, Cox Proportional Hazard, Extended Cox Model
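A rough sketch of the extended Cox approach using the lifelines package: follow-up time is split into intervals (counting-process format) and the offending covariate is interacted with log(t) to form a time-dependent covariate. The data, the choice of log(t), and the column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(4)

# Toy credit-survival data: one row per debtor, duration in months and a default flag.
n = 200
base = pd.DataFrame({
    "id": np.arange(n),
    "duration": rng.integers(1, 36, size=n),
    "event": rng.integers(0, 2, size=n),
    "x": rng.normal(size=n),   # covariate suspected of violating proportional hazards
})

# Split each debtor's follow-up into monthly intervals and build the
# time-dependent term x * log(t), the covariate-by-time interaction described above.
rows = []
for _, r in base.iterrows():
    for start in range(int(r["duration"])):
        stop = start + 1
        rows.append({
            "id": r["id"], "start": start, "stop": stop,
            "event": int(r["event"]) if stop == r["duration"] else 0,
            "x": r["x"], "x_logt": r["x"] * np.log(stop),
        })
long_df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```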
RIDGE AND LASSO PERFORMANCE IN SPATIAL DATA WITH HETEROGENEITY AND MULTICOLLINEARITY Tiyas Yulita; Asep Saefuddin; Aji Hamim Wigena
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

Spatial heterogeneity is a distinct issue in the analysis of spatial data. GWR (Geographically Weighted Regression) is a statistical technique for exploring spatial nonstationarity by forming different regression models at different points in the observation space. Multicollinearity is a condition in which the independent variables in a model have a linear relationship; it is a problem for parameter estimation because it produces unstable models. This problem may also be found in GWR models, which allow a linear relationship between the independent variables at each location, called local multicollinearity. GWRR (Geographically Weighted Ridge Regression) and GWL (Geographically Weighted Lasso), which use the ridge and lasso concepts, shrink the regression coefficients in the GWR model. GWRR and GWL are considered capable of overcoming local multicollinearity and producing more stable models with lower variance. In this study, GWRR and GWL are used to model Gross Regional Domestic Product (GRDP) in Java using an exponential kernel weighting function. The results show that GWL has better performance in predicting GRDP, with a lower RMSE and a higher goodness-of-fit value than GWRR.
Keywords: Spatial Heterogeneity, GWR, Local Multicollinearity, Ridge, Lasso
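A minimal sketch of geographically weighted ridge regression: at each location, observations are weighted by an exponential kernel of distance, and a ridge-penalized weighted least squares solve gives the local coefficients. The bandwidth, penalty, and simulated data are assumptions; GWL would replace the closed-form ridge solve with an L1-penalized (lasso) fit at each location.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 4
coords = rng.uniform(0, 100, size=(n, 2))               # hypothetical location coordinates
X = rng.normal(size=(n, p))
X[:, 3] = 0.95 * X[:, 2] + 0.05 * rng.normal(size=n)    # induce local multicollinearity
y = 1.0 + X @ np.array([0.5, -0.3, 0.8, 0.0]) + rng.normal(0, 0.5, size=n)
Xd = np.column_stack([np.ones(n), X])

def gwrr_coefs(i, bandwidth=30.0, lam=0.1):
    # Exponential kernel weights around location i, then a weighted ridge solve:
    # beta_i = (X'WX + lambda*I)^{-1} X'Wy
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-d / bandwidth)
    XtW = Xd.T * w
    return np.linalg.solve(XtW @ Xd + lam * np.eye(p + 1), XtW @ y)

betas = np.array([gwrr_coefs(i) for i in range(n)])
fitted = np.einsum("ij,ij->i", Xd, betas)
rmse = np.sqrt(np.mean((y - fitted) ** 2))
print("local coefficients at location 0:", betas[0])
print("in-sample RMSE:", rmse)
```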
LAD-LASSO: SIMULATION STUDY OF ROBUST REGRESSION IN HIGH DIMENSIONAL DATA Septian Rahardiantoro; Anang Kurnia
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

A common issue in regression is the case where the number of predictor variables exceeds the number of observations (p > n), called high dimensional data. The classical problem in this setting is multicollinearity, and it becomes worse when the data are subject to heavy-tailed errors or outliers that may appear in the responses and/or the predictors. For this reason, Wang et al. in 2007 combined Least Absolute Deviation (LAD) regression, which is useful for robust regression, with the LASSO, a popular choice for shrinkage estimation and variable selection, into LAD-LASSO. Extensive simulation studies demonstrate that LAD-LASSO handles high dimensional datasets containing outliers better than the LASSO.
Keywords: high dimensional data, LAD-LASSO, robust regression
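A sketch of the LAD-LASSO estimator via the data-augmentation trick of Wang et al. (2007): the penalized LAD criterion equals an ordinary LAD (median regression) fit on the data augmented with p pseudo-observations (y = 0, x = n*lambda*e_j). The simulated data use a modest p for illustration; a genuinely p > n problem would need a solver that handles a rank-deficient design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, p = 50, 10
X = rng.normal(size=(n, p))
beta_true = np.r_[2.0, -1.5, np.zeros(p - 2)]        # sparse true coefficients
y = X @ beta_true + rng.standard_t(df=2, size=n)     # heavy-tailed errors / outliers

# LAD-LASSO: min sum |y_i - x_i'b| + n*lam*sum |b_j|, rewritten as LAD on data
# augmented with p pseudo-observations (y = 0, x = n*lam*e_j) and solved as
# median (0.5-quantile) regression.
lam = 0.05
X_aug = np.vstack([X, n * lam * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])

fit = sm.QuantReg(y_aug, X_aug).fit(q=0.5)
print(np.round(fit.params, 3))
```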
SMALL AREA ESTIMATION FOR ESTIMATING THE NUMBER OF INFANT MORTALITY USING MIXED EFFECTS ZERO INFLATED POISSON MODEL Arie Anggreyani; _ Indahwati; Anang Kurnia
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

The Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey that provides information on birth rates, mortality rates, family planning, and health. The DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesian Ministry of Health (KEMENKES), and USAID. Based on the DHSI 2012 publication, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in the districts of West Java. SAE is a special case of Generalized Linear Mixed Models (GLMM). In this case, the incidence of infant mortality follows a Poisson distribution, which carries an equidispersion assumption. The methods considered to handle overdispersion are the negative binomial and quasi-likelihood models; based on the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. However, checking the residual assumptions shows that the model residuals form two normal distributions, so the Mixed Effects Zero Inflated Poisson (ZIP) model is used to resolve this issue. The small area estimation uses the basic area level model. The mean squared error (MSE) based on the bootstrap method is used to measure the accuracy of the small area estimates.
Keywords: SAE, GLMM, Mixed Effect ZIP Model, Bootstrap
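As an illustrative sketch only, the code below fits a zero-inflated Poisson model (without the random area effects of the mixed-effects ZIP model, which statsmodels does not provide directly) and approximates the MSE of the model-based estimates with a naive bootstrap; the data, covariate, and number of areas are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(7)
m = 27                                          # e.g. districts as small areas
x = sm.add_constant(rng.normal(size=m))         # one auxiliary area-level covariate

# Simulate zero-inflated Poisson counts of infant deaths per district.
p_zero, lam = 0.3, np.exp(x @ np.array([1.0, 0.4]))
counts = np.where(rng.random(m) < p_zero, 0, rng.poisson(lam))

zip_fit = ZeroInflatedPoisson(counts, x, exog_infl=np.ones((m, 1))).fit(maxiter=200, disp=0)
pred = zip_fit.predict(x, exog_infl=np.ones((m, 1)))

# Naive bootstrap MSE of the model-based estimates: resample districts,
# refit, and track the squared deviation of the refitted predictions.
B, sq = 100, np.zeros(m)
for _ in range(B):
    idx = rng.integers(0, m, size=m)
    bfit = ZeroInflatedPoisson(counts[idx], x[idx], exog_infl=np.ones((m, 1))).fit(maxiter=200, disp=0)
    sq += (bfit.predict(x, exog_infl=np.ones((m, 1))) - pred) ** 2
mse_boot = sq / B
print(np.round(mse_boot, 3))
```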
MODEL AVERAGING, AN ALTERNATIVE APPROACH TO MODEL SELECTION IN HIGH DIMENSIONAL DATA ESTIMATION Deiby T. Salaki; Anang Kurnia; Arief Gusnanto; I Wayan Mangku; Bagus Sartono
FORUM STATISTIKA DAN KOMPUTASI Vol. 20 No. 2 (2015)
Publisher : FORUM STATISTIKA DAN KOMPUTASI


Abstract

Model averaging is an alternative to classical model selection in model estimation. Model selection methods such as forward or stepwise regression use certain criteria, such as AIC and BIC, to choose the single model that best fits the data. Model averaging, on the other hand, estimates one model whose parameters are determined by a weighted average of the parameters of each candidate model. Instead of conducting inference and prediction based on only one chosen best model, model averaging addresses the model uncertainty problem by including all possible models when determining the prediction model. Some of its developments, applications, and challenges are described in this paper, with frequentist model averaging described preferentially.
Keywords: model selection, frequentist model averaging, high dimensional data
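A small sketch of frequentist model averaging with smoothed AIC weights: every candidate subset model is fitted, each model receives a weight proportional to exp(-AIC/2), and predictions are averaged with those weights. The simulated data and the choice of AIC weights are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(8)
n, p = 80, 4
X = rng.normal(size=(n, p))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1, size=n)

# Fit every candidate subset model, record AIC and fitted values.
candidates, aics, preds = [], [], []
for k in range(p + 1):
    for subset in combinations(range(p), k):
        exog = sm.add_constant(X[:, list(subset)]) if subset else np.ones((n, 1))
        res = sm.OLS(y, exog).fit()
        candidates.append(subset)
        aics.append(res.aic)
        preds.append(res.fittedvalues)

# Smoothed AIC weights: proportional to exp(-0.5 * (AIC - AIC_min)).
aics = np.array(aics)
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()

# Model-averaged prediction instead of a single "best model" prediction.
averaged_prediction = np.average(np.column_stack(preds), axis=1, weights=weights)
print("top-weighted subsets:", [candidates[i] for i in np.argsort(weights)[::-1][:3]])
```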
