Contact Name
Meiliyani Siringoringo
Contact Email
meiliyanisiringoringo@fmipa.unmul.ac.id
Phone
+6285250326564
Journal Mail Official
eksponensial@fmipa.unmul.ac.id
Editorial Address
Fakultas Matematika dan Ilmu Pengetahuan Alam Universitas Mulawarman Jl. Barong Tongkok, Kampus Gunung Kelua Kota Samarinda, Provinsi Kalimantan Timur 75123
Location
Kota Samarinda,
Kalimantan Timur
INDONESIA
Eksponensial
Published by Universitas Mulawarman
ISSN: 2085-7829 | E-ISSN: 2798-3455 | DOI: https://doi.org/10.30872/
Jurnal Eksponensial is a scientific journal that publishes articles on statistics and its applications. It is intended for researchers and readers interested in statistics and its applications.
Articles: 205 Documents
Peramalan Produksi Kelapa Sawit Menggunakan Metode Pegel’s Exponential Smoothing Sinaga, Yetty Veronica Lestari; Wahyuningsih, Sri; Siringoringo, Meiliyani
EKSPONENSIAL Vol. 12 No. 2 (2021)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (760.504 KB) | DOI: 10.30872/eksponensial.v12i2.810

Abstract

Time series analysis using Pegel's exponential smoothing is suited to series influenced by trend and seasonal data patterns. The data used in this study were oil palm production in East Kalimantan Province from January 2014 to December 2018. The study aims to forecast oil palm production for January, February and March 2019. Forecasting results were verified using the MAPE value and the tracking signal monitoring method. The results show that, among Pegel's models, the exponential smoothing model without trend and with multiplicative seasonality achieved a MAPE of 7.84%, better forecasting accuracy than the other models. Its forecasts can be used to predict the next three periods, namely January, February and March 2019, and the forecasts for these three periods increase successively.
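As an illustration of the smoothing family the abstract refers to, here is a minimal sketch (not the paper's fitted model) of double exponential smoothing, one cell of Pegel's trend/seasonal classification, together with the MAPE criterion used to compare models; the data and smoothing constants are invented:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def holt_linear(y, alpha, beta):
    """Double exponential smoothing (additive trend, no seasonality).
    Returns the one-step-ahead fitted values and the final level/trend."""
    level, trend = y[0], y[1] - y[0]
    fitted = [y[0]]
    for t in range(1, len(y)):
        fitted.append(level + trend)
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return fitted, level, trend

def holt_forecast(level, trend, h):
    """h-step-ahead forecasts extrapolate the last level and trend."""
    return [level + (k + 1) * trend for k in range(h)]
```

On a perfectly linear series the fitted values reproduce the data exactly and the MAPE is zero, which makes the mechanics easy to check by hand.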
Penerapan Model Mixed Geographically Weighted Regression dengan Fungsi Pembobot Adaptive Tricube pada IPM 30 Kabupaten/Kota di Provinsi Kalimantan Timur, Kalimantan Tengah dan Kalimantan Selatan Tahun 2016 Safitri, Ranita Nur; Suyitno, Suyitno; Hayati, Memi Nor
EKSPONENSIAL Vol. 11 No. 2 (2020)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (792.631 KB) | DOI: 10.30872/eksponensial.v11i2.651

Abstract

The Mixed Geographically Weighted Regression (MGWR) model is a Geographically Weighted Regression (GWR) model with both global parameters (equal across locations) and local parameters (varying across observation locations). The goal of this study is to obtain an MGWR model of Human Development Index (HDI) data and to identify the significant factors influencing the HDI in each district/city of East Kalimantan, Central Kalimantan and South Kalimantan provinces in 2016. Parameter estimation is conducted in two stages, namely local parameter estimation and global parameter estimation. Local parameters are estimated by Maximum Likelihood Estimation (MLE), with spatial weights calculated by the adaptive tricube weighting function and the optimum bandwidth determined using the Akaike Information Criterion (AIC); global parameters are estimated by Ordinary Least Squares (OLS). Based on the MGWR parameter tests, the junior secondary school enrollment rate and the percentage of poor people affected the HDI of all 30 districts/cities in East Kalimantan, Central Kalimantan and South Kalimantan, while population density affected the HDI of two cities, namely Samarinda and Bontang.
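The adaptive tricube weighting function mentioned in the abstract can be sketched in a few lines. This is an illustrative implementation of the kernel itself, not the paper's full MGWR estimator; the distances and bandwidth below are made up:

```python
def tricube_weight(d, bandwidth):
    """Tricube kernel: weight 1 at distance 0, smoothly decaying to 0
    at the (location-specific, hence 'adaptive') bandwidth."""
    u = d / bandwidth
    return (1.0 - u**3) ** 3 if u < 1 else 0.0

def weight_row(distances, bandwidth):
    """Diagonal of the spatial weight matrix W(u_i, v_i) for one
    observation location, given its distances to all locations."""
    return [tricube_weight(d, bandwidth) for d in distances]
```

Locations beyond the bandwidth receive zero weight, so each local fit uses only its geographic neighbourhood.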
Peramalan Kebutuhan Bahan Baku Plat Besi Menggunakan Metode Runtun Waktu Autoregressive Integrated Moving Average (ARIMA) dan Meminimumkan Biaya Total Persediaan dari Hasil Peramalan Mengunakan Metode Period Order Quantity (POQ) Mulyta Anggraini; Rito Goejantoro; Yuki Novia Nasution
EKSPONENSIAL Vol 10 No 1 (2019)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (479.075 KB)

Abstract

The ARIMA method is used to forecast future data patterns that are expected to approximate the actual data. For inventory control, a company needs a good planning system based on forecasting results to obtain maximum benefit. The Period Order Quantity method is used to solve the inventory problem and minimize the total inventory cost. The research objectives are to forecast how many iron plates CV. Isakutama needs from January 2017 to December 2017 using the ARIMA method and to minimize the predicted total inventory cost using the Period Order Quantity method. Based on the research, the forecast iron plate requirements for the 12 months are 24, 24, 25, 24, 25, 25, 25, 25, 25, 25, 25 and 25 units, giving a total inventory cost of Rp1,177,264,000 when orders are placed once every 52 days.
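The fixed reorder interval (the abstract's "once every 52 days") is the essence of Period Order Quantity: convert the Economic Order Quantity into a time between orders. A minimal sketch with invented cost figures, not the paper's data:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic Order Quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def poq_interval(annual_demand, order_cost, holding_cost, days_per_year=365):
    """Period Order Quantity: the reorder interval (in days) implied by
    ordering the EOQ each cycle at the given demand rate."""
    q = eoq(annual_demand, order_cost, holding_cost)
    return round(days_per_year * q / annual_demand)
```

In POQ the interval is fixed and the order size each cycle covers the forecast demand for that interval, which is why it pairs naturally with the ARIMA forecasts above.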
Peramalan Regarima Pada Data Time Series Yudha Muhammad Faishol; Ika Purnamasari; Rito Goejantoro
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (166.802 KB)

Abstract

RegARIMA is a modelling technique that combines an ARIMA model with a regression model using dummy variables, called regressors. The purpose of this study was to determine a calendar variation model and apply it to predict plane ticket sales for January 2016 to December 2017. The data analysis shows that ticket sales have a seasonal pattern, i.e. an increase in sales around Eid al-Fitr. First, the regressor affected by the single feast day, Eid, was determined. A regression model was then fitted, with the dependent variable (Y) the volume of plane ticket sales and the independent variable (X) the regressor, giving Ŷt = 1,029 + 1,335 Xt. All regression parameters were significant, but in the goodness-of-fit test the residuals, although normally distributed, failed the white-noise test, meaning they still contained autocorrelation. ARIMA modelling was therefore performed on the regression residuals. The residuals were stationary, and ARIMA estimation gave ARIMA(0,0,1), with all parameters significant and the model passing the diagnostic tests: the residuals were white noise and normally distributed. The calendar variation model obtained by the RegARIMA method is Yt = 1,029.5 + 1,337.3 Dt + 0.28712 at-1 + at. Based on this model, plane ticket sales for January 2016 to December 2017 could be predicted.
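The regression-on-a-calendar-dummy step can be illustrated with a toy version: with a single 0/1 Eid dummy as the only regressor, the OLS estimates reduce to group means, a baseline level plus a holiday shift. Toy numbers below, not the paper's sales data:

```python
def ols_dummy(y, d):
    """OLS of y on a constant and a 0/1 dummy d.
    With one binary regressor the estimates are just group means:
    b0 = mean of the d=0 (non-holiday) periods, b1 = holiday shift."""
    y0 = [yi for yi, di in zip(y, d) if di == 0]
    y1 = [yi for yi, di in zip(y, d) if di == 1]
    b0 = sum(y0) / len(y0)
    b1 = sum(y1) / len(y1) - b0
    return b0, b1
```

RegARIMA then fits an ARIMA model to the residuals of this regression so that the leftover autocorrelation is absorbed.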
Pemodelan Jumlah Kematian Bayi di Provinsi Nusa Tenggara Timur Tahun 2015 Dengan Regresi Poisson Pratama Yuly Nugraha; Memi Nor Hayati; Desi Yuniarti
EKSPONENSIAL Vol 8 No 2 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (564.515 KB)

Abstract

Poisson regression is a non-linear regression analysis in which the response variable is modeled with a Poisson distribution. The parameters of the Poisson regression model are estimated using Maximum Likelihood Estimation (MLE). This study aims to model the number of infant deaths in East Nusa Tenggara Province in 2015 and to identify the factors affecting infant mortality in the province using Poisson regression. The results show that the factors influencing the number of infant deaths are the number of deliveries assisted by health personnel (X1), the percentage of pregnant women receiving Fe3 tablets (X2), the number of obstetric complications handled (X4), the percentage of low-birth-weight babies (X5), the number of exclusively breastfed babies (X6), the percentage of households with clean and healthy living habits (X7), and the number of deliveries assisted by non-medical personnel (X8).
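A sketch of the objective that MLE maximizes here: the Poisson log-likelihood with a log link. For an intercept-only model the maximizer is known in closed form (the log of the sample mean), which gives a simple sanity check; the data below are illustrative only:

```python
import math

def poisson_loglik(beta, X, y):
    """Log-likelihood of a Poisson regression with log link,
    mu_i = exp(x_i . beta); MLE maximizes this over beta."""
    ll = 0.0
    for xi, yi in zip(X, y):
        eta = sum(b * x for b, x in zip(beta, xi))
        ll += yi * eta - math.exp(eta) - math.lgamma(yi + 1)
    return ll
```

With only an intercept column, the likelihood peaks at beta0 = log(mean(y)); in practice the full model is maximized numerically (e.g. by Newton-Raphson).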
Pengelompokkan Data Runtun Waktu menggunakan Analisis Cluster: Studi Kasus: Nilai Ekspor Komoditi Migas dan Nonmigas Provinsi Kalimantan Timur Periode Januari 2000-Desember 2016 Dani, Andrea Tri Rian; Wahyuningsih, Sri; Rizki, Nanda Arista
EKSPONENSIAL Vol. 11 No. 1 (2020)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (673.538 KB) | DOI: 10.30872/eksponensial.v11i1.642

Abstract

The export value data of East Kalimantan Province are large, multivariable time series data. Cluster analysis can be applied to time series data, but the procedures and grouping algorithms differ from those used for cross-sectional data, because a time series is a sequence of observations indexed by time at fixed intervals. The purpose of this research is to find the best similarity measure using the cophenetic correlation coefficient and the optimal number of clusters c using the silhouette coefficient. The clustering algorithm used is single linkage with four similarity measures, namely Pearson correlation distance, Euclidean distance, dynamic time warping, and autocorrelation-based distance. The sample is the export value of oil-and-gas and non-oil-and-gas commodities in East Kalimantan Province from January 2000 to December 2016, consisting of 10 variables. Based on the results of the analysis, the best similarity measure for clustering these export values is the dynamic time warping distance, with an optimal c of 3 clusters.
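Dynamic time warping, the similarity measure the study selects, can be sketched with the standard dynamic-programming recursion (this is the textbook algorithm, not the study's code; the example series are invented):

```python
def dtw(a, b):
    """Dynamic time warping distance between two series, with
    absolute difference as the local cost. D[i][j] is the cheapest
    alignment of a[:i] with b[:j]."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Unlike the Euclidean distance, DTW can stretch one series against the other, so two series with the same shape but shifted timing still get a small distance; this is why it suits export series whose peaks do not align month-for-month.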
Analisis Model Threshold Generalized Autoregressive Conditional Heteroskedasticity (TGARCH) dan Model Exponential Generalized Autoregressive Conditional Heteroskedasticity (EGARCH) Julia Julia; Sri Wahyuningsih; Memi Nor Hayati
EKSPONENSIAL Vol 9 No 2 (2018)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (560.813 KB)

Abstract

In finance, the Autoregressive Integrated Moving Average (ARIMA) model is one of the models that can be used. Financial data usually have a non-constant error variance, so the Autoregressive Conditional Heteroskedasticity (ARCH) model can be used to address this, as can its generalization, the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model. The asymmetry of the residuals can be captured using the Threshold Generalized Autoregressive Conditional Heteroskedasticity (TGARCH) and Exponential Generalized Autoregressive Conditional Heteroskedasticity (EGARCH) models. The purpose of this research is to find the best model between TGARCH and EGARCH for predicting the Indonesia Composite Index (ICI) and to forecast the ICI with the best model for July 2017 to December 2017. The best model for the ICI case study from January 2011 to June 2017 is ARIMA(1,1,1)-GARCH(1,2)-EGARCH(1). Forecasting the ICI with this model yields an upward trend over July 2017 to December 2017.
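The asymmetry these models capture can be seen directly in the variance recursions. A minimal sketch of one GARCH(1,1) update and its TGARCH variant, in which negative shocks receive extra weight gamma (the leverage effect); the parameter values below are illustrative, not the fitted ICI model:

```python
def garch11_step(prev_var, e, omega, alpha, beta):
    """One step of the GARCH(1,1) recursion:
    sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}.
    Symmetric: +e and -e give the same next variance."""
    return omega + alpha * e**2 + beta * prev_var

def tgarch11_step(prev_var, e, omega, alpha, gamma, beta):
    """TGARCH(1,1) adds a threshold term: when the shock is negative
    (e < 0) the squared shock gets the extra coefficient gamma."""
    return omega + (alpha + gamma * (e < 0)) * e**2 + beta * prev_var
```

EGARCH achieves a similar asymmetry by modelling log-variance with a signed shock term, which also guarantees positive variance without parameter constraints.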
Bootstrap Aggregating Multivariate Adaptive Regression Splines Marisa Nanda Rahmaniah; Yuki Novia Nasution; Ika Purnamasari
EKSPONENSIAL Vol 7 No 2 (2016)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (132.495 KB)

Abstract

MARS is a classification method suited to high-dimensional and discontinuous data. Its accuracy can be improved by bagging (Bootstrap Aggregating), which improves the stability, accuracy and predictive power of a model. This study applies bagging MARS to the issue of accreditation, predicting the accreditation level of a school from its identifier components, which are therefore identified here to build a classification model. The data used are the 2015 primary school accreditation data for East Kalimantan Province issued by the Provincial School Accreditation Board (BAP-S/M) of East Kalimantan. The study identified six components affecting the accreditation of schools at the primary level, i.e. the variables contributing to the classification: the content standard (X1), the process standard (X2), the graduate standard (X3), the teacher and staff standard (X4), the infrastructure standard (X5) and the financing standard (X7). The classification accuracy of the MARS method, measured by the Apparent Error Rate (APER), is 78.87%, while the accuracy of the best bagging MARS model is 89.44%. This means that bagging MARS gives better classification accuracy than MARS alone.
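The bagging mechanism itself is independent of the base learner. A minimal sketch with a deliberately weak one-threshold classifier standing in for MARS (the data, seed and stump are invented for illustration):

```python
import random

def fit_stump(sample):
    """A deliberately weak base learner on (x, label) pairs:
    predict class 1 when x exceeds the sample's mean x."""
    thr = sum(x for x, _ in sample) / len(sample)
    return lambda x, thr=thr: 1 if x >= thr else 0

def bootstrap_sample(data, rng):
    """Resample the data with replacement, same size as the original."""
    return [rng.choice(data) for _ in data]

def bagging_fit(data, n_models=25, seed=0):
    """Train one base learner per bootstrap replicate."""
    rng = random.Random(seed)
    return [fit_stump(bootstrap_sample(data, rng)) for _ in range(n_models)]

def bagging_predict(models, x):
    """Aggregate by majority vote over the bootstrap-trained models."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)
```

Each bootstrap replicate perturbs the training set slightly, so averaging the resulting models reduces the variance of an unstable learner, which is exactly the gain the study reports for MARS.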
Perbandingan Algoritma C4.5 Dan Naïve Bayes Untuk Prediksi Ketepatan Waktu Studi Mahasiswa: Studi Kasus: Program Studi Statistika Universitas Mulawarman Permana, Jordan Nata; Goejantoro, Rito; Prangga, Surya
EKSPONENSIAL Vol. 13 No. 2 (2022)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (1043.881 KB) | DOI: 10.30872/eksponensial.v13i2.947

Abstract

Classification is a statistical technique that assigns data to labeled classes by building a model from training data. Many methods can be used for classification, including Naïve Bayes and C4.5. The C4.5 algorithm builds a decision tree, while Naïve Bayes classifies based on probability. This study aims to obtain the classification results of C4.5 and Naïve Bayes and to compare the classification accuracy of the two methods. The variables used in this study were graduation status, entrance path, gender, regional origin, GPA, and UKT group. The analysis shows that the average accuracy of the C4.5 algorithm was 61.99% and that of Naïve Bayes was 69.97%, so Naïve Bayes classifies student graduation status better than C4.5.
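The probabilistic side of the comparison can be sketched with a tiny categorical Naïve Bayes classifier with Laplace smoothing (a generic textbook implementation, not the study's code; the two-feature toy data are invented):

```python
from collections import Counter

def nb_train(X, y, alpha=1.0):
    """Categorical Naive Bayes with Laplace smoothing alpha.
    X is a list of categorical feature tuples, y the class labels."""
    classes = sorted(set(y))
    n_feat = len(X[0])
    prior = {c: sum(1 for yi in y if yi == c) / len(y) for c in classes}
    counts = {c: [Counter() for _ in range(n_feat)] for c in classes}
    totals = {c: 0 for c in classes}
    for row, c in zip(X, y):
        totals[c] += 1
        for j, v in enumerate(row):
            counts[c][j][v] += 1
    # Distinct values per feature, for the smoothing denominator.
    vocab = [len({row[j] for row in X}) for j in range(n_feat)]
    return classes, prior, counts, totals, vocab, alpha

def nb_predict(model, row):
    """Pick the class with the largest posterior: the prior times the
    smoothed per-feature likelihoods (conditional independence)."""
    classes, prior, counts, totals, vocab, alpha = model
    def score(c):
        p = prior[c]
        for j, v in enumerate(row):
            p *= (counts[c][j][v] + alpha) / (totals[c] + alpha * vocab[j])
        return p
    return max(classes, key=score)
```

C4.5, by contrast, would split on the feature with the highest gain ratio and recurse, producing an explicit decision tree rather than a product of probabilities.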
Peramalan Jumlah Titik Panas Provinsi Kalimantan Timur Menggunakan Analisis Intervensi Fungsi Pulse Saputra, Ahmad Ronaldy; Wahyuningsih, Sri; Siringoringo, Meiliyani
EKSPONENSIAL Vol. 12 No. 1 (2021)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Full PDF (720.543 KB) | DOI: 10.30872/eksponensial.v12i1.766

Abstract

Intervention analysis is a time series analysis used to explain the effect of an intervention caused by external or internal factors, such as the jump in the number of hotspots in East Kalimantan in 2015. The purpose of this study was to determine the best intervention model for forecasting the number of hotspots in East Kalimantan. The first stage of intervention analysis divides the data into two parts, namely data before and data after the intervention. The best model for the pre-intervention data was SARIMA(0,1,1)(0,1,1)12. The intervention function was then identified from the residual plot of the SARIMA model, giving orders b = 0, s = 0 and r = 0, with an AIC of -143.16 for the intervention model. The forecasts from the intervention model increase from July to September 2019, peaking at 249 hotspots in September 2019, then falling to 183 hotspots in October 2019 and dropping sharply to 13 hotspots in November 2019.
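The pulse intervention function with orders b = 0, s = 0 and r = 0, as identified in the abstract, has a particularly simple form: a one-off shift of size omega at the intervention time. A minimal sketch with invented values:

```python
def pulse(t, T):
    """Pulse intervention dummy P_t: 1 at the intervention time T, else 0
    (in contrast to a step function, which stays 1 from T onward)."""
    return 1 if t == T else 0

def intervention_series(noise, T, omega):
    """Series under a pulse intervention of order (b=0, s=0, r=0):
    the noise (e.g. SARIMA) component plus a one-off shift omega at T."""
    return [n + omega * pulse(t, T) for t, n in enumerate(noise)]
```

With nonzero r the effect would instead decay geometrically after T, so the order identification from the residual plot decides which shape the model uses.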

Page 7 of 21 | Total Records: 205