Contact Name
Meiliyani Siringoringo
Contact Email
meiliyanisiringoringo@fmipa.unmul.ac.id
Phone
+6285250326564
Journal Mail Official
eksponensial@fmipa.unmul.ac.id
Editorial Address
Fakultas Matematika dan Ilmu Pengetahuan Alam Universitas Mulawarman Jl. Barong Tongkok, Kampus Gunung Kelua Kota Samarinda, Provinsi Kalimantan Timur 75123
Location
Kota Samarinda,
Kalimantan Timur
INDONESIA
Eksponensial
Published by Universitas Mulawarman
ISSN: 2085-7829     EISSN: 2798-3455     DOI: https://doi.org/10.30872/
Jurnal Eksponensial is a scientific journal that publishes articles on statistics and its applications. This journal is intended for researchers and readers who are interested in statistics and its applications.
Articles: 12 Documents
Search results for issue "Vol 8 No 1 (2017)": 12 Documents
Analisis Faktor Konfirmatori untuk Mengetahui Faktor-Faktor yang Mempengaruhi Prestasi Mahasiswa Program Studi Statistika FMIPA Universitas Mulawarman Andini Juita Sari; Desi Yuniarti; Sri Wahyuningsih
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

Confirmatory factor analysis is one branch of multivariate analysis. In this study, a confirmatory factor analysis was conducted on 159 statistics students of Mulawarman University from the 2013, 2014, and 2015 cohorts, with the aim of determining the factors affecting student achievement. The analysis showed that student achievement is influenced by four latent variables: background (ξ1), with three indicator variables, namely relationship with family (X1), parents (X2), and family motivation (X3); learning environment outside the campus (ξ2), with two indicator variables, namely concentration while studying (X6) and completion of tasks (X7); campus facilities (ξ3), with indicator variables study room (X8), statistics reading room (X9), wifi (X10), and computer laboratory facilities (X11); and students' perception of lecturers (ξ4), with two indicator variables, namely the lecturers' teaching system (X14) and the lecturers' assignment administration system (X15). The indicator variable contributing most strongly to student achievement is completion of tasks (X7), with a factor loading of 0.89.
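
The measurement model described in this abstract can be specified and estimated in software; the sketch below is a minimal illustration using the semopy package in Python. The package choice, the synthetic data, and the lavaan-style model string are assumptions for illustration, not the authors' actual tooling or data.

```python
# Minimal confirmatory factor analysis sketch with semopy (assumed package).
# Latent variables and indicators mirror the abstract; the data are synthetic.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 159  # sample size reported in the abstract
data = pd.DataFrame(
    rng.normal(size=(n, 11)),
    columns=["X1", "X2", "X3", "X6", "X7", "X8", "X9", "X10", "X11", "X14", "X15"],
)

# lavaan-style measurement model: each latent variable (xi) loads on its indicators.
model_desc = """
xi1 =~ X1 + X2 + X3
xi2 =~ X6 + X7
xi3 =~ X8 + X9 + X10 + X11
xi4 =~ X14 + X15
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # estimated loadings; X7's loading was 0.89 in the study
```
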
Penggunaan Metode Seven New Quality Tools dan Metode DMAIC Six Sigma Pada Penerapan Pengendalian Kualitas Produk Yurin Febria Suci; Yuki Novia Nasution; Nanda Arista Rizki
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

Product quality control comprises the techniques and planned activities or actions undertaken to achieve, maintain, and improve the quality of products and services so that they meet customer standards and satisfaction. This study aims to address product quality at a company using statistical product control methods, namely Seven New Quality Tools and DMAIC Six Sigma, applied to the Roti Durian Panglima product produced by PT. Panglima Roqiiqu Group in June 2016. Based on the Seven New Quality Tools method, there are five factors that cause defects in the Roti Durian Panglima product: human, materials, environment, machines, and work methods, with the priority for product improvement lying in the human factor. Meanwhile, the DMAIC Six Sigma method yields a performance baseline of 4.48 sigma with four kinds of defects in the Roti Durian Panglima product, and based on the improve phase using the PFMEA method, the priority for product improvement also lies in the human factor.
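
The sigma-level baseline quoted above follows the standard DPMO-to-sigma conversion (with the conventional 1.5-sigma shift). A short check of that arithmetic is sketched below; the defect, unit, and opportunity counts are illustrative placeholders, not the paper's raw data.

```python
# Converting a defect count to DPMO and a sigma level (standard 1.5-sigma shift).
from scipy.stats import norm

defects = 55          # hypothetical defective items found in the sample
units = 3000          # hypothetical items inspected
opportunities = 4     # four kinds of defects, as in the abstract

dpmo = defects / (units * opportunities) * 1_000_000
sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5
print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")
```
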
Analisis Autokorelasi Spasial Titik Panas Di Kalimantan Timur Menggunakan Indeks Moran dan Local Indicator Of Spatial Autocorrelation (LISA) Nurmalia Purwita Yuriantari; Memi Nor Hayati; Sri Wahyuningsih
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

In the last few decades, statistical methods relating to spatial science, namely spatial statistics, have been developed. Spatial statistics aims to analyze spatial data. The case study in this research is the number of hotspots in East Kalimantan by Regency/City in 2014-2016. This study aims to analyze the existence of spatial autocorrelation in the hotspot counts and to determine the level of vulnerability of areas to potential forest and land fires in East Kalimantan by Regency/City in 2014-2016. The method used to analyze global spatial autocorrelation is the Moran Index, while Local Indicators of Spatial Autocorrelation (LISA) is used to analyze spatial autocorrelation locally. The global analysis using the Moran Index with α = 20% showed spatial autocorrelation in the number of hotspots in East Kalimantan in 2014, 2015, and 2016. Meanwhile, the local analysis using LISA showed spatial autocorrelation in several Regencies/Cities in East Kalimantan in 2014, 2015, and 2016. The Regencies/Cities that belong to the vulnerable category for forest and land fires are Bontang City, Kutai Barat Regency, Kutai Kartanegara Regency, Mahakam Ulu Regency, Penajam Paser Utara Regency, and Samarinda City.
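
Global Moran's I and the local (LISA) statistics can be computed directly from a row-standardized spatial weights matrix. The sketch below is a minimal numpy illustration on made-up hotspot counts for a few hypothetical regions; the weights matrix and values are not the study's data.

```python
# Minimal global Moran's I and local Moran's I (LISA) on toy data.
import numpy as np

# Hypothetical hotspot counts for 5 regions and a symmetric contiguity matrix.
x = np.array([120.0, 80.0, 95.0, 40.0, 30.0])
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
W = W / W.sum(axis=1, keepdims=True)  # row-standardize the weights

n = len(x)
z = x - x.mean()
s0 = W.sum()

# Global Moran's I: (n / S0) * (z'Wz) / (z'z).
I_global = (n / s0) * (z @ W @ z) / (z @ z)

# Local Moran's I_i: positive values indicate clustering of similar values.
m2 = (z @ z) / n
I_local = (z / m2) * (W @ z)

print(f"Global Moran's I = {I_global:.3f}")
print("Local Moran's I:", np.round(I_local, 3))
```
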
Peramalan Regarima Pada Data Time Series Yudha Muhammad Faishol; Ika Purnamasari; Rito Goejantoro
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

The RegARIMA method is a modelling technique that combines the ARIMA model with a regression model that uses dummy variables, called regressors. The purpose of this study was to determine a calendar variation model and apply it to predict plane ticket sales for January 2016 to December 2017. The data analysis shows that ticket sales follow a seasonal pattern, namely an increase in ticket sales around Idul Fitri (Eid). First, the regressor affected by the single feast day, Eid, was determined. Regression modelling was then carried out, where the dependent variable (Y) is the volume of plane ticket sales and the independent variable (X) is the regressor, giving the regression model Ŷt = 1,029 + 1,335 Xt. All parameters of the regression model were significant, but the goodness-of-fit test showed that the residuals were normally distributed yet did not satisfy the white-noise condition, meaning the residuals still contained autocorrelation. ARIMA modelling was therefore performed on the regression residuals. The residuals were stationary, and the estimated ARIMA(0,0,1) model had all parameters significant, with residuals that satisfied the white-noise condition and were normally distributed. The calendar variation model obtained by the RegARIMA method is Yt = 1,029.5 + 1,337.3 Dt + 0.28712 at-1 + at. Based on this calendar variation model, plane ticket sales for January 2016 to December 2017 could be predicted.
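
The two-stage fit described above (dummy-variable regression followed by an ARIMA model on the residuals) can be sketched with statsmodels. The series and Eid dummy below are synthetic placeholders, not the ticket-sales data.

```python
# RegARIMA-style sketch: OLS with a calendar dummy, then ARIMA(0,0,1) on the residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n = 60  # 60 monthly observations (synthetic)
eid_dummy = np.zeros(n)
eid_dummy[::12] = 1  # hypothetical months containing the Eid holiday
y = 1000 + 1300 * eid_dummy + rng.normal(scale=50, size=n)  # synthetic sales

# Stage 1: regression of sales on the calendar regressor.
X = sm.add_constant(eid_dummy)
ols_fit = sm.OLS(y, X).fit()
print(ols_fit.params)  # intercept and dummy effect (cf. 1,029.5 and 1,337.3 in the paper)

# Stage 2: ARIMA(0,0,1) on the regression residuals, as in the abstract.
arima_fit = ARIMA(ols_fit.resid, order=(0, 0, 1)).fit()
print(arima_fit.params)

# Forecast = regression part + ARIMA forecast of the residual component.
future_dummy = np.array([0.0, 1.0, 0.0])
forecast = (ols_fit.params[0] + ols_fit.params[1] * future_dummy
            + arima_fit.forecast(steps=3))
print(forecast)
```
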
Proses Optimasi Masalah Penugasan One-Objective dan Two-Objective Menggunakan Metode Hungarian Diang Dewi Tamimi; Ika Purnamasari; Wasono Wasono
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

The assignment problem is a situation in which m workers are assigned to complete n tasks/jobs so as to minimize cost and time or maximize profit and quality by assigning the proper task to each worker. Much research has focused on solving the assignment problem, but most of it considers only one objective, such as minimizing operating cost. The two-objective assignment problem is an assignment problem with two optimization objectives over the resources each worker uses to complete every task/job, which in this case are cost and time. The case in this research uses primary data drawn from interviews with rattan furniture craftsmen at the Rotan Sejati store, Samarinda. This research optimizes the one-objective and two-objective assignment problems using the Hungarian Method. The analysis shows that the one-objective optimization considering only operating cost yields Rp 2,950,000 with a total time of 63 days; considering only operating time yields Rp 3,290,000 with a total time of 52 days; and considering only quality yields Rp 3,550,000 with a total time of 59 days. The two-objective optimization considering operating cost and operating time yields Rp 3,170,000 with a total time of 52 days; considering operating cost and quality yields Rp 3,380,000 with a total time of 61 days; and considering operating time and quality yields Rp 3,350,000 with a total time of 59 days.
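
The Hungarian Method for one-objective assignment is available in SciPy as linear_sum_assignment. The sketch below also shows one common way to handle a two-objective case, by combining normalized cost and time matrices into a single matrix before solving; the numbers are made up and the equal-weight combination is an illustrative choice, not necessarily the exact scheme used in the paper.

```python
# One-objective assignment via the Hungarian Method, plus a simple
# weighted-sum combination for a two-objective (cost + time) case.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost (thousands of Rp) and time (days) matrices: workers x tasks.
cost = np.array([
    [950, 800, 1200, 700],
    [600, 900, 1000, 850],
    [700, 750, 1100, 900],
    [800, 650, 950, 1000],
], dtype=float)
time = np.array([
    [15, 12, 20, 10],
    [11, 14, 16, 13],
    [12, 13, 18, 15],
    [14, 10, 15, 17],
], dtype=float)

# One-objective: minimize total cost only.
rows, cols = linear_sum_assignment(cost)
print("cost-only assignment:", list(zip(rows, cols)),
      "total cost =", cost[rows, cols].sum())

def minmax(m):
    """Scale a matrix to [0, 1] so the two objectives are comparable."""
    return (m - m.min()) / (m.max() - m.min())

# Two-objective: minimize the equal-weight sum of the normalized matrices.
combined = 0.5 * minmax(cost) + 0.5 * minmax(time)
rows2, cols2 = linear_sum_assignment(combined)
print("cost+time assignment:", list(zip(rows2, cols2)),
      "total cost =", cost[rows2, cols2].sum(),
      "total time =", time[rows2, cols2].sum())
```
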
Aplikasi Data Mining Market Basket Analysis untuk Menemukan Pola Pembelian di Toko Metro Utama Balikpapan Nadya Rahmawati; Yuki Novia Nasution; Fidia Deny Tisna Amijaya
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

With the development of information technology in transaction processing, supermarkets compete to improve quality and utility in order to disseminate information easily, quickly, accurately, and effectively. This situation encourages the development of techniques that automatically find relationships between items in a database. This study aims to analyze and identify the association rules formed using the Apriori algorithm. The steps of market basket analysis are descriptive analysis, grouping the transaction data, applying the Apriori algorithm to the data, calculating the support values, and calculating the confidence values. With a minimum support of 10% and a minimum confidence of 40%, the results are one association rule on the first day, four rules on the second day, one rule on the third day, four rules on the fourth day, six rules on the fifth day, nine rules on the sixth day, and four rules on the seventh day.
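
The support and confidence thresholds mentioned above can be illustrated by brute force on a tiny synthetic transaction list (the item names and baskets are made up); this mirrors what the Apriori algorithm computes more efficiently by pruning infrequent itemsets.

```python
# Brute-force support/confidence over a toy transaction list, mirroring
# what Apriori computes; minimum support 10% and confidence 40% as in the study.
from itertools import combinations

transactions = [  # hypothetical market baskets
    {"rice", "sugar", "oil"},
    {"rice", "sugar"},
    {"sugar", "tea"},
    {"rice", "oil"},
    {"sugar", "oil", "tea"},
]
n = len(transactions)
min_support, min_confidence = 0.10, 0.40

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / n

items = sorted(set().union(*transactions))
for a, b in combinations(items, 2):
    sup = support({a, b})
    if sup < min_support:
        continue
    for antecedent, consequent in ((a, b), (b, a)):
        conf = sup / support({antecedent})
        if conf >= min_confidence:
            print(f"{antecedent} -> {consequent}: support={sup:.2f}, confidence={conf:.2f}")
```
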
Perbandingan Metode Bootstrap Dan Jackknife Resampling Dalam Menentukan Nilai Estimasi Dan Interval Konfidensi Parameter Regresi Dessy Ariani; Yuki Novia Nasution; Desi Yuniarti
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

Regression analysis is a study that describes and evaluates the relationship between independent variables and a dependent variable, for the purpose of estimating or predicting the value of the dependent variable from the values of the independent variables. Resampling is used when the sample available for analysis is small. In this study, the Bootstrap and Jackknife methods are used. Both methods are used to find regression parameter estimates and confidence intervals for the regression parameters, applied to data on the position of public deposits in four groups of banks, namely Persero (state-owned) banks, government banks, national private banks, and foreign banks, in order to determine the best resampling method for estimating the regression parameters and their confidence intervals. Three independent variables are used in this study, namely investment loans, working capital loans, and consumer loans. The results show that the Jackknife method is the most appropriate because it has smaller standard errors, so the Jackknife method produces narrower confidence intervals.
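
The two resampling schemes compared above can be sketched in a few lines: the bootstrap resamples observation pairs with replacement, the jackknife leaves one observation out at a time, and each refits the regression to obtain standard errors and confidence intervals for the coefficients. The data below are synthetic, not the bank-deposit series.

```python
# Bootstrap (pairs) and jackknife standard errors for OLS coefficients on toy data.
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3  # small sample, three predictors (cf. the three loan variables)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([2.0, 1.5, -0.5, 0.8])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

def ols(Xm, ym):
    """Least-squares coefficient estimates."""
    return np.linalg.lstsq(Xm, ym, rcond=None)[0]

# Bootstrap: resample (X, y) pairs with replacement and refit.
B = 1000
boot = np.array([ols(X[idx], y[idx])
                 for idx in (rng.integers(0, n, size=n) for _ in range(B))])
boot_se = boot.std(axis=0, ddof=1)
boot_ci = np.percentile(boot, [2.5, 97.5], axis=0)  # percentile confidence intervals

# Jackknife: leave one observation out at a time and refit.
jack = np.array([ols(np.delete(X, i, axis=0), np.delete(y, i)) for i in range(n)])
jack_se = np.sqrt((n - 1) / n * ((jack - jack.mean(axis=0)) ** 2).sum(axis=0))

print("bootstrap SE:", np.round(boot_se, 3))
print("jackknife SE:", np.round(jack_se, 3))
print("bootstrap 95% CI:\n", np.round(boot_ci, 3))
```
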
Penggunaan Metode Kaizen Pada Tahap Improve Dalam Six Sigma Yuliana Yuliana; Yuki Novia Nasution; Wasono Wasono
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

Six Sigma is a holistic approach to solving the causes of defective-product problems and improving processes through DMAIC (Define, Measure, Analyze, Improve, Control). The causes of product defects are analyzed using the Kaizen improvement tools, namely the Five-M Checklist, 5W+1H (What, Why, Where, When, Who, How), and the Five-Step Plan, to obtain better quality and thereby create customer satisfaction. The purposes of this study were to determine the Defects Per Million Opportunities (DPMO) value and the Critical To Quality (CTQ) characteristics of the product, and to understand the production process of 220 ml RAMA-brand bottled water. The results show a DPMO value of 45,808. The company operates at a level of 3.186 sigma, with CTQ characteristics of lid defects at 41.3%, water volume at 27.1%, and glass at 25%. A p-chart is used before and after improvement to analyze the number of defective products. Before the improvement using Kaizen analysis, many data points were outside the control limits, whereas after the improvement no data points were outside the control limits and some were near the centerline of the p-chart.
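
The p-chart mentioned above flags samples whose defective proportion falls outside three-sigma limits around the average proportion. A minimal computation of those limits on hypothetical inspection data is sketched below.

```python
# p-chart control limits: p-bar +/- 3*sqrt(p-bar*(1 - p-bar)/n) per sample.
import numpy as np

# Hypothetical daily inspection results: sample sizes and defective counts.
sample_sizes = np.array([200, 200, 180, 220, 200])
defectives = np.array([9, 14, 6, 11, 8])

p = defectives / sample_sizes
p_bar = defectives.sum() / sample_sizes.sum()        # centerline
sigma = np.sqrt(p_bar * (1 - p_bar) / sample_sizes)  # per-sample sigma
ucl = p_bar + 3 * sigma
lcl = np.clip(p_bar - 3 * sigma, 0, None)            # proportions cannot be negative

out_of_control = (p > ucl) | (p < lcl)
print("p-bar =", round(p_bar, 4))
for i, flag in enumerate(out_of_control):
    print(f"sample {i + 1}: p = {p[i]:.4f}, LCL = {lcl[i]:.4f}, "
          f"UCL = {ucl[i]:.4f}, out of control = {flag}")
```
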
Analisis Cluster Non-Hirarki Dengan Menggunakan Metode K-Modes pada Mahasiswa Program Studi Statistika Angkatan 2015 FMIPA Universitas Mulawarman Nur Amah; Sri Wahyuningsih; Fidia Deny Tisna Amijaya
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

Cluster analysis is a technique used to categorize or classify objects into relatively homogeneous clusters or groups. This research aims to determine the best number of clusters for grouping students by their choice of the Statistics study program using K-Modes clustering, to identify the best and most optimal cluster centers, and to compare clusterings of 2, 3, and 4 clusters based on the Davies-Bouldin Index (DBI). The steps in this research are descriptive analysis, validity and reliability testing of the questionnaire, determining the number of clusters, computing the dissimilarity distances, calculating the cluster validation, and interpreting the best clustering result. The best clustering is selected by comparing DBI values, where smaller is better; the smallest value, obtained for two clusters, is 0.599. With two clusters, the centroid of the first cluster corresponds to Statistics as the first choice of study program, admission through SNMPTN, a satisfactory GPA, studying 4 times a week, and an average study length between 60 and 120 minutes per day; the centroid of the second cluster corresponds to Statistics as the first choice of study program, admission through SNMPTN, a very satisfactory GPA, studying 6 times a week, and an average study length of at most 60 minutes per day. The final results show that the best clustering consists of two clusters, where cluster 1 consists of 37 students and cluster 2 consists of 8 students.
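
K-Modes clustering replaces K-Means' Euclidean distance with a simple matching dissimilarity and uses modes instead of means as cluster centers. The sketch below is a minimal illustration using the kmodes package on synthetic categorical responses; the package choice and the toy data are assumptions, not the authors' setup.

```python
# Minimal K-Modes clustering sketch on synthetic categorical questionnaire data,
# using the kmodes package (assumed); the centroids are cluster modes.
import numpy as np
from kmodes.kmodes import KModes

rng = np.random.default_rng(3)
n = 45  # cohort size in the study
# Columns: program choice, admission path, GPA category, study frequency, study length.
data = np.column_stack([
    rng.choice(["first_choice", "second_choice"], size=n),
    rng.choice(["SNMPTN", "SBMPTN", "SMMPTN"], size=n),
    rng.choice(["satisfactory", "very_satisfactory"], size=n),
    rng.choice(["4x_week", "6x_week"], size=n),
    rng.choice(["<=60min", "60-120min"], size=n),
])

km = KModes(n_clusters=2, init="Huang", n_init=5, random_state=0)
labels = km.fit_predict(data)

print("cluster sizes:", [int((labels == k).sum()) for k in range(2)])
print("cluster modes (centroids):")
print(km.cluster_centroids_)
```
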
Peramalan Menggunakan Metode Fuzzy Time Series Cheng Sumartini Sumartini; Memi Nor Hayati; Sri Wahyuningsih
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

Abstract

Forecasting plays an important role for time series data, as it is required in decision-making processes. Fuzzy Time Series (FTS) is a concept from artificial intelligence used to forecast problems in which the actual data are formed as linguistic values. This study discusses the FTS method developed by Cheng to forecast the Composite Stock Price Index (CSPI) for October 2016. In FTS, the interval length is determined at the beginning of the process. Based on the Cheng FTS method with interval determination using a frequency distribution, forecasting the stock index from data for January 2011 to September 2016 gives a forecast for October 2016 of 5,367.98 points. Based on the MAPE calculation, the CSPI data from January 2011 to September 2016 give an error of 2.56% and a forecasting accuracy of 97.44%. Forecasting with the Cheng FTS method performs very well because its MAPE is below 10%.
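
The accuracy figure quoted above follows directly from the MAPE definition (accuracy = 100% minus MAPE). A short illustration of that calculation on made-up actual and forecast values is given below.

```python
# MAPE and forecast accuracy, as used to evaluate the FTS Cheng forecasts.
import numpy as np

actual = np.array([5215.0, 5386.0, 5364.8, 5422.5])    # hypothetical CSPI values
forecast = np.array([5170.0, 5290.0, 5400.0, 5380.0])  # hypothetical FTS forecasts

mape = np.mean(np.abs((actual - forecast) / actual)) * 100
accuracy = 100 - mape
print(f"MAPE = {mape:.2f}%, accuracy = {accuracy:.2f}%")
```
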

Page 1 of 2 | Total Records: 12