Articles

COLLABORATIVE WRITING: ANALYZING STUDENT’S RESPONSES IN WRITING ACTION RESEARCH PROPOSAL Eni Dwi Lestariningsih; Suparti Suparti; Testiana Deni Wijayatiningsih; Dwi Ampuni Agustina
PROSIDING SEMINAR NASIONAL & INTERNASIONAL 2018: PROCEEDING 1ST INSELIDEA INTERNATIONAL SEMINAR ON EDUCATION AND DEVELOPMENT OF ASIA (INseIDEA)
Publisher : Universitas Muhammadiyah Semarang


Abstract

This study aims to analyze students' responses in writing action research proposals. The research method is descriptive quantitative. The subjects were UT Semarang students of the 2018.1 cohort in Pokjar Kaliwungu Kendal who took the Action Research course. Data were collected through questionnaires, that is, a set of written questions or statements given to respondents to answer. Analysis of the students' responses before and after applying the Collaborative Writing model showed an increase in average student activeness: the students were more enthusiastic about writing Classroom Action Research proposals by writing collaboratively in groups according to the developed teaching materials, and they found it easier to determine a research topic using the Collaborative Writing model.

Keywords: analyzing, writing research, collaborative writing, students' responses
Media Alternatif Campuran Daun Pisang Kering dan Kulit Jagung untuk Meningkatkan Produktivitas Jamur Merang (Volvariella volvacea (Bull) Singer.) dalam Keranjang Suparti Suparti; Wardani Ana Safitri
Bioeksperimen: Jurnal Penelitian Biologi Vol 6, No 1: March 2020
Publisher : Universitas Muhammadiyah Surakarta

DOI: 10.23917/bioeksperimen.v6i1.10435

Abstract

PREDIKSI SIMPANAN BERJANGKA PADA BANK UMUM DAN BPR MENGGUNAKAN METODE ARIMA DENGAN OUTLIERS DAN ARIMA BOOTSTRAP Shinta Karunia Permata Sari; Rukun Santoso; Suparti Suparti
Jurnal Gaussian Vol 6, No 3 (2017): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v6i3.19349

Abstract

Time deposits are deposits that can be withdrawn only at a time agreed in advance. The position of time deposits at commercial banks and rural banks (BPR) is monitored by Bank Indonesia, because large time deposits affect the Indonesian economy, for example by facilitating public credit for opening and building businesses. However, the time-deposit position data are influenced by many other factors, so the normality assumption is not fulfilled. Methods that can overcome this problem include Box-Jenkins ARIMA with outlier detection and bootstrap ARIMA. The data used are public time deposits at commercial banks and BPR from January 2010 to April 2016. The best ARIMA model is ARIMA(1,1,0), and the best method is bootstrap ARIMA, because its out-of-sample MAPE of 4.8257% is smaller than the 6.1610% MAPE of ARIMA with outlier detection. Based on these results, the nonparametric method, which ignores the distributional assumption, is more appropriate in this case.

Keywords: deposits, ARIMA, outlier detection, bootstrap ARIMA
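The comparison in this abstract selects the forecasting method with the smaller out-of-sample MAPE. A minimal sketch of that criterion (the deposit figures and forecasts below are hypothetical illustrations, not the paper's data):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (out-of-sample accuracy)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical out-of-sample deposit positions and two competing forecasts;
# as in the abstract, the method with the smaller MAPE is preferred.
actual = [100.0, 105.0, 110.0, 108.0]
forecast_outlier = [94.0, 112.0, 103.0, 115.0]
forecast_bootstrap = [96.0, 109.0, 106.0, 112.0]

best = min(("ARIMA with outliers", forecast_outlier),
           ("bootstrap ARIMA", forecast_bootstrap),
           key=lambda m: mape(actual, m[1]))[0]
```

On these made-up numbers the bootstrap forecast tracks the actual series more closely, so it wins the MAPE comparison, mirroring the paper's conclusion.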
ANALISIS NILAI RISIKO (VALUE AT RISK) MENGGUNAKAN UJI KEJADIAN BERNOULLI (BERNOULLI COVERAGE TEST) (Studi Kasus pada Indeks Harga Saham Gabungan) Iwan Ali Sofwan; Agus Rusgiyono; Suparti Suparti
Jurnal Gaussian Vol 3, No 2 (2014): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v3i2.5912

Abstract

Risk management is a systematic procedure for decreasing the risk of an asset. Risk must be quantified in order to determine the best investment strategy. Value at Risk (VaR) is one risk measure that can be used: it estimates the worst loss that can occur in the future at a certain confidence level. There are many methods for computing VaR, but a method is only useful if it predicts future risk accurately, so each method should be evaluated with a backtesting procedure. This research analyzes two methods of computing VaR, the historical simulation and Johnson transformation approaches, to estimate the risk of the Jakarta Composite Index, and backtests them using the Bernoulli coverage test. For relative VaR, the historical simulation approach can be used if the expected probability of violation is , whereas the Johnson transformation approach can be used if the expected probability of violation is . For absolute VaR, the historical simulation approach can be used if the expected probability of violation is , whereas the Johnson transformation approach can be used if the expected probability of violation is .
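The backtesting idea in this abstract can be sketched generically: compute VaR by historical simulation, count days whose loss exceeds it, and compare the observed violation rate with the expected probability via a Bernoulli-coverage (Kupiec-style) likelihood ratio. This is an illustration under common conventions, not the paper's implementation; the empirical-quantile rule below is one of several in use.

```python
import math

def historical_var(returns, alpha=0.05):
    """Historical-simulation VaR: the empirical alpha-quantile of returns,
    reported as a positive loss (one common quantile convention)."""
    s = sorted(returns)
    idx = max(math.ceil(alpha * len(s)) - 1, 0)
    return -s[idx]

def bernoulli_coverage_lr(n, x, p):
    """Likelihood-ratio statistic of the Bernoulli coverage test for x
    violations in n days against expected violation probability p;
    asymptotically chi-square with 1 df. Requires 0 < x < n."""
    phat = x / n
    loglik = lambda q: (n - x) * math.log(1.0 - q) + x * math.log(q)
    return -2.0 * (loglik(p) - loglik(phat))
```

A small LR statistic (relative to the chi-square(1) critical value) means the observed violation rate is consistent with the expected probability, so the VaR model passes the backtest.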
ANALISIS DATA RUNTUN WAKTU MENGGUNAKAN METODE WAVELET THRESHOLDING Yudi Ari Wibowo; Suparti Suparti; Tarno Tarno
Jurnal Gaussian Vol 1, No 1 (2012): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v1i1.918

Abstract

Lately, wavelets have been used in various statistical applications. The wavelet method is nonparametric and is used in signal analysis, data compression, and time series analysis. Wavelet thresholding reconstructs a signal from a limited number of wavelet coefficients: only coefficients greater than a specified value are kept, and the rest are ignored because they are considered negligible. This specified value is called the threshold value. The smoothness of the estimate is determined by several factors, such as the wavelet function, the type of thresholding function, the resolution level, and the threshold parameter, with the threshold parameter being the most dominant; it is therefore necessary to select the optimal threshold value. In a simulation study on stationary, nonstationary, and nonlinear data, the wavelet thresholding method gives a smaller Mean Square Error (MSE) than ARIMA, so wavelet thresholding performs quite well in the analysis of time series data.
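The thresholding step described above can be illustrated directly. A minimal sketch of the two standard thresholding rules and the Donoho-Johnstone universal threshold; the abstract does not state which rule or threshold selector the paper used, so treat these as generic:

```python
import math

def universal_threshold(coeffs, n):
    """Donoho-Johnstone universal threshold sigma * sqrt(2 ln n), with sigma
    estimated from the median absolute coefficient divided by 0.6745 (the
    normal-consistency constant); coeffs are finest-level detail coefficients."""
    mad = sorted(abs(c) for c in coeffs)[len(coeffs) // 2]
    sigma = mad / 0.6745
    return sigma * math.sqrt(2.0 * math.log(n))

def hard_threshold(coeffs, t):
    """Keep coefficients whose magnitude exceeds t; zero out the rest."""
    return [c if abs(c) > t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    """Zero out small coefficients and shrink the survivors toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

Hard thresholding preserves the magnitude of retained coefficients, while soft thresholding also shrinks them, trading a little bias for a smoother estimate.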
PEMODELAN KURS RUPIAH TERHADAP DOLLAR AMERIKA SERIKAT MENGGUNAKAN REGRESI PENALIZED SPLINE BERBASIS RADIAL Kartikaningtiyas Hanunggraheni Saputri; Suparti Suparti; Abdul Hoyyi
Jurnal Gaussian Vol 4, No 3 (2015): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v4i3.9477

Abstract

The exchange rate is the price of one country's currency measured or expressed in another country's currency. A country's exchange rate fluctuates because it is determined by the demand for and supply of the currency. One method that can be used to predict the exchange rate is classical (parametric) time series analysis. However, fluctuating exchange rate data often do not fulfill the parametric assumptions. The alternative used in this research is penalized spline regression, a nonparametric regression that makes no assumption about the shape of the regression curve. The penalized spline estimator is obtained by minimizing a Penalized Least Squares (PLS) criterion. To handle numerical instability under changing data, a radial basis is used in the penalized spline estimator. Selecting the optimal model relies heavily on determining the optimal smoothing parameter (lambda) and the optimal knot points, chosen by minimizing the Generalized Cross Validation (GCV) criterion. Using daily exchange rates of the rupiah against the US dollar from June 2, 2014 to February 27, 2015, the optimal radial-basis penalized spline model uses order 2 and 13 knot points, namely 11625, 11669, 11728, 11795, 11911, 11974, 12069, 12118, 12161, 12372, 12452, 12550, and 12667, with GCV = 3904.8.

Keywords: exchange rate, penalized spline, radial basis, penalized least squares, generalized cross validation
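Model selection above minimizes GCV. For a linear smoother with hat matrix H, the usual form is GCV = n * RSS / (n - tr(H))^2, where tr(H) acts as the effective number of parameters. A scalar sketch of the criterion (the numbers in the usage line are illustrative, not the paper's):

```python
def gcv(rss, n, trace_hat):
    """Generalized Cross Validation: n * RSS / (n - tr(H))^2, where tr(H)
    is the effective number of parameters of the linear smoother."""
    return n * rss / (n - trace_hat) ** 2

# Candidate (lambda, knots) settings would each yield an (RSS, tr(H)) pair;
# the setting with the smallest GCV is selected.
score = gcv(rss=10.0, n=100, trace_hat=10)
```

Increasing the penalty lambda raises RSS but lowers tr(H); GCV balances the two, which is why the optimal lambda and knot set are chosen jointly at the GCV minimum.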
PEMODELAN REGRESI SPLINE TRUNCATED UNTUK DATA LONGITUDINAL ( Studi Kasus : Harga Saham Bulanan pada Kelompok Saham Perbankan Periode Januari 2009 – Desember 2015 ) Khoirunnisa Nur Fadhilah; Suparti Suparti; Tarno Tarno
Jurnal Gaussian Vol 5, No 3 (2016): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v5i3.14699

Abstract

Stocks are securities that can be bought and sold by individuals or institutions as a sign of ownership of a person or business entity within a company. By market capitalization, stocks are divided into three groups: large capitalization (big-cap), medium capitalization (mid-cap), and small capitalization (small-cap). Stock prices fluctuate because of several factors, one of which is inflation. Longitudinal data are observations of n mutually independent subjects, each of which is observed repeatedly at different, mutually dependent time points. The longitudinal stock-price data are modeled with a truncated spline nonparametric regression approach. The best spline model depends on determining the optimal knot points, which minimize the Generalized Cross Validation (GCV) value. The best truncated spline regression is a spline of order 2 with 3 knot points for each subject in the longitudinal data. Using this model, the MAPE for each subject is 29.93% for PT Bank Mandiri (Persero) Tbk., 16.67% for PT Bank Bukopin Tbk., and 12.99% for PT Bank Bumi Arta Tbk.

Keywords: stocks, longitudinal data, truncated spline, GCV
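The truncated spline basis underlying the model above can be sketched as follows. This is a generic truncated power basis; the function names are illustrative, and "degree" here is the polynomial degree (conventions for "order" vary across spline references):

```python
def truncated_power(x, knot, degree):
    """Truncated power term (x - knot)_+^degree used in truncated spline regression."""
    return max(x - knot, 0.0) ** degree

def spline_design_row(x, knots, degree=2):
    """One design-matrix row for a truncated spline: the polynomial terms
    1, x, ..., x^degree followed by one truncated power term per knot."""
    return [x ** d for d in range(degree + 1)] + \
           [truncated_power(x, k, degree) for k in knots]
```

Fitting then reduces to ordinary least squares on these rows; each truncated term is zero below its knot, which is what lets the fitted curve change shape at the knot locations selected by GCV.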
PEMODELAN INDEKS HARGA SAHAM GABUNGAN (IHSG) MENGGUNAKAN MULTIVARIATE ADAPTIVE REGRESSION SPLINES (MARS) Ndaru Dian Darmawanti; Suparti Suparti; Diah Safitri
Jurnal Gaussian Vol 3, No 4 (2014): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v3i4.8088

Abstract

The Composite Stock Price Index (CSPI) provides historical information about the movement of all listed stocks up to a certain date. The CSPI is often used by investors as a representation of the overall stock price, from which the possibility of an increase or decrease in stock prices can be analyzed. According to previous research, the macroeconomic variables affecting the CSPI include inflation, the interest rate, and the exchange rate of the rupiah against the US dollar. The MARS method is particularly suitable for analyzing the CSPI because many variables affect it and because, in practice, a specific data pattern is very difficult to identify. The purpose of this research is to obtain a MARS model that can be used to analyze the CSPI's movement. The MARS model is selected by the Cross Validation (CV) method from combinations of the number of basis functions (BF), maximum interaction (MI), and minimum observations between knots (MO). In this case, the best model has BF = 9, MI = 2, and MO = 1. The accuracy of the MARS model, a MAPE of 14.32588%, indicates that the model can be used.

Keywords: CSPI, macroeconomics, MARS, CV, MAPE
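MARS builds its model from hinge (basis) functions and their products; the MI = 2 setting above means each basis function is a product of at most two hinges. A generic sketch of these building blocks (not the paper's fitted model):

```python
def hinge(x, knot, sign=1):
    """MARS hinge function: max(0, x - knot) for sign=+1,
    or the mirrored max(0, knot - x) for sign=-1."""
    return max(0.0, sign * (x - knot))

def mars_basis_product(x_vector, terms):
    """A MARS basis function as a product of hinges; `terms` is a list of
    (variable_index, knot, sign) tuples, so len(terms) <= MI."""
    prod = 1.0
    for idx, knot, sign in terms:
        prod *= hinge(x_vector[idx], knot, sign)
    return prod
```

The fitted MARS model is a weighted sum of such products (BF of them), with the weights estimated by least squares; a two-term product is how MARS captures an interaction between, say, inflation and the exchange rate.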
PROYEKSI DATA PRODUK DOMESTIK BRUTO (PDB) DAN FOREIGN DIRECT INVESTMENT (FDI) MENGGUNAKAN VECTOR AUTOREGRESSIVE (VAR) Indra Satria; Hasbi Yasin; Suparti Suparti
Jurnal Gaussian Vol 4, No 4 (2015): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v4i4.10224

Abstract

Gross Domestic Product (GDP) and Foreign Direct Investment (FDI) are economic instruments that are interrelated and often used for a country's economic development. One method that can be used to predict these two variables is the Vector Autoregressive (VAR) method. The VAR method assumes that the series to be forecast are interrelated and stationary in mean and variance, and that the resulting errors pass tests of independence and normality. In the identification stage, the optimal lag is chosen by the AIC value; in this case lag 4 is optimal. The Granger causality test is used to test the relationship between the variables, and the Augmented Dickey-Fuller (ADF) test is used to test stationarity. In the parameter estimation stage, the Ordinary Least Squares (OLS) method is used to estimate the model parameters. After the model is obtained, it must be verified: the residuals must pass the independence test and the multivariate normality test. With both verification tests fulfilled, projections are made for the next 5 years, with an R-square of 64% for GDP and 48% for FDI.

Keywords: FDI, GDP, VAR, causality, independence, multivariate normal, R-square
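Once the VAR coefficients are estimated by OLS, forecasting is just repeated matrix-vector arithmetic. A minimal sketch with a VAR(1) for brevity (the paper uses lag 4; the coefficient values below are illustrative, not estimates from the paper):

```python
def var1_forecast(c, A, x_prev):
    """One-step forecast from a VAR(1): x_t = c + A x_{t-1}, computed in
    pure Python; c is the intercept vector, A the coefficient matrix."""
    return [ci + sum(a * x for a, x in zip(row, x_prev))
            for ci, row in zip(c, A)]

# Illustrative 2-variable system (e.g. GDP and FDI after transformation);
# multi-step projections come from feeding each forecast back in.
x_next = var1_forecast(c=[1.0, 0.0],
                       A=[[0.5, 0.1], [0.2, 0.3]],
                       x_prev=[2.0, 1.0])
```

A VAR(4), as in the paper, extends this by summing four such A-matrix terms, one per lagged observation.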
KETEPATAN KLASIFIKASI PEMILIHAN METODE KONTRASEPSI DI KOTA SEMARANG MENGGUNAKAN BOOSTSTRAP AGGREGATTING REGRESI LOGISTIK MULTINOMIAL Ahmad Reza Aditya; Suparti Suparti; Sudarno Sudarno
Jurnal Gaussian Vol 4, No 1 (2015): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

DOI: 10.14710/j.gauss.v4i1.8099

Abstract

Classification is a statistical method for grouping data systematically. A classification problem arises when there are observations belonging to one of several categories that cannot be identified directly but must be assigned using a measure. A classification method commonly used to analyze such problems is logistic regression analysis. However, this method can give unstable parameter estimates, so to obtain stable parameters for the multinomial logistic regression model a bootstrap approach, bootstrap aggregating (bagging), is used. The purpose of this study is to compare the classification accuracy of the multinomial logistic regression model and the bootstrap aggregating model using family-planning (contraceptive choice) data from Semarang. Bagging multinomial logistic regression achieves its best classification accuracy of 51% at 50 bootstrap replications; this model reduces the classification error by up to 2% compared with the multinomial logistic regression model, whose classification accuracy is 49%.

Keywords: logistic regression, bootstrap aggregating, classification accuracy
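The bagging procedure above, refitting on bootstrap resamples and aggregating by majority vote, can be sketched generically. The trivial base learner in the usage below (predicting the resample's majority class) stands in for the paper's multinomial logistic regression, which would require a full estimation routine:

```python
import random
from collections import Counter

def bagging_predict(train, x, fit, predict, n_boot=50, seed=0):
    """Bootstrap aggregating: fit the base classifier on n_boot bootstrap
    resamples of the training data, then return the majority-vote label
    for the new observation x (n_boot=50 matches the paper's best run)."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_boot):
        resample = [rng.choice(train) for _ in range(len(train))]
        model = fit(resample)
        votes.append(predict(model, x))
    return Counter(votes).most_common(1)[0][0]

# Toy base learner: "model" is just the resample's majority class label.
fit = lambda data: Counter(label for _, label in data).most_common(1)[0][0]
predict = lambda model, x: model
```

Because each resample perturbs the training set, averaging the resulting classifiers by vote stabilizes an unstable base learner, which is exactly the motivation the abstract gives for bagging the multinomial logistic model.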