Articles

Found 32 Documents

Fit of the 2011 Indonesian Mortality Table to Gompertz's and Makeham's Law using Maximum Likelihood Estimation Dino Agustin Putra; Nina Fitriyati; Mahmudi Mahmudi
InPrime: Indonesian Journal of Pure and Applied Mathematics Vol 1, No 2 (2019)
Publisher : Department of Mathematics, Faculty of Sciences and Technology, UIN Syarif Hidayatullah

Full PDF (2531.889 KB) | DOI: 10.15408/inprime.v1i2.13276

Abstract

This research discusses the estimation of the parameters of Gompertz's law and Makeham's law using the Maximum Likelihood Estimation method. The parameters of Gompertz's law are estimated numerically with the Newton-Raphson method. For Makeham's law, we use the Lagrange multiplier method to handle the constraints 0.001 < A < 0.003, 10^(-6) < B < 10^3, and 1.075 < C < 1.115, and Broyden's method to estimate the parameters numerically. The results show that, under Gompertz's law, parameter B converges to 0.005749 and parameter C converges to 1.024738. Under Makeham's law, the estimated parameters that satisfy the constraints are A converging to 0.00300344, B converging to 0.0002716465, and C converging to 1.113395. Based on the Average Relative Error (ARE) calculated from the estimated px, the 2011 Indonesian Mortality Table (the 2011 TMI) for both men and women is approximated more accurately by Gompertz's law than by Makeham's law. The px estimates under Gompertz's law are very close to the px values in the 2011 TMI (with Absolute Percentage Errors of less than 1%) at the age intervals 0 – 10, 10 – 20, 20 – 30, and 60 – 70 years for men, and 0 – 10, 10 – 20, and 70 – 80 years for women.
Keywords: parameter estimation; Newton-Raphson method; Broyden method; Lagrange multiplier method.
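The abstract above hinges on Newton-Raphson iteration for solving the likelihood equations. As a minimal, generic sketch (in Python, on a toy equation rather than the paper's actual Gompertz likelihood), the method looks like this:

```python
import math

def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=100):
    """Newton-Raphson iteration: x_{k+1} = x_k - f(x_k) / f'(x_k),
    stopping when the step size falls below `tol`."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Toy example: solve exp(x) - 2 = 0; the root is ln 2.
root = newton_raphson(lambda x: math.exp(x) - 2.0,
                      lambda x: math.exp(x), x0=1.0)
```

In the MLE setting, `f` plays the role of the score (derivative of the log-likelihood) and `fprime` its derivative; the multivariate analogue replaces the division by solving against the Hessian.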
Web Traffic Anomaly Detection using Stacked Long Short-Term Memory Fathu Rahman; Taufik Edy Sutanto; Nina Fitriyati
InPrime: Indonesian Journal of Pure and Applied Mathematics Vol 3, No 2 (2021)
Publisher : Department of Mathematics, Faculty of Sciences and Technology, UIN Syarif Hidayatullah

DOI: 10.15408/inprime.v3i2.21879

Abstract

An example of anomaly detection is detecting behavioral deviations in internet use. This behavior can be seen from web traffic, i.e., the amount of data sent and received by visitors to a website. In this study, anomaly detection is carried out using stacked Long Short-Term Memory (LSTM). First, a stacked LSTM is used to build forecasting models on training data. The error of the predictions on test data is then used to detect anomalies. We perform hyperparameter optimization on the sliding-window parameter; a sliding window is a sub-sequence of the time-series data used as input to the prediction model. The case study uses the real Yahoo Webscope S5 web traffic collection, which consists of 67 datasets, each with three features: timestamp, value, and anomaly label. The results show an average sensitivity of 0.834 and an average Area Under the ROC Curve (AUC) of 0.931. In addition, for some of the datasets, the choice of window size affects the sum of the sensitivity and AUC values. Anomaly detection with stacked LSTM is described in detail and can be applied to other, similar anomaly-detection problems.
Keywords: time-series data; sliding window; web traffic; window size.
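The sliding-window construction that this abstract tunes as a hyperparameter can be sketched as follows; this is a generic Python illustration (the function name and output shapes are assumptions, not the authors' code):

```python
import numpy as np

def make_windows(series, window_size):
    """Slice a 1-D series into (X, y) pairs: each window of `window_size`
    consecutive values is the model input, and the value that immediately
    follows it is the prediction target."""
    X = np.array([series[i:i + window_size]
                  for i in range(len(series) - window_size)])
    y = np.array(series[window_size:])
    return X, y

X, y = make_windows(list(range(10)), window_size=3)
```

The forecaster (here, a stacked LSTM) is trained on `X` to predict `y`; at test time, points whose prediction error exceeds a threshold are flagged as anomalies.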
Calculation and Management of Premium Funds in Sharia Insurance based on Langevin Type Model of Return on Investment Khusnul Khotimah; Mahmudi Mahmudi; Nina Fitriyati
InPrime: Indonesian Journal of Pure and Applied Mathematics Vol 1, No 2 (2019)
Publisher : Department of Mathematics, Faculty of Sciences and Technology, UIN Syarif Hidayatullah

Full PDF (2539.199 KB) | DOI: 10.15408/inprime.v1i2.13631

Abstract

This research discusses the calculation of term life-insurance premiums based on sharia principles. The difference between the conventional method and the sharia principle lies in the concept of interest rates: here, the conventional interest rate is replaced by a Return on Investment (ROI) that changes stochastically following a Langevin-type model. Monte Carlo simulation is applied to generate the ROI from several initial values. For the premium-management mechanism, we apply a system without a savings element under the Al-Mudharabah relationship, in which participants receive a share of the operating surplus if they make no claim before the end of the agreement period. We assume that administrative expenses are charged only in the first year, so the operating surplus is larger after the first year. We run 20 Monte Carlo simulations to generate the ROI with initial values of 7.5%, 9%, and 10%. The results show that the annual premium becomes smaller as the initial ROI grows, and vice versa: when the initial ROI is small, the company earns a smaller return, so the annual premium must be larger. The annual premium for male participants is higher than for female participants because the mortality rate of men is higher than that of women. Other factors that raise the annual premium are a longer agreement period and larger compensation.
Keywords: Langevin-type model; stochastic differential equation; system without a savings element; Al-Mudharabah principle; Monte Carlo simulation.
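The stochastic ROI described above can be illustrated with an Euler-Maruyama discretisation of a mean-reverting Langevin-type SDE; this Python sketch, including all parameter values, is a generic assumption rather than the paper's exact model:

```python
import random

def simulate_roi(r0, mean, speed, sigma, n_steps, dt=1.0, seed=42):
    """Euler-Maruyama discretisation of a Langevin-type (mean-reverting)
    SDE: dr = speed * (mean - r) dt + sigma dW."""
    rng = random.Random(seed)
    r = r0
    path = [r]
    for _ in range(n_steps):
        r += speed * (mean - r) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path

# One simulated ROI path starting from the 7.5% initial value.
path = simulate_roi(r0=0.075, mean=0.08, speed=0.5, sigma=0.01, n_steps=20)
```

Repeating this 20 times with different seeds and initial values (7.5%, 9%, 10%) mirrors the Monte Carlo scheme the abstract describes; the simulated ROI then replaces the fixed interest rate in the premium discounting.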
A Monte Carlo Simulation Study to Assess Estimation Methods in CFA on Ordinal Data Nina Fitriyati; Madona Yunita Wijaya
CAUCHY: Jurnal Matematika Murni dan Aplikasi Vol 7, No 3 (2022): CAUCHY: JURNAL MATEMATIKA MURNI DAN APLIKASI
Publisher : Mathematics Department, Universitas Islam Negeri Maulana Malik Ibrahim Malang

DOI: 10.18860/ca.v7i3.14434

Abstract

Likert-type scale data are ordinal and are commonly used to measure latent constructs in the educational, social, and behavioral sciences. Ordinal observed variables are often treated as continuous variables in factor analysis, which may lead to misleading statistical inferences. Two robust estimators, unweighted least squares (ULS) and diagonally weighted least squares (DWLS), have been developed to handle ordinal data in confirmatory factor analysis (CFA). Using synthetic data generated in a Monte Carlo experiment, we study the behavior of DWLS and ULS and compare their performance with normal-theory-based maximum likelihood (ML) and generalized least squares (GLS) under different experimental conditions. The simulation results indicate that both DWLS and ULS yield consistently accurate parameter estimates across all conditions considered in this study. Likert data can be treated as continuous under ML or GLS with only trivial bias when the scale has at least five points; however, these methods generally fail to provide a satisfactory fit. An empirical study on psychological measurement data is reported to illustrate the theoretical and statistical considerations that must be taken into account when ordinal data are used in a CFA model.
Keywords: confirmatory factor analysis, diagonally weighted least square, generalized least square, Likert data, maximum likelihood.
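The core data-generating step of such a simulation, discretising continuous latent scores into Likert points, can be sketched as follows (the cut points and generating model are illustrative assumptions, not the paper's design):

```python
import numpy as np

def to_likert(latent, cuts=(-1.5, -0.5, 0.5, 1.5)):
    """Discretise continuous latent scores into ordinal categories 1..5
    by thresholding at fixed cut points on the latent scale."""
    return np.digitize(latent, cuts) + 1

# Simulate latent normal scores and observe them as 5-point Likert data.
rng = np.random.default_rng(0)
likert = to_likert(rng.standard_normal(1000))
```

A Monte Carlo study then fits the CFA model to many such ordinal samples with each estimator (ML, GLS, ULS, DWLS) and compares bias and fit across conditions.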
Application Bootstrap to Estimate the Confidence Intervals of NO2 Levels in the Kriging Method Nina Fitriyati; Yanne Irene; Azzahra Benita
Jurnal EurekaMatika Vol 11, No 2 (2023): Jurnal Eurekamatika
Publisher : Universitas Pendidikan Indonesia (UPI)

DOI: 10.17509/jem.v11i2.66241

Abstract

NO2 levels must be monitored continuously to minimize negative environmental impacts. In general, estimating NO2 levels with the Kriging method produces point estimates. In this study, we develop an interval estimate for NO2 levels by applying the quasi-random Bootstrap resampling method. We use data on NO2 levels in 14 areas of South Tangerang City in 2021. The data are stationary, so the appropriate estimation method is ordinary kriging. To construct the 95% confidence intervals, we apply 1000 resamplings in the Bootstrap. The estimation results show that the lowest 95% confidence interval for NO2 levels is 25.23123 – 27.82351 μg/m3, in Pamulang Timur Village, and the highest is 45.59886 – 46.08371 μg/m3, in Ciater Village.
Keywords: Bootstrap, Confidence Interval, Kriging, Quasi-Random.
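The percentile-bootstrap construction of a confidence interval can be sketched generically as follows; this is a plain resampling illustration in Python with made-up data, and it does not reproduce the paper's quasi-random scheme or the kriging predictor:

```python
import random

def bootstrap_ci(data, stat, n_boot=1000, alpha=0.05, seed=1):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, and take the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data])
                  for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical NO2 readings; the CI here is for their mean.
data = [25.2, 26.1, 27.8, 26.5, 25.9, 27.0, 26.4, 25.7]
lo, hi = bootstrap_ci(data, lambda xs: sum(xs) / len(xs))
```

In the paper's setting, `stat` would be the ordinary-kriging estimate at a location, so each resample yields one kriged value and the 1000 replicates give the 95% interval.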
Prediction of the Change Rate of Tumor Cells, Healthy Host Cells, and Effector Immune Cells in a Three-Dimensional Cancer Model using Extended Kalman Filter Fitriyati, Nina; Faizah, Salma Abidah; Sutanto, Taufik Edy
Jambura Journal of Biomathematics (JJBM) Volume 5, Issue 1: June 2024
Publisher : Department of Mathematics, Universitas Negeri Gorontalo

DOI: 10.37905/jjbm.v5i1.24672

Abstract

In this study, we develop and implement the Extended Kalman Filter (EKF) to forecast the rate of change in tumor cells, healthy host cells, and effector immune cells within the Itik-Banks model. This novel application of the EKF in cancer dynamics modeling aims to provide precise real-time estimates of cellular interactions, in particular by constructing a new state-space representation from the Itik-Banks model, which we linearize with a first-order Taylor series. Numerical simulations were performed to assess the accuracy of this state space using data from William Gilpin's GitHub repository. The results show that the EKF predictions align strongly with the actual data: in both the prior and posterior steps, the predictions for tumor and healthy host cells agree closely with the observations. The EKF captures the oscillatory behavior of the tumor and healthy host cell populations well; the peaks and troughs of the predictions align closely with the actual data. For effector immune cells, however, the oscillatory nature of the data produces slight deviations, which poses a significant challenge for future updates of the state-space representation. Despite these minor discrepancies, the EKF performs strongly on both the training and testing data, with the posterior-step estimates significantly improving on the prior-step accuracy. The study also emphasizes the importance of data availability for accurate predictions, noting a symmetric Mean Absolute Percentage Error (sMAPE) of 35.92% when data are unavailable; prompt correction with new data is essential to maintain accuracy. This research underscores the EKF's potential for real-time monitoring and prediction in complex biological systems.
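The prior/posterior cycle the abstract refers to can be sketched as a single textbook-style EKF step; the toy linear system below is an illustrative assumption, not the authors' Itik-Banks state space:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One prior (predict) / posterior (update) cycle of an Extended
    Kalman Filter with nonlinear maps f, h and their Jacobians."""
    # Prior step: propagate the state and its covariance.
    x_prior = f(x)
    F = F_jac(x)
    P_prior = F @ P @ F.T + Q
    # Posterior step: correct the prior with the new measurement z.
    H = H_jac(x_prior)
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_post = x_prior + K @ (z - h(x_prior))
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post

# Toy scalar linear system (so the Jacobians are constant matrices).
x, P = ekf_step(x=np.array([0.0]), P=np.eye(1), z=np.array([1.0]),
                f=lambda x: 0.9 * x, F_jac=lambda x: np.array([[0.9]]),
                h=lambda x: x, H_jac=lambda x: np.eye(1),
                Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
```

For the cancer model, `f` would be one integration step of the three-species ODEs and `F_jac` its first-order Taylor linearisation, exactly the construction the abstract describes.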
Spatial and temporal variation of the b-value for earthquakes in the Central Sulawesi, Gorontalo, and surrounding areas using the robust fitting method Fitriyati, Nina; Wijaya, Madona Yunita; Bisyri, M. Alvi
Majalah Ilmiah Matematika dan Statistika Vol 22 No 2 (2022): Majalah Ilmiah Matematika dan Statistika
Publisher : Jurusan Matematika FMIPA Universitas Jember

DOI: 10.19184/mims.v22i2.33817

Abstract

This study discusses variations in seismicity and tectonics, modeled by the Gutenberg-Richter relationship, for earthquakes in Central Sulawesi, Gorontalo, and the surrounding areas, using the Robust Fitting Method (RFM) with Tukey's bisquare weight function. The declustering of the earthquake data is carried out using the Reasenberg equation. The values of both parameters are analyzed spatially and temporally. In the spatial analysis, the research area is divided into 43 grids; in the temporal analysis, it is divided into zones A and B. The data are grouped using a sliding time window, i.e., windows of 50 earthquake catalog events with an overlap of 5 events. The spatial analysis shows that the b-values range from 0.38 to 1.19. Areas with low b-values (0.38 – 0.7) occur around the Palu-Koro Fault, i.e., Palu city, the Makassar Strait, and up to Toli-Toli, and also in the northern region of Gorontalo, i.e., the subducting plate of the Sulawesi Sea. Meanwhile, high b-values (0.71 – 1.19) occur in the Tomini Bay area, a region with frequent earthquakes but small potential to generate large-scale ones. The temporal b-value estimates in zones A and B range from 0.38 to 1.25. The b-values appear to decrease before the major earthquakes of 1996 and 2018 in zone A, and before those of 1990, 1991, 2000, and 2008 in zone B. However, the b-value could not serve as a precursor of the big earthquake of 1997.
Keywords: Tukey's bisquare, Reasenberg equation, Gutenberg-Richter relationship, sliding time window, Robust Fitting Method. MSC2020: 86A15
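The robust fit of the Gutenberg-Richter relation log10 N = a - b*M with Tukey's bisquare weights can be sketched via iteratively reweighted least squares; the synthetic catalog and tuning constant below are illustrative assumptions, not the study's data:

```python
import numpy as np

def gr_bvalue_robust(M, logN, c=4.685, n_iter=20):
    """Fit log10 N = a - b*M by iteratively reweighted least squares
    with Tukey's bisquare weights, which down-weight outliers."""
    X = np.column_stack([np.ones_like(M), -M])   # columns for (a, b)
    beta = np.linalg.lstsq(X, logN, rcond=None)[0]
    for _ in range(n_iter):
        r = logN - X @ beta
        # Robust residual scale from the median absolute deviation.
        s = max(np.median(np.abs(r - np.median(r))) / 0.6745, 1e-9)
        u = r / (c * s)
        w = np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)
        # Weighted normal equations: (X'WX) beta = X'W y.
        beta = np.linalg.solve((X * w[:, None]).T @ X,
                               (X * w[:, None]).T @ logN)
    return beta  # (a, b)

# Synthetic Gutenberg-Richter data with true b = 1.0.
rng = np.random.default_rng(7)
M = np.arange(4.0, 7.0, 0.1)
logN = 6.0 - 1.0 * M + rng.normal(0.0, 0.05, M.size)
a, b = gr_bvalue_robust(M, logN)
```

Applying this fit per spatial grid cell, or per 50-event sliding window, yields the spatial and temporal b-value maps the abstract analyzes.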
Tabarru’ Fund Sharia Insurance Using The 2019 Mortality Table, Mortality Law and Cost of Insurance Method Wulandari, Fitria Sisca; Fauziah, Irma; Fitriyati, Nina
Mathline : Jurnal Matematika dan Pendidikan Matematika Vol. 8 No. 4 (2023): Mathline: Jurnal Matematika dan Pendidikan Matematika
Publisher : Universitas Wiralodra

DOI: 10.31943/mathline.v8i4.542

Abstract

The sharia life insurance program manages funds in two ways: with a savings element and without one. Programs without a savings element have no explicit division of the tabarru' funds that participants must pay, so it is the company's job to calculate it. The percentage of tabarru' funds is calculated with the Cost of Insurance (COI) method, which uses several parameters: mortality tables, investment value, management fees, and discount factors. This research discusses how to obtain tabarru' funds with the Cost of Insurance method using the 2019 Indonesian mortality table, both directly and fitted to the Gompertz, Makeham, and De Moivre mortality laws. Based on the case illustration, the results show that the tabarru' funds participants must pay are directly proportional to the participant's age, the management fees, and the insurance money, but inversely proportional to the investment value. The tabarru' funds are largest under the De Moivre mortality table, which can be a consideration for the company, while the Makeham mortality table can be a consideration for participants.
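A cost-of-insurance style charge of the kind described above can be sketched as follows; the formula, parameter names, and figures are illustrative assumptions, not the paper's actual COI specification:

```python
def cost_of_insurance(q_x, sum_assured, fund_value, interest):
    """Hypothetical cost-of-insurance style tabarru' charge: the mortality
    rate q_x times the net amount at risk, discounted for one period.
    The exact formula is an illustrative assumption."""
    net_amount_at_risk = max(sum_assured - fund_value, 0.0)
    return q_x * net_amount_at_risk / (1.0 + interest)

# Illustrative figures only: q_x from a mortality table at the
# participant's age, a sum assured, an accumulated investment value.
base = cost_of_insurance(q_x=0.002, sum_assured=100_000_000.0,
                         fund_value=10_000_000.0, interest=0.05)
```

Even in this simplified form, the qualitative behavior the abstract reports is visible: the charge grows with the mortality rate (hence with age) and the sum assured, and shrinks as the investment value grows.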