Articles

Found 24 Documents

Development of irtawsi: A User-Friendly R Package for IRT Analysis
Susanto, Hari Purnomo; Agus Maman Abadi; Haryanto; Retnawati, Heri; Ali, Rade Muhammad; Djidu, Hasan
JP3I (Jurnal Pengukuran Psikologi dan Pendidikan Indonesia) Vol. 14 No. 1 (2025): JP3I
Publisher : FAKULTAS PSIKOLOGI UIN SYARIF HIDAYATULLAH JAKARTA

DOI: 10.15408/jp3i.v14i1.32091

Abstract

The complexity of IRT analysis makes it difficult to perform manually and therefore calls for easy-to-use software. While many software options exist for IRT analysis, the high cost of paid software can make it inaccessible to many students and lecturers in Indonesia. The mirt package provides a complete, free option for IRT analysis, but it requires proficiency in the R programming language. This study aims to develop an R package for IRT analysis, built on the mirt package and equipped with a user-friendly interface, designed to be easy to use for beginners in IRT analysis. The System Development Life Cycle (SDLC) model is used for development and includes five stages: Planning, Analysis, Design, Implementation, and System. The resulting package is named irtawsi and includes functionality comparable to paid software. This package can calibrate both test and non-test instruments using various IRT models, such as the Rasch, 2PL, 3PL, 4PL, GRM, PCM, and GPCM models. The functionality of the irtawsi package includes: (1) an easy-to-use user interface, (2) automatic interpretation of analysis results, (3) a guide for IRT analysis, (4) recommendations when assumptions are not met, (5) an HTML report format for analysis results, (6) support for two languages (Indonesian and English), (7) free availability, and (8) installation on Windows, macOS, and Linux operating systems. The results of this development contribute to the calibration process, making it easier for practitioners and researchers to calibrate the instruments they are developing, especially beginners who are learning IRT.
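
The irtawsi interface itself is not reproduced here, but the kind of calibration it automates can be sketched directly with the mirt package it builds on. The snippet below is a minimal, illustrative example using mirt's bundled LSAT7 data; the function names are mirt's, not irtawsi's.

```r
# Minimal sketch of the mirt-based calibration that irtawsi wraps in a GUI
# (illustrative only; irtawsi's own interface functions are not shown).
library(mirt)

# LSAT7 ships with mirt as a frequency table of 5 dichotomous items;
# expand.table() turns it into one row per respondent.
responses <- expand.table(LSAT7)

# Fit a unidimensional 2PL model; itemtype could instead be "Rasch",
# "3PL", or "4PL" for the other dichotomous models irtawsi supports.
fit_2pl <- mirt(responses, model = 1, itemtype = "2PL", verbose = FALSE)

# Item parameters in the usual IRT parameterization.
coef(fit_2pl, IRTpars = TRUE, simplify = TRUE)

# Overall fit (M2 statistic with RMSEA), item-level fit, and theta estimates.
M2(fit_2pl)
itemfit(fit_2pl)
head(fscores(fit_2pl, method = "EAP"))
```

For polytomous instruments the same call accepts itemtype values such as "graded" (GRM), "Rasch" (PCM), or "gpcm" (GPCM).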
Polytomous scoring correction and its effect on the model fit: A case of item response theory analysis utilizing R
Santoso, Agus; Pardede, Timbul; Apino, Ezi; Djidu, Hasan; Rafi, Ibnu; Rosyada, Munaya Nikma; Retnawati, Heri; Kassymova, Gulzhaina K.
Psychology, Evaluation, and Technology in Educational Research Vol. 5 No. 1 (2022)
Publisher : Research and Social Study Institute

DOI: 10.33292/petier.v5i1.148

Abstract

In item response theory, the number of response categories used in polytomous scoring affects the fit of the model used. When the initial scoring model yields unsatisfactory estimates, corrections to the initial scoring model need to be made. This exploratory descriptive study used response data from Take Home Exam (THE) participants in the Statistical Methods I course organized by the Open University, Indonesia, in 2022. The stages of data analysis include coding the raters' scores; analyzing frequencies; analyzing model fit under the graded, partial credit, and generalized partial credit models; analyzing the characteristic response function (CRF) curves; correcting the scoring (rescaling); and re-analyzing model fit. Model fit was assessed using the chi-square test and the root mean square error of approximation (RMSEA). All model-fit analyses were performed using R. The results revealed that scoring corrections had an effect on model fit and that the partial credit model (PCM) produced the best item parameter estimates. All results and their implications for practice and future research are discussed.
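
As a rough illustration of the R workflow described above, the sketch below fits the three competing polytomous models with the mirt package and requests chi-square-type and RMSEA fit statistics. The original THE responses are not available here, so mirt's bundled Science data stand in; the paper's exact analysis may differ.

```r
# Hedged sketch: fitting and comparing GRM, PCM, and GPCM in R with mirt.
# mirt's bundled Science data (four polytomous items) stand in for the
# study's THE responses, which are not available here.
library(mirt)
data(Science)

fit_grm  <- mirt(Science, 1, itemtype = "graded", verbose = FALSE)  # GRM
fit_pcm  <- mirt(Science, 1, itemtype = "Rasch",  verbose = FALSE)  # PCM
fit_gpcm <- mirt(Science, 1, itemtype = "gpcm",   verbose = FALSE)  # GPCM

# Limited-information fit statistics: a chi-square-type test plus RMSEA,
# the two criteria named in the abstract.
M2(fit_grm,  type = "C2")
M2(fit_pcm,  type = "C2")
M2(fit_gpcm, type = "C2")

# Category response curves for item 1; after a scoring correction
# (e.g., collapsing sparse categories) the models would be refit the same way.
plot(fit_grm, type = "trace", which.items = 1)
```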
Pengaruh Self-Efficacy Terhadap Hasil Belajar Matematika Melalui Media Pembelajaran Digital [The Effect of Self-Efficacy on Mathematics Learning Outcomes Through Digital Learning Media]
Jahring; Hasan Djidu
GJET : Global Journal of Educational Technology Vol. 1 No. 1 (2024): December 2024
Publisher : Perhimpunan Ahli Teknologi Informasi dan Komunikasi Indonesia

DOI: 10.71234/gjet.v1i1.35

Abstract

This study seeks to examine the impact of self-efficacy on mathematics learning outcomes via digital learning media. This research utilized a quantitative method with a descriptive correlational design. The sample comprised 57 tenth-grade pupils selected through a saturation sampling technique. Data were gathered using a 16-item self-efficacy questionnaire on a 5-point Likert scale, along with a curriculum-based assessment of learning outcomes. Regression analysis indicated that self-efficacy significantly affected learning outcomes, with a p-value of 0.044 at a 95% confidence level. The negative regression coefficient (B = -1.024) suggested that an increase in self-efficacy was associated with a decline in students' learning results. This implies that additional factors, such as digital literacy or learning methodologies, may influence the association. These findings underscore the need for a pedagogical approach that not only bolsters self-efficacy but also fosters digital media proficiency to attain optimal learning outcomes.
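
The analysis reported above is a simple linear regression of learning outcomes on self-efficacy scores. The sketch below shows how such a model is typically fit in R; the data are simulated placeholders, not the study's data, and the variable names are assumptions for illustration only.

```r
# Illustrative only: simulated placeholder data, NOT the study's data.
set.seed(1)
n <- 57  # sample size reported in the abstract

# Hypothetical total score from 16 five-point Likert items.
self_efficacy <- rowSums(matrix(sample(1:5, n * 16, replace = TRUE), n, 16))

# Hypothetical outcome with an arbitrary negative slope, for illustration.
learning_outcome <- 75 - 0.3 * self_efficacy + rnorm(n, sd = 8)

# Simple linear regression: coefficient B, its p-value, and 95% CIs.
model <- lm(learning_outcome ~ self_efficacy)
summary(model)
confint(model, level = 0.95)
```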
The effect of scoring correction and model fit on the estimation of ability parameter and person fit on polytomous item response theory
Santoso, Agus; Pardede, Timbul; Djidu, Hasan; Apino, Ezi; Rafi, Ibnu; Rosyada, Munaya Nikma; Abd Hamid, Harris Shah
REID (Research and Evaluation in Education) Vol. 8 No. 2 (2022)
Publisher : Graduate School of Universitas Negeri Yogyakarta & Himpunan Evaluasi Pendidikan Indonesia (HEPI)

DOI: 10.21831/reid.v8i2.54429

Abstract

Scoring quality has been recognized as one of the important aspects that should concern both test developers and users. This study aimed to investigate the effect of scoring correction and model fit on the estimation of ability parameters and person fit in polytomous item response theory. The test results of 165 students in the Statistics course (SATS4410) at one of the universities in Indonesia were used to answer the research problems. The polytomous data obtained from scoring the test results were analyzed using the Item Response Theory (IRT) approach with the Partial Credit Model (PCM), Graded Response Model (GRM), and Generalized Partial Credit Model (GPCM). The effect of scoring correction and model fit on the estimation of ability and person fit was tested using multivariate analysis. Among the three models used, the GRM showed the best fit based on the p-value and RMSEA. The results of the analysis also showed that there was no significant effect of scoring correction and model fit on the estimation of test takers' ability and person fit. Based on these results, we recommend evaluating the score levels or categories used in scoring student work on a test.
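
In R, the ability estimates and person-fit statistics examined in this study can be obtained from a fitted mirt model, as sketched below. The SATS4410 responses are not available here, so mirt's bundled Science data are used purely to show the calls; the study's own analysis pipeline may differ.

```r
# Hedged sketch: ability (theta) estimation and person fit after fitting a
# polytomous IRT model in mirt; Science data used only as a stand-in.
library(mirt)
data(Science)

# GRM was the best-fitting model in the study; GPCM fit for comparison.
fit_grm  <- mirt(Science, 1, itemtype = "graded", verbose = FALSE)
fit_gpcm <- mirt(Science, 1, itemtype = "gpcm",   verbose = FALSE)

# EAP ability estimates for every respondent under each model.
theta_grm  <- fscores(fit_grm,  method = "EAP")
theta_gpcm <- fscores(fit_gpcm, method = "EAP")

# Person-fit statistics (Zh); strongly negative values flag aberrant
# response patterns.
head(personfit(fit_grm))

# How similar are the ability estimates across the two models?
cor(theta_grm[, 1], theta_gpcm[, 1])
```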