Articles
Found 2 Documents

Traditional-Enhance-Mobile-Ubiquitous-Smart: Model Innovation in Higher Education Learning Style Classification Using Multidimensional and Machine Learning Methods
Santiko, Irfan; Soeprobowati, Tri Retnaningsih; Surarso, Bayu; Tahyudin, Imam; Hasibuan, Zainal Arifin; Che Pee, Ahmad Naim
Journal of Applied Data Sciences Vol 6, No 1: JANUARY 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i1.598

Abstract

Learning achievement is undoubtedly influenced by each person's learning style, yet assessment of that style loses focus because of the complexity of the components involved. General frameworks such as VARK are thought to add complexity that can blur the assessment when combined with factors such as environmental conditions, teacher effectiveness, and stakeholder policies. The application of supporting information technology has so far yielded positive results, although only in specific areas. This paper proposes an innovative way to evaluate how well students learn in higher education settings: an assessment framework that is multidimensional but simplifies the features. The simplified aspects are grouped into three categories, Method, Material, and Media (3M), while the dimensions are grouped into five: Traditional, Enhance, Mobile, Ubiquitous, and Smart (TEMUS). Approximately 1,200 respondents, consisting of students and lecturers, were compiled into a dataset split into two parts, test data and training data. The trial was conducted with four models: Random Forest, SVM, Decision Tree, and K-Nearest Neighbors. The results were evaluated using MSE, R-Square, Accuracy, Recall, Precision, and F1-Score. Based on the comparison, Random Forest gave the most optimal results, with an MSE of 0.46, R-Square of 0.99, Accuracy of 0.86, Recall of 0.86, Precision of 0.87, and F1-Score of 0.84. These results show that, in addition to performing the classification process, the TEMUS dimensional framework can identify compatibility patterns between the learning styles of lecturers and students. Under this framework, teacher and student performance is deemed suitable and effective when the 3M components are assessed in the same way from both perspectives; otherwise, a review is conducted.
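
As a rough illustration of the model comparison described above, the following Python sketch trains the four named classifiers and reports the same metrics using scikit-learn. The feature matrix, class labels, and train/test split are placeholders, not the authors' TEMUS/3M dataset or preprocessing pipeline.

# Minimal sketch of the four-model comparison, assuming placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, mean_squared_error, r2_score)

rng = np.random.default_rng(0)
# Placeholder data: ~1200 respondents, 15 hypothetical 3M-by-TEMUS features, 5 style classes.
X = rng.normal(size=(1200, 15))
y = rng.integers(0, 5, size=1200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "K-Nearest Neighbors": KNeighborsClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # The abstract reports both regression-style (MSE, R-Square) and
    # classification-style (Accuracy, Recall, Precision, F1) metrics.
    print(name,
          "MSE", round(mean_squared_error(y_test, pred), 2),
          "R2", round(r2_score(y_test, pred), 2),
          "Acc", round(accuracy_score(y_test, pred), 2),
          "Recall", round(recall_score(y_test, pred, average="macro"), 2),
          "Precision", round(precision_score(y_test, pred, average="macro", zero_division=0), 2),
          "F1", round(f1_score(y_test, pred, average="macro"), 2))
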
Advancing the Measurement of MOOCs Software Quality: Validation of Assessment Tools Using the I-CVI Expert Framework
Praseptiawan, Mugi; Che Pee, Ahmad Naim; Zakaria, Mohd Hafiz; Noertjahyana, Agustinus
International Journal of Engineering, Science and Information Technology Vol 5, No 3 (2025)
Publisher : Malikussaleh University, Aceh, Indonesia

DOI: 10.52088/ijesty.v5i3.911

Abstract

The growing use of MOOCs in the post-pandemic era, particularly in developing countries, requires valid assessment tools to ensure software quality that meets users' needs. However, several tools are still used without a proper content-validation process, which risks producing biased and unrepresentative data. This study evaluates the content validity of an assessment instrument designed to measure software quality dimensions on Massive Open Online Course (MOOC) platforms, particularly in the context of the increased adoption of online learning post-pandemic in developing countries. The instrument comprises 27 statement items representing ten software quality factors: functionality, usability, reliability, performance, security, maintainability, portability, compatibility, support, and integration. Validation involved seven experts in information systems and digital learning. The method used is the item-level content validity index (I-CVI), based on a descriptive quantitative approach, with each item assessed on a 5-point Likert scale. An item is declared valid if it obtains an I-CVI score of ≥ 0.79. The analysis showed that 21 items were valid, three required revision with I-CVI values between 0.70 and 0.78, and three were invalid with I-CVI values below 0.70. The functionality, usability, support, and integration quality factors had the highest levels of validity, while the security and support dimensions showed a greater degree of divergence in the expert assessments. These findings highlight the need for content validation to ensure that MOOC quality indicators are accurate and relevant. The study also indicates the need for further validation involving real users and other validation methods, such as Aiken's V or the fuzzy analytic hierarchy process (FAHP), to enhance the reliability and practical relevance of the developed tools.
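
The I-CVI calculation referenced above is simple enough to show directly. The following Python sketch computes the index for one item as the proportion of the seven experts who rate it 4 or 5 on the 5-point scale (the 4-or-5 relevance cut-off is a common convention and an assumption here, not stated in the abstract) and maps the score to the valid / revise / invalid bands used in the study. The example ratings are illustrative, not the study's data.

# Minimal sketch of the item-level content validity index (I-CVI), assuming a
# 4-or-5 relevance cut-off on the 5-point Likert scale.
def i_cvi(ratings, relevant_threshold=4):
    """Proportion of experts whose rating meets the 'relevant' threshold."""
    relevant = sum(1 for r in ratings if r >= relevant_threshold)
    return relevant / len(ratings)

def classify_item(score, valid_cut=0.79, revise_cut=0.70):
    """Map an I-CVI score to the decision bands reported in the abstract."""
    if score >= valid_cut:
        return "valid"
    if score >= revise_cut:
        return "revise"
    return "invalid"

# Example: hypothetical ratings from the panel of seven experts for one item.
example_ratings = [5, 4, 4, 5, 3, 4, 5]
score = i_cvi(example_ratings)
print(round(score, 2), classify_item(score))  # 0.86 valid (6 of 7 experts rated 4 or 5)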