The digital transformation of lecturer performance evaluation systems in higher education has improved efficiency in data collection and reporting; however, system effectiveness depends significantly on its usability. A system that is difficult to use may reduce student participation and compromise the quality of evaluation data. This study aims to evaluate the usability level of an online lecturer performance assessment questionnaire system using the System Usability Scale (SUS). The research employed a descriptive quantitative approach involving 30 active students as respondents who had completed the online questionnaire. Data were collected using a 10-item SUS instrument with a five-point Likert scale and analyzed according to standard SUS scoring procedures, followed by descriptive interpretation and validity testing using Pearson correlation. The results showed that the average SUS score was 49.75, which falls into the “Poor” category, indicating that the system’s usability level has not yet reached an acceptable standard. Although the system is generally accessible and relatively easy to learn, several aspects—particularly navigation clarity, interface consistency, and user feedback mechanisms—require improvement. The validity test confirmed that all questionnaire items were statistically valid. These findings imply that systematic redesign and iterative usability evaluation are necessary to enhance user experience, increase student participation, and strengthen the effectiveness of lecturer performance evaluation as part of sustainable academic quality assurance in higher education institutions.
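The standard SUS scoring procedure and the item-validity check described above can be sketched as follows. The scoring formula (odd items contribute rating − 1, even items contribute 5 − rating, with the sum scaled by 2.5 to a 0–100 range) is the conventional SUS computation; the function names and the item-total Pearson correlation shown for the validity test are illustrative, not taken from the study's actual analysis scripts.

```python
def sus_score(responses):
    """Compute the SUS score (0-100) for one respondent.

    responses: list of 10 Likert ratings (1-5), in questionnaire order.
    Odd-numbered items (1, 3, ..., 9) are positively worded: rating - 1.
    Even-numbered items (2, 4, ..., 10) are negatively worded: 5 - rating.
    The summed contributions (0-40) are multiplied by 2.5.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5


def mean_sus(all_responses):
    """Average SUS score over all respondents (e.g. the 30 students)."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)


def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists.

    For item validity, x would be one item's ratings across respondents
    and y the respondents' total scores (an item-total correlation sketch).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


# Example: a neutral respondent (all 3s) yields the midpoint score of 50.
print(sus_score([3] * 10))  # -> 50.0
```

A mean around 49.75, as reported in the study, sits just below this neutral midpoint, which is consistent with the "Poor" interpretation given in the abstract.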
Copyright © 2025