The growing use of MOOCs in the post-pandemic era, particularly in developing countries, requires valid assessment tools to ensure software quality that meets users' needs. However, several instruments are still used without a proper content validation process, which risks producing biased and unrepresentative data. This study evaluates the content validity of an assessment instrument designed to measure software quality dimensions of Massive Open Online Course (MOOC) platforms, particularly in the context of the increased adoption of online learning after the pandemic in developing countries. The instrument comprises 27 statement items representing ten software quality factors: functionality, usability, reliability, performance, security, maintainability, portability, compatibility, support, and integration. The validation was carried out by seven experts in information systems and digital learning. The item-level content validity index (I-CVI) was applied within a quantitative descriptive approach, with each item rated on a 5-point Likert scale. An item is declared valid if it obtains an I-CVI of at least 0.79. The analysis showed that 21 items were valid, three required revision (I-CVI between 0.70 and 0.78), and three were invalid (I-CVI below 0.70). The functionality, usability, support, and integration quality factors had the highest levels of validity, while the security and support dimensions showed a greater degree of divergence in the expert assessments. These findings highlight the need for content validation to ensure that MOOC quality indicators are accurate and relevant. The study also indicates the need for further validation involving real users and other validation methods, such as Aiken's V or the fuzzy analytic hierarchy process (FAHP), to enhance the reliability and practical relevance of the instrument developed.
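
The following is a minimal sketch of the I-CVI computation and the decision thresholds described above. It assumes, as the abstract does not specify the relevance cut-off, that ratings of 4 or 5 on the 5-point Likert scale count as "relevant" agreement; the function names and example ratings are hypothetical.

```python
# Minimal sketch of the I-CVI computation and decision rule described in the abstract.
# Assumption (not stated in the abstract): ratings of 4 or 5 on the 5-point
# Likert scale are counted as "relevant" agreement.

def i_cvi(ratings, relevant_min=4):
    """Item-level content validity index: proportion of experts rating the
    item at or above `relevant_min`."""
    relevant = sum(1 for r in ratings if r >= relevant_min)
    return relevant / len(ratings)

def classify(score):
    """Decision rule reported in the study: >= 0.79 valid,
    0.70-0.78 needs revision, < 0.70 invalid."""
    if score >= 0.79:
        return "valid"
    if score >= 0.70:
        return "needs revision"
    return "invalid"

# Hypothetical ratings from seven experts for a single item.
example_ratings = [5, 4, 4, 5, 3, 4, 5]
score = i_cvi(example_ratings)
print(f"I-CVI = {score:.2f} -> {classify(score)}")  # I-CVI = 0.86 -> valid
```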