No-reference Video Quality Assessment (VQA) presents a critical challenge in digital multimedia. This study explores video quality measurement using the DOVER framework combined with transfer learning. While existing approaches often rely on end-to-end fine-tuning that requires substantial computational resources, this study introduces and validates a more efficient implementation. The model was built with Python on Google Colab and trained on the KoNViD-1k dataset, using a head-only transfer learning approach on top of the DOVER framework. This addresses a key research gap in resource-efficient no-reference VQA, as many state-of-the-art models remain impractical for real-world deployment due to their high computational demands. Training was conducted over 10 epochs with resource efficiency in mind. The head-only transfer learning technique reduces GPU memory usage while showing only a minimal accuracy gap (1%–2%) relative to full end-to-end fine-tuning. Unlike previous studies that sacrifice performance for efficiency, this approach maintains competitive accuracy while significantly lowering computational cost. The results show that the proposed method delivers accurate and efficient video quality assessments, confirming the potential of the DOVER framework for no-reference VQA. This study demonstrates a practical balance between computational efficiency and assessment accuracy through transfer learning.
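The head-only transfer learning described above can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the study's actual code: the tiny `Sequential` module stands in for DOVER's pretrained feature extractor, and the data, dimensions, and hyperparameters are placeholders chosen only to keep the sketch self-contained and runnable.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained backbone (in the study this
# would be DOVER's feature extractor); a tiny network keeps the sketch
# self-contained.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 32), nn.ReLU())
head = nn.Linear(32, 1)  # regression head predicting a quality score

# Head-only transfer learning: freeze every backbone parameter so only
# the head receives gradients, reducing GPU memory and compute.
for p in backbone.parameters():
    p.requires_grad = False

# The optimizer sees only the head's parameters.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on dummy data (a batch of 4 small "frames" with
# placeholder target quality scores).
x = torch.randn(4, 3, 8, 8)
y = torch.randn(4, 1)

with torch.no_grad():        # backbone runs in inference mode
    feats = backbone(x)
pred = head(feats)
loss = loss_fn(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because gradients and optimizer state are kept only for the head, peak GPU memory stays far below that of end-to-end fine-tuning, which is the efficiency trade-off the study evaluates.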