The digitalization of academic systems enhances efficiency and reliability, particularly in language competency testing. This study develops a multilingual online exam system integrated with face and voice recognition technologies at Universitas Nurul Jadid. Following the Waterfall model, the system was designed, implemented, tested, and maintained to address challenges such as network instability and delayed scoring. It supports English, Arabic, and Mandarin exams and includes automatic certificate generation. Built with Laravel 11 and TensorFlow for face detection, the system improves the reliability and security of online exams while remaining functional under low-bandwidth network conditions. Testing revealed notable improvements in efficiency and scoring accuracy, although enhancements to user interface responsiveness and scalability for high test-taker volumes remain necessary. This research provides a scalable framework for other institutions seeking to modernize academic evaluation.