Software testing processes often encounter challenges in preparing realistic, consistent, and efficient test data. This study analyzes the implementation of Laravel Seeder and Faker as automated mechanisms for generating test data during the testing phase of Laravel-based web applications, using a simple library system as a case study. The main objective is to evaluate the effectiveness of the two methods in supporting application testing, focusing on time efficiency and the representativeness of the generated data under real-world conditions. Experiments were conducted on three main entities (User, Category, and Books) across three dataset scales: small (10 records), medium (100 records), and large (1000 records). The results show that the two approaches performed comparably at the small scale (2.17 s for the manual seeder versus 2.03 s for the Faker seeder at 10 records), while at the large scale the manual seeder was faster (222.58 s versus 253.43 s at 1000 records), with the Faker seeder's overhead attributable to randomized data generation. However, Faker produced more diverse and realistic datasets, making it better suited for stress testing and performance evaluation scenarios. This study concludes that the manual seeder is more appropriate for application logic validation and integration testing, whereas the Faker seeder is more effective for user behavior simulation and large-scale load testing. A hybrid approach combining both methods is recommended to balance efficiency and realism in test data generation for Laravel-based software testing.
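As an illustration of the two approaches compared in this study, the following sketch contrasts a manual seeder with hard-coded records against a Faker-based seeder with randomized records. It is a minimal, non-runnable fragment assuming a standard Laravel project with the fakerphp/faker package; the class names, the `Book` model, and the field names are hypothetical, not taken from the paper's case-study code.

```php
<?php

namespace Database\Seeders;

use App\Models\Book;          // hypothetical Eloquent model for the library system
use Illuminate\Database\Seeder;

// Manual seeder: fixed, predictable records, suited to logic
// validation and integration testing.
class ManualBookSeeder extends Seeder
{
    public function run(): void
    {
        Book::create(['title' => 'Clean Code', 'author' => 'Robert C. Martin']);
        Book::create(['title' => 'Refactoring', 'author' => 'Martin Fowler']);
    }
}

// Faker seeder: randomized, diverse records, suited to stress
// testing and large-scale load testing.
class FakerBookSeeder extends Seeder
{
    public function run(): void
    {
        $faker = \Faker\Factory::create();

        for ($i = 0; $i < 1000; $i++) {
            Book::create([
                'title'  => $faker->sentence(3),  // random three-word title
                'author' => $faker->name(),       // random realistic name
            ]);
        }
    }
}
```

In practice, each seeder would be registered in `DatabaseSeeder` and executed with `php artisan db:seed`; Laravel projects also commonly express the Faker variant through model factories rather than an explicit loop.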
Copyright © 2025