This study analyzes the effectiveness of Halving Random Search Cross Validation as an alternative for hyperparameter optimization in machine learning models, compared with Grid Search Cross Validation and Random Search Cross Validation. The dataset used is Internet Service Churn, with four algorithms: KNN, Decision Tree, SVM, and Gaussian Naive Bayes. Testing uses 10-fold cross validation with three repetitions to ensure the validity of the results. The experimental results show that Halving Random Search Cross Validation achieves accuracy, precision, and recall competitive with Grid Search (difference < 0.5%) in most models, with computational-time savings of up to 62–74% on KNN, Decision Tree, and SVM. However, on Gaussian Naive Bayes, whose hyperparameter space is small, the method is slower because of the successive-halving overhead. Random Search is fast but less stable on SVM and Gaussian Naive Bayes. The study concludes that Halving Random Search Cross Validation is the most balanced method for business cases such as churn prediction; it is recommended for complex models, and further development could use Hyperband or Bayesian Optimization.
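A minimal sketch of the kind of comparison the abstract describes, not the authors' exact setup: the three search strategies are scikit-learn's GridSearchCV, RandomizedSearchCV, and the experimental HalvingRandomSearchCV, run here on a synthetic binary-classification dataset standing in for Internet Service Churn, with a Decision Tree and an assumed small hyperparameter grid.

```python
# Sketch of the compared searches; dataset, estimator choice, and
# hyperparameter grid are assumptions, not the paper's actual setup.
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import (
    GridSearchCV, RandomizedSearchCV, HalvingRandomSearchCV,
)
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the Internet Service Churn dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# A small illustrative hyperparameter space.
params = {"max_depth": [3, 5, 7, None], "min_samples_split": [2, 5, 10]}

searches = {
    "grid": GridSearchCV(
        DecisionTreeClassifier(random_state=0), params, cv=10),
    "random": RandomizedSearchCV(
        DecisionTreeClassifier(random_state=0), params,
        n_iter=8, cv=10, random_state=0),
    "halving_random": HalvingRandomSearchCV(
        DecisionTreeClassifier(random_state=0), params,
        factor=3, cv=10, random_state=0),
}

# Each search exposes the same best_params_ / best_score_ interface,
# so accuracy and wall-clock time can be compared directly.
for name, search in searches.items():
    search.fit(X, y)
    print(name, search.best_params_, round(search.best_score_, 3))
```

Successive halving starts many candidates on a small resource budget (by default, few samples) and promotes only the top fraction (controlled by `factor`) to larger budgets, which is where the reported time savings on larger search spaces come from, and why the fixed overhead can dominate when the space is tiny, as on Gaussian Naive Bayes.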
Copyright © 2025