The growing complexity of cyber threats increasingly challenges the effectiveness of traditional Network Intrusion Detection Systems (NIDS). Modern attacks, particularly zero-day intrusions, require detection approaches capable of handling high-dimensional network traffic data. However, existing studies rarely examine the trade-off between feature efficiency and generalization performance in boosting-based NIDS under controlled feature-reduction strategies, and the role of statistical feature selection in mitigating overfitting in classical boosting models remains underexplored. This study therefore evaluates NIDS performance by combining boosting ensemble algorithms (AdaBoost, Gradient Boosting, and XGBoost) with filter-based feature selection methods (Information Gain, Chi-Square, and ReliefF). The NSL-KDD dataset serves as the primary benchmark, with Min–Max normalization applied during preprocessing to ensure numerical feature consistency. Models are developed in Orange Data Mining and assessed through 10-fold cross-validation. Experimental results show that Gradient Boosting achieves the highest baseline accuracy among the evaluated models. Feature selection yields further gains: the Chi-Square method performs best, reaching 81.2% accuracy with 19 selected features, and Information Gain reaches 80.8% accuracy with 13 features, while ReliefF provides comparatively smaller improvements. These findings demonstrate that effective feature reduction improves generalization, reduces computational complexity, and mitigates overfitting. Overall, the combination of Gradient Boosting and statistical feature selection offers a feature-efficient, generalizable intrusion detection strategy for modern network environments.
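The preprocessing and scoring steps named in the abstract (Min–Max normalization followed by Chi-Square feature ranking) can be sketched as follows. This is a minimal, pure-Python illustration under assumed toy data, not the study's actual workflow, which was built in Orange Data Mining on the NSL-KDD dataset; the Chi-Square scoring here follows the common formulation for non-negative features (observed class-wise feature sums versus sums expected from class frequencies).

```python
def min_max_normalize(column):
    """Scale a list of numeric values into the [0, 1] range."""
    lo, hi = min(column), max(column)
    if hi == lo:  # constant feature carries no information
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]


def chi2_score(feature, labels):
    """Chi-Square statistic between a non-negative feature and class labels.

    Observed value per class = sum of the feature over that class's samples;
    expected value = total feature sum weighted by the class's frequency.
    Higher scores indicate stronger feature-label dependence.
    """
    n = len(labels)
    total = sum(feature)
    score = 0.0
    for c in set(labels):
        observed = sum(f for f, y in zip(feature, labels) if y == c)
        expected = total * labels.count(c) / n
        if expected > 0:
            score += (observed - expected) ** 2 / expected
    return score


# Toy example: one feature aligned with the labels, one independent of them.
labels = [1, 1, 1, 0, 0, 0]              # 1 = attack, 0 = normal (hypothetical)
informative = min_max_normalize([9, 8, 9, 1, 2, 1])
noisy = min_max_normalize([5, 1, 5, 1, 5, 1])
ranked = sorted(
    [("informative", chi2_score(informative, labels)),
     ("noisy", chi2_score(noisy, labels))],
    key=lambda kv: kv[1], reverse=True,
)
```

Keeping only the top-k features of this ranking (k = 19 for Chi-Square in the study) is what reduces dimensionality before training the boosting classifier.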
Copyright © 2026