One of the most important aspects of classification is selecting features so as to remove redundant or irrelevant elements from the dataset. A number of scholars have proposed multi-objective feature selection strategies for this purpose. However, these techniques frequently fail to improve classification accuracy and eliminate redundant feature combinations at the same time. This article presents a wrapper-based feature selection strategy that balances classification accuracy against redundancy reduction by combining the multi-objective honey badger algorithm (MO-HBA) with the non-dominated sorting genetic algorithm-II (NSGA-II). The approach pursues two optimization aims: increasing classification accuracy while reducing the number of redundant features. The MO-HBA shows excellent performance in both exploration and exploitation. A kernel extreme learning machine (KELM) is used as the classifier in the wrapper-based selection process. To evaluate how well this feature selection method performs, eighteen benchmark datasets are used, and the results are compared against four established multi-objective feature selection methods on several metrics.
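As a rough illustration of the wrapper evaluation at the core of such a method, the sketch below scores a candidate binary feature mask against the two optimization objectives using a minimal KELM (the standard closed-form solution β = (K + I/C)⁻¹T with an RBF kernel). The class name, the `evaluate` function, the Pareto-dominance helper, and all hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between the row vectors of A and B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

class KELM:
    """Minimal kernel extreme learning machine: beta = (K + I/C)^-1 T."""
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes[None, :]).astype(float)  # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        K = rbf_kernel(X, self.X, self.gamma)
        return self.classes[np.argmax(K @ self.beta, axis=1)]

def evaluate(mask, X_tr, y_tr, X_te, y_te):
    """Bi-objective wrapper fitness for a binary feature mask:
    (classification error, fraction of features kept), both minimized."""
    if not mask.any():                       # an empty subset is invalid
        return (1.0, 1.0)
    cols = np.flatnonzero(mask)
    clf = KELM().fit(X_tr[:, cols], y_tr)
    err = np.mean(clf.predict(X_te[:, cols]) != y_te)
    return (err, mask.sum() / mask.size)

def dominates(a, b):
    """Pareto dominance (minimization), as used in NSGA-II-style sorting."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

In a full run, a multi-objective optimizer such as MO-HBA with NSGA-II's non-dominated sorting would call `evaluate` on each candidate mask in the population and rank candidates via `dominates` to build the Pareto front of accuracy versus subset size.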