The increasing complexity of 6G network slicing introduces new challenges in identifying abnormal behavior within highly virtualized and dynamic network infrastructures. This study addresses the anomaly detection problem in 6G slicing environments by comparing the performance of three models: a supervised random forest classifier, a basic unsupervised autoencoder, and an optimized deep autoencoder enhanced with L1 regularization and dropout. The optimized autoencoder is trained to reconstruct normal data patterns, and anomalies are detected with a threshold-based reconstruction-error approach: reconstruction errors are evaluated across different percentile thresholds to determine the optimal boundary for classifying abnormal behavior. All models are tested on a publicly available 6G Network Slicing Security dataset. Results show that the optimized autoencoder outperforms both the baseline autoencoder and the random forest in terms of anomaly sensitivity. Specifically, the optimized model achieves an F1-score of 0.1782, a recall of 0.2095, and an accuracy of 0.714. These results indicate that introducing regularization and dropout improves the ability of autoencoders to generalize and isolate anomalies, even on highly imbalanced datasets. This approach provides a lightweight and effective solution for unsupervised anomaly detection in next-generation network environments.
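The percentile-thresholding step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reconstruction errors are simulated here (in practice they would be per-sample losses, e.g. MSE, from the trained autoencoder), and the `detect_anomalies` helper and the chosen 99th percentile are hypothetical.

```python
import numpy as np

def detect_anomalies(errors, percentile=95.0):
    """Flag samples whose reconstruction error exceeds the chosen
    percentile of the observed errors (threshold-based detection)."""
    threshold = np.percentile(errors, percentile)
    return errors > threshold, threshold

# Simulated per-sample reconstruction errors: mostly small values for
# normal traffic, plus two large errors standing in for anomalies.
rng = np.random.default_rng(0)
normal_errors = rng.normal(loc=0.05, scale=0.01, size=1000)
anomalous_errors = np.array([0.30, 0.45])
errors = np.concatenate([normal_errors, anomalous_errors])

# Sweep candidate percentile thresholds, as the study does, and pick one.
flags, threshold = detect_anomalies(errors, percentile=99.0)
```

Sweeping the percentile trades precision against recall: a lower percentile flags more samples (higher recall, more false positives), which matters on heavily imbalanced data where anomalies are rare.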
Copyright © 2025