Local Wi-Fi network access has become a common necessity in everyday digital activities, but it is vulnerable to misuse for accessing negative content. Such content includes pornographic material, hate speech, and violent material that can adversely affect users, especially in educational settings. A system that can filter harmful content automatically and efficiently is therefore needed. This research aims to design an artificial intelligence-based negative content filtering system that can run on local network devices. The methods include image classification using Convolutional Neural Networks (CNN) and Artificial Neural Networks (ANN), and text classification with DistilBERT and a Support Vector Machine (SVM). To preserve user privacy, the models are trained with a federated learning approach that enables decentralized learning. Knowledge distillation is also applied to produce lightweight models that can run on edge devices such as routers. The datasets used include the NSFW Image Dataset, OpenPornSet, and a collection of toxic comments from Reddit and Twitter. Evaluation was carried out in a simulation of a local network with 50 active devices. The tests showed ANN accuracy of 93.4% in recognizing visual content and SVM accuracy of 91.7% in detecting text-based hate speech. This research can serve as a reference for applying AI-based content filtering systems to protect safe and responsible digital access.
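
To illustrate the knowledge distillation step that underlies the lightweight edge models mentioned above, the following is a minimal sketch in PyTorch. The teacher and student architectures, temperature, and loss weighting here are illustrative assumptions only, not the models or hyperparameters used in this study.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical teacher/student pair: a larger classifier distilled into a
    # small network suitable for an edge device. Architectures are placeholders.
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512),
                            nn.ReLU(), nn.Linear(512, 2))
    student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64),
                            nn.ReLU(), nn.Linear(64, 2))

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Blend soft-target KL divergence (teacher guidance) with
        # hard-label cross-entropy on the ground-truth labels.
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    teacher.eval()

    def train_step(images, labels):
        # The teacher only provides soft targets; only the student is updated.
        with torch.no_grad():
            teacher_logits = teacher(images)
        student_logits = student(images)
        loss = distillation_loss(student_logits, teacher_logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example usage with a random batch standing in for image data.
    images = torch.randn(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8,))
    print(train_step(images, labels))

In a federated setting, a step like this would run locally on each client, with only the resulting student weights aggregated centrally, so raw user data never leaves the device.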