Found 2 Documents

Imputing Data and Predicting Waste with Machine Learning in East Java
Khoirunisa, Rifa; Sani, Ahmad Faisal; Riatma, Darmawan Lahru; Masbahah, Masbahah; Rachman, Yusuf Fadlila
Brilliance: Research of Artificial Intelligence, Vol. 5 No. 1 (2025), Research Article, May 2025
Publisher: Yayasan Cita Cendekiawan Al Khwarizmi

DOI: 10.47709/brilliance.v5i1.6461

Abstract

Indonesia's waste problem remains a pressing environmental issue, driven by population growth and urbanization. Rising population and changing consumption patterns have caused a significant spike in waste generation in Indonesia. Machine learning-based approaches are therefore highly relevant for building accurate predictive systems to estimate waste generation, providing a basis for policy making and for planning more effective and sustainable waste management. However, missing data is a common challenge in environmental data processing, including records of waste generation. Incomplete waste generation data can hinder the accurate analysis and prediction essential for effective environmental management planning. This study analyzes the effectiveness of several data imputation methods and develops a predictive model for waste generation in East Java Province using a machine learning approach. The imputation techniques tested are Mean Imputation, K-Nearest Neighbors (KNN), and Interpolation, while the predictive models are Random Forest, Gradient Boosting, and KNN Regression. The dataset was obtained from the official SIPSN (National Waste Management Information System) website. Model performance was evaluated using Root Mean Square Error (RMSE). The results indicate that combining the KNN Imputer with a Gradient Boosting prediction model is effective for addressing missing data and predicting waste generation in East Java Province, achieving an RMSE of 0.147. These findings are expected to support more accurate decision-making in the province's waste management planning.
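The imputation step the abstract describes can be sketched in Python. This is a minimal illustration, not the authors' code: it re-implements KNN imputation by hand in NumPy (in practice one would likely use scikit-learn's `KNNImputer` feeding a `GradientBoostingRegressor`), and the function name and toy data are hypothetical.

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill each NaN with the mean of that column across the k nearest
    rows, where distance is computed over mutually observed features
    (the same idea behind scikit-learn's KNNImputer)."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i, row in enumerate(X):
        missing = np.isnan(row)
        if not missing.any():
            continue
        # Distance from row i to every other row, using only features
        # observed in both rows.
        dists = []
        for j, other in enumerate(X):
            if j == i:
                continue
            common = ~np.isnan(row) & ~np.isnan(other)
            if not common.any():
                continue
            d = np.sqrt(np.mean((row[common] - other[common]) ** 2))
            dists.append((d, j))
        dists.sort()
        for col in np.where(missing)[0]:
            # Take the k nearest neighbors that actually observed this column.
            vals = [X[j, col] for _, j in dists if not np.isnan(X[j, col])][:k]
            if vals:
                filled[i, col] = float(np.mean(vals))
    return filled
```

The imputed matrix would then be split into features and target and passed to a regressor, with RMSE on a held-out set used to compare imputer/model combinations as in the study.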
Benchmarking GPU Passthrough Performance on Docker for AI Cloud System
Sani, Ahmad Faisal; Khoirunisa, Rifa; Riatma, Darmawan Lahru; Rachman, Yusuf Fadlila; Masbahah, Masbahah
Brilliance: Research of Artificial Intelligence, Vol. 5 No. 2 (2025), Research Article, November 2025
Publisher: Yayasan Cita Cendekiawan Al Khwarizmi

DOI: 10.47709/brilliance.v5i2.6794

Abstract

Artificial intelligence (AI) workloads that rely only on CPU resources tend to suffer long execution times, especially when handling large or complex workloads. A graphics processing unit (GPU) can significantly speed up AI inference through its parallel architecture. One recent approach to integrating GPUs into an AI system is GPU passthrough, applied either natively (native environment) or through a Docker environment. However, the relative efficiency of these methods has remained largely unexplored, particularly in local cloud environments. This study compared GPU performance between native and Docker environments using a 10,000 × 10,000 matrix multiplication workload with the TensorFlow framework. Execution time and GPU performance were measured using the nvidia-smi tool, with data recorded automatically in CSV format. The researchers used the NVIDIA CUDA environment to ensure full compatibility with GPU acceleration. The results show that GPU processing in the native environment had a faster average execution time, at 1.52 seconds, while GPU passthrough in the Docker environment showed higher GPU utilization, at 86.2%, but a longer execution time. These findings indicate that GPU overhead occurred in the Docker environment due to the containerization layer; by contrast, the native environment achieved shorter execution times even though it did not maximize GPU utilization. These results provide useful baseline data for technical decision-making in GPU-based AI deployment in resource-limited environments.
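The measurement setup the abstract outlines (repeated timing runs, GPU utilization sampled via nvidia-smi, results logged to CSV) can be sketched as a small harness. This is an assumed reconstruction, not the authors' script: the function names are hypothetical, and the nvidia-smi query falls back to an empty string when no GPU is present.

```python
import csv
import subprocess
import time

def sample_gpu_util():
    """Read current GPU utilization (%) via nvidia-smi; '' if unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return ""

def benchmark(workload, runs=5, query_gpu=False):
    """Time workload() over several runs, optionally sampling GPU load."""
    rows = []
    for i in range(runs):
        start = time.perf_counter()
        workload()
        elapsed = time.perf_counter() - start
        util = sample_gpu_util() if query_gpu else ""
        rows.append({"run": i + 1, "seconds": elapsed, "gpu_util": util})
    return rows

def write_csv(rows, path):
    """Record the results automatically in CSV format, as in the study."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["run", "seconds", "gpu_util"])
        writer.writeheader()
        writer.writerows(rows)
```

On a CUDA machine the workload would be the paper's 10,000 × 10,000 TensorFlow matmul (e.g. `lambda: tf.matmul(a, b)`), run once natively and once inside a container started with Docker's `--gpus all` flag, so that the two CSV files can be compared.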