Yuris Mulya Saputra
Universitas Gadjah Mada

Published: 3 Documents

Utilizing deep neural network for web-based blood glucose level prediction system
Ganjar Alfian; Yuris Mulya Saputra; Lukman Subekti; Ananda Dwi Rahmawati; Fransiskus Tatas Dwi Atmaji; Jongtae Rhee
Indonesian Journal of Electrical Engineering and Computer Science Vol 30, No 3: June 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v30.i3.pp1829-1837

Abstract

Recent studies have shown that machine learning algorithms can forecast future blood glucose (BG) levels for diabetes patients. In this study, a dataset from a continuous glucose monitoring (CGM) system was used as the sole input to the machine learning models. We proposed a deep neural network (DNN) to forecast blood glucose levels 15, 30, and 45 minutes into the future and tested it on seven patients with type 1 diabetes (T1D). The proposed prediction model was evaluated against a variety of machine learning models: k-nearest neighbor (KNN), support vector regression (SVR), decision tree (DT), adaptive boosting (AdaBoost), random forest (RF), and eXtreme gradient boosting (XGBoost). The experimental findings demonstrated that the proposed DNN model outperformed all other models, with average root mean square errors (RMSEs) of 17.295, 25.940, and 35.146 mg/dL over prediction horizons (PHs) of 15, 30, and 45 minutes, respectively. We also integrated the proposed prediction model into a web-based blood glucose level prediction tool. With this web-based system, patients can readily obtain their future blood glucose levels, allowing preventative alarms to be generated before critical hypoglycemic or hyperglycemic situations.
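The forecasting setup described above can be sketched as sliding-window regression over a CGM series: each window of past readings predicts the glucose value one prediction horizon ahead. This is an illustrative sketch only; the window length, horizon, network size, and synthetic data below are assumptions, not the paper's actual configuration, and scikit-learn's MLPRegressor stands in for the proposed DNN.

```python
# Illustrative sketch: sliding-window regression over CGM readings to
# forecast blood glucose at a fixed prediction horizon (PH).
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, window=6, horizon=3):
    """Pair each window of past readings with the value `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
# Synthetic CGM trace (mg/dL), nominally sampled every 5 minutes.
cgm = 120 + 30 * np.sin(np.linspace(0, 12, 600)) + rng.normal(0, 3, 600)

X, y = make_windows(cgm, window=6, horizon=3)  # 3 steps x 5 min = 15-min PH
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X[:split], y[:split])

rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"RMSE over 15-min horizon: {rmse:.2f} mg/dL")
```

A web-based system like the one described would feed the most recent CGM window to such a trained model and raise an alarm when the predicted value crosses a hypoglycemic or hyperglycemic threshold.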
Privacy aware-based federated learning framework for data sharing protection of internet of things devices
Yuris Mulya Saputra; Ganjar Alfian
Indonesian Journal of Electrical Engineering and Computer Science Vol 31, No 2: August 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v31.i2.pp979-985

Abstract

Federated learning (FL) has emerged as one of the most effective solutions for dealing with the rapid adoption of the internet of things (IoT) in big data markets. Through FL, the local data at each IoT device can be trained locally without being shared with the cloud server. However, conventional FL may still suffer from privacy leakage when the local data are trained and the trained model is shared with the cloud server to update the global prediction model. This paper proposes an FL framework with privacy awareness to protect the data, including the trained model, of IoT devices. First, a data/model encryption method using fully homomorphic encryption is introduced, aiming to protect data/model privacy. Then, the FL framework for the IoT with the encryption method, leveraging a logistic regression approach, is discussed. Experimental results on random datasets show that the proposed framework can achieve higher global model accuracy (by up to 4.84%) and lower global model loss (by up to 66.4%) compared with other baseline methods.
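The FL workflow described can be sketched as federated averaging of logistic regression weights across simulated IoT clients. This is a minimal sketch under assumed data and hyperparameters: the paper additionally encrypts the shared weights with fully homomorphic encryption before upload, which is marked here only as a placeholder comment, and the client count, rounds, and learning rate are illustrative.

```python
# Minimal federated-averaging sketch for logistic regression: each
# client trains locally and only model weights reach the server.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_train(w, X, y, lr=0.1, epochs=20):
    """Plain gradient-descent logistic regression on one client's data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w  # in the paper, w would be FHE-encrypted before sharing

rng = np.random.default_rng(1)
true_w = np.array([1.5, -2.0])
clients = []
for _ in range(3):  # three simulated IoT devices
    X = rng.normal(size=(200, 2))
    y = (sigmoid(X @ true_w) > 0.5).astype(float)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):  # communication rounds
    updates = [local_train(w_global, X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)  # server-side aggregation

X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
acc = np.mean((sigmoid(X_all @ w_global) > 0.5) == y_all)
print(f"global model accuracy: {acc:.3f}")
```

The key property is that raw client data never leaves the device; with FHE on top, even the uploaded weights would be opaque to the server, which is the privacy-awareness layer the paper adds.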
Image classification-based transfer learning framework for image detection of IoT devices
Yuris Mulya Saputra; Hanung Addi Chandra Utomo; Ganjar Alfian
Indonesian Journal of Electrical Engineering and Computer Science Vol 33, No 3: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v33.i3.pp1989-1997

Abstract

For artificial intelligence (AI) applications, centralized learning on a cloud server and local learning on an internet-of-things (IoT) device each have drawbacks: the former risks data privacy leakage through data sharing, and the latter risks inaccurate prediction due to limited computing resources. Transfer learning has been proposed as one potential solution to these big data problems. Transfer learning eliminates the need for each IoT device to share local data with the cloud server during training. Instead, the device can train on its own, starting from a highly accurate model pre-trained on the cloud server. As a result, despite its limited computing resources, the IoT device can still predict with high accuracy. This paper proposes a transfer learning model for improving image detection accuracy on IoT devices with restricted computation. To obtain accurate image classification, a deep learning approach based on convolutional neural networks is used. According to simulation results on three relevant IoT datasets, the proposed method with freeze and unfreeze approaches achieves up to 43.6% higher validation accuracy and up to 6.5 times lower validation loss than the non-transfer-learning method.
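The freeze/unfreeze idea above can be sketched in two phases: first train only a new task head on top of fixed "pre-trained" features, then unfreeze the feature layer for joint fine-tuning. This is a schematic numpy sketch with an assumed two-layer network and synthetic data, not the paper's CNN architecture or datasets.

```python
# Schematic freeze/unfreeze fine-tuning: phase 1 updates only the new
# head on frozen features; phase 2 unfreezes the feature layer too.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)  # synthetic binary labels

W1 = rng.normal(size=(8, 16)) * 0.5  # "pre-trained" feature layer
w2 = np.zeros(16)                    # new task-specific head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(W1, w2, lr, unfreeze):
    h = np.tanh(X @ W1)
    err = sigmoid(h @ w2) - y
    w2_new = w2 - lr * h.T @ err / len(y)
    if unfreeze:  # phase 2: also update the previously frozen weights
        dh = np.outer(err, w2) * (1 - h ** 2)
        W1 = W1 - lr * X.T @ dh / len(y)
    return W1, w2_new

for _ in range(200):  # phase 1: train head only (backbone frozen)
    W1, w2 = step(W1, w2, lr=0.5, unfreeze=False)
for _ in range(200):  # phase 2: unfreeze and fine-tune everything
    W1, w2 = step(W1, w2, lr=0.1, unfreeze=True)

acc = np.mean((sigmoid(np.tanh(X @ W1) @ w2) > 0.5) == y)
print(f"fine-tuned accuracy: {acc:.3f}")
```

Phase 1 is cheap enough for a resource-constrained device, which is why freezing the pre-trained backbone suits the IoT setting the paper targets; the brief unfreeze phase then adapts the features to the local data.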