Articles

Found 3 Documents
Transfer learning with multiple pre-trained network for fundus classification Wahyudi Setiawan; Moh. Imam Utoyo; Riries Rulaningtyas
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 18, No 3: June 2020
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v18i3.14868

Abstract

Transfer learning (TL) is a technique for reusing and modifying a pre-trained network. The feature-extraction layers of the pre-trained network are reused, so the target domain in TL obtains feature knowledge from the source domain, while the classification layers are modified so that the target domain can perform a new task. In this article, the target domain is fundus image classification into two classes: normal and neovascularization. The data consist of 100 patches, split randomly into training and validation sets at a 70:30 ratio. TL proceeds in four steps: load a pre-trained network, replace its final layers, train the network, and assess its accuracy. First, the pre-trained network provides the layer configuration of a convolutional neural network architecture; the pre-trained networks used are AlexNet, VGG16, VGG19, ResNet50, ResNet101, GoogLeNet, Inception-V3, InceptionResNetV2, and SqueezeNet. Second, the last three layers (fully connected layer, softmax, and output layer) are replaced with a fully connected layer sized to the number of classes, followed by a softmax and an output layer that match the target domain. Third, the network is trained to produce optimal accuracy using a gradient descent optimization algorithm. Fourth, network accuracy is assessed. The experimental results show a testing accuracy between 80% and 100%.
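A minimal PyTorch sketch of the four TL steps described in the abstract (load a pre-trained network, replace the final layers, train with gradient descent, assess accuracy). This is illustrative only and not the authors' code; the choice of ResNet-50, the learning rate, and the data handling are assumptions.

```python
# Hedged sketch: transfer learning for 2-class fundus patch classification.
# Assumes PyTorch/torchvision; hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
from torchvision import models

# Step 1: load a pre-trained network (ResNet-50 here; the paper also evaluated
# AlexNet, VGG16/19, ResNet101, GoogLeNet, Inception variants, and SqueezeNet).
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Step 2: replace the final classification layer so it outputs 2 classes
# (normal vs. neovascularization); softmax is applied inside the loss below.
net.fc = nn.Linear(net.fc.in_features, 2)

# Step 3: train with a gradient-descent optimizer (SGD), as the abstract describes.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(net(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Step 4: assess accuracy on held-out validation patches.
@torch.no_grad()
def accuracy(loader):
    correct = total = 0
    for images, labels in loader:
        preds = net(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```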
Reconfiguration layers of convolutional neural network for fundus patches classification Wahyudi Setiawan; Moh. Imam Utoyo; Riries Rulaningtyas
Bulletin of Electrical Engineering and Informatics Vol 10, No 1: February 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v10i1.1974

Abstract

The convolutional neural network (CNN) is a supervised deep learning method. Architectures such as AlexNet, VGG16, VGG19, ResNet50, ResNet101, GoogLeNet, Inception-V3, Inception-ResNet-V2, and SqueezeNet have between 25 and 825 layers. This study aims to simplify the layers of CNN architectures and increase accuracy for fundus patch classification. Fundus patches are classified into two categories: normal and neovascularization. The data used for classification come from MESSIDOR and the Retina Image Bank and comprise 2,080 patches. The results show a best accuracy of 93.17% on the original data and 99.33% on the augmented data using a 31-layer CNN. It consists of an input layer, 7 convolutional layers, 7 batch normalization layers, 7 rectified linear units, 6 max-pooling layers, a fully connected layer, a softmax layer, and an output layer.
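A hedged sketch of a layer stack matching the abstract's description (input, 7 blocks of convolution + batch normalization + ReLU, 6 max-pooling layers, then fully connected, softmax, and output). The filter counts and kernel sizes below are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of the 31-layer configuration described in the abstract.
# Channel widths and kernel sizes are assumptions for illustration only.
import torch.nn as nn

def make_cnn(num_classes=2, channels=(16, 32, 64, 64, 128, 128, 256)):
    layers = []
    in_ch = 3  # RGB fundus patches (input layer)
    for i, out_ch in enumerate(channels):
        layers += [
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # convolution
            nn.BatchNorm2d(out_ch),                               # batch normalization
            nn.ReLU(inplace=True),                                # rectified linear unit
        ]
        if i < 6:                      # only 6 max-pooling layers for 7 conv blocks
            layers.append(nn.MaxPool2d(2))
        in_ch = out_ch
    layers += [
        nn.Flatten(),
        nn.LazyLinear(num_classes),    # fully connected layer (size inferred at first call)
        nn.Softmax(dim=1),             # softmax feeding the output layer
    ]
    return nn.Sequential(*layers)
```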
DIMENSI METRIK KETETANGGAAN LOKAL GRAF HASIL OPERASI k-COMB [The Local Adjacency Metric Dimension of Graphs Resulting from the k-Comb Operation] Fryda Arum Pratama; Liliek Susilowati; Moh. Imam Utoyo
Contemporary Mathematics and Applications (ConMathA) Vol. 1 No. 1 (2019)
Publisher : Universitas Airlangga

DOI: 10.20473/conmatha.v1i1.14771

Abstract

Research on the local adjacency metric dimension has not yet covered all graph operations; one of them is the comb product. The purpose of this research is to determine the local adjacency metric dimension of the k-comb product graph and the level-k comb product graph of any connected graphs G and H, where G and H are taken from cycle graphs, complete graphs, path graphs, and star graphs. The k-comb product of graphs G and H is denoted by G ∘_k H, while the level-k comb product of G and H is denoted by G ∘^k H. In this research, the local adjacency metric dimension of G ∘_k S_m depends only on the product of the cardinality of V(G) and the value of k, while for G ∘_k K_m and G ∘_k C_m it depends on the dominating number of G and on the product of the cardinality of V(G), the value of k, and the local adjacency metric dimension of K_m or C_m. Furthermore, the local adjacency metric dimension of G ∘^k S_m depends only on the cardinality of V(G ∘^{k-1} S_m), while for G ∘^k K_m and G ∘^k C_m it depends on the dominating number of G and on the product of the local adjacency metric dimension of K_m or C_m with the cardinality of V(G ∘^{k-1} K_m) or V(G ∘^{k-1} C_m).
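For orientation, the following is a brief sketch of how the comb-product operations are commonly defined; the notation (∘, ∘_k, ∘^k and the grafting vertex o) is an assumption for illustration and may differ from the paper's exact definitions.

```latex
% Hedged sketch of the comb-product operations as commonly defined (not copied from the paper).
% G \circ H: take one copy of G and |V(G)| copies of H, and identify the grafting
% vertex o of the i-th copy of H with the i-th vertex of G.
\[
  G \circ_k H:\ \text{graft } k \text{ copies of } H \text{ (at vertex } o\text{) to every vertex of } G,
\]
\[
  G \circ^{1} H = G \circ H,
  \qquad
  G \circ^{k} H = \bigl(G \circ^{k-1} H\bigr) \circ H \quad \text{for } k \ge 2 .
\]
```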