cover
Contact Name
Rizki Wahyudi
Contact Email
rizki.key@gmail.com
Phone
+6281329125484
Journal Mail Official
telematika@amikompurwokerto.ac.id
Editorial Address
Jl. Letjend Pol. Soemarto No.126, Watumas, Purwanegara, Kec. Purwokerto Utara, Kabupaten Banyumas, Jawa Tengah 53127
Location
Kab. Banyumas,
Jawa Tengah
INDONESIA
Telematika
ISSN : 1979-925X     EISSN : 2442-4528     DOI : 10.35671/telematika
Core Subject : Education,
Telematika, with registered numbers ISSN 2442-4528 (online) and ISSN 1979-925X (print), is a scientific journal published by Universitas Amikom Purwokerto. The journal is registered in the CrossRef system with the Digital Object Identifier (DOI) prefix 10.35671/telematika. The aim of this journal is to disseminate conceptual thoughts, ideas, and research results in the area of Information Technology and Computer Science. Every article submitted to the editorial staff is first screened in an Initial Review by the Editorial Board. The articles are then sent to the Mitra Bebestari (peer reviewers) and undergo a Double-Blind Review process, after which they are returned to the authors for revision. These processes take a minimum of one month. Each manuscript is evaluated by the Mitra Bebestari (peer reviewers) on both substantive and technical aspects, and the final decision on acceptance is made by the Editors based on the reviewers' comments. The Mitra Bebestari (peer reviewers) who collaborate with Telematika are experts in Information Technology and Computer Science and related areas.
Arjuna Subject : -
Articles 235 Documents
Modification CNN Transfer Learning for Classification MRI Brain Tumor Wardhani, Retno; Nafi'iyah, Nur
Telematika Vol 16, No 2: August (2023)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v16i2.2272

Abstract

Identifying or detecting the infected part of a brain tumor in an MRI image requires precision and takes a long time. MRI (Magnetic Resonance Imaging) is a magnetic resonance imaging technique used to examine and image organs, tissues, and the skeletal system. The brain is essential because it is the center of the nervous system, which controls all human activities. Therefore, brain MRI plays an important role, for example in analysis or consideration before performing surgery. However, MRI images cannot provide optimal results when analyzed directly because of noise and because bone and tumor tissue can have a similar appearance. AI (artificial intelligence), through digital image processing and computer vision, can analyze MRI images to detect or identify tumors correctly. This study proposes changes to the last layers of CNN (Convolutional Neural Network) transfer learning models (VGG16, InceptionV3, and ResNet-50) to identify brain tumor disease in MRI images. Data were taken from Kaggle with the classes glioma, meningioma, no tumor, and pituitary, comprising 5,712 training images and 1,311 testing images. The proposed changes concern the flatten layer and the pooling layer. The results show that replacing the flatten layer further improves accuracy, and the accuracies of the transfer-learning CNNs (VGG16, InceptionV3, and ResNet-50) are 0.918, 0.762, and 0.934, respectively.
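As a rough illustration of the kind of head modification this abstract describes, the Keras sketch below swaps the usual flatten layer of a VGG16 backbone for global average pooling before a four-class softmax. The dense-layer width, dropout rate, and optimizer are assumptions for illustration, not settings taken from the paper.

```python
# Hypothetical sketch: VGG16 backbone whose Flatten layer is replaced by
# global average pooling. Head sizes and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # replaces the conventional Flatten layer
    layers.Dense(128, activation="relu"),   # assumed head width
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),  # glioma, meningioma, no tumor, pituitary
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```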
SEIHR Model on Spread of COVID-19 and Its Simulation Rois, Muhammad Abdurrahman; Tafrikan, Mohamad; Norasia, Yolanda; Anggriani, Indira; Ghani, Mohammad
Telematika Vol 15, No 2: August (2022)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v15i2.1141

Abstract

The modified SEIR (SEIHR) model of COVID-19 spread is divided into five compartments: susceptible, exposed, infected, hospitalized, and recovered. Based on the results, two equilibrium points were obtained: the disease-free equilibrium point and the endemic equilibrium point. The existence of each equilibrium point, as well as its stability, depends on the value of the basic reproduction number R0. The endemic equilibrium point exists if R0>1. The disease-free equilibrium point is locally asymptotically stable if R0<1, and the endemic equilibrium point is locally asymptotically stable if R0>1. Sensitivity analysis was performed to determine the most influential parameters in the spread of the virus. Finally, numerical simulations determine the behavior of the model and support the results of the dynamic analysis.
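For readers who want to reproduce the qualitative behavior of such a compartmental model, here is a generic SEIHR simulation sketch in Python. The transition structure and parameter values are illustrative assumptions (with H read as hospitalized), not the paper's calibrated model.

```python
# Generic SEIHR simulation sketch; flows and parameter values are illustrative.
import numpy as np
from scipy.integrate import odeint

def seihr(y, t, beta, sigma, eta, gamma_i, gamma_h):
    S, E, I, H, R = y
    N = S + E + I + H + R
    dS = -beta * S * I / N                 # new infections
    dE = beta * S * I / N - sigma * E      # end of incubation
    dI = sigma * E - (eta + gamma_i) * I   # infectious: hospitalized or recover
    dH = eta * I - gamma_h * H             # hospitalized
    dR = gamma_i * I + gamma_h * H         # recovered
    return dS, dE, dI, dH, dR

t = np.linspace(0, 200, 2001)
y0 = (990.0, 5.0, 5.0, 0.0, 0.0)           # assumed initial compartment sizes
sol = odeint(seihr, y0, t, args=(0.4, 1 / 5.2, 0.1, 0.1, 0.07))
print(sol[-1])                              # compartment sizes at the end of the horizon
```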
Comparative Analysis of Classification Methods in Sentiment Analysis: The Impact of Feature Selection and Ensemble Techniques Optimization Defit, Sarjon; Windarto, Agus Perdana; Alkhairi, Putrama
Telematika Vol 17, No 1: February (2024)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v17i1.2824

Abstract

Optimizing classification methods with feature selection (forward selection, backward elimination, and optimized selection) and ensemble techniques (AdaBoost and Bagging) is essential for accurate sentiment analysis, particularly in political contexts on social media. This research compares such enhanced classification models with standard ones (Decision Tree, Random Tree, Naive Bayes, Random Forest, K-NN, Neural Network, and Generalized Linear Model), analyzing 1,200 tweets from December 10-11, 2023, focusing on "Indonesia" and "capres." The data encompass 490 positive, 355 negative, and 353 neutral sentiments, reflecting diverse opinions on presidential candidates and political issues. The enhanced model achieves 96.37% accuracy, with the backward elimination model reaching 100% accuracy for negative sentiments. The study suggests further exploration of hybrid feature selection and improved classifiers for high-stakes sentiment analysis. With forward feature selection and an ensemble method, Naive Bayes stands out for classifying negative sentiments while maintaining high overall accuracy (96.37%).
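A minimal sketch of this kind of pipeline, assuming scikit-learn, forward sequential feature selection over TF-IDF features, and a bagged Naive Bayes base learner; the sample tweets and parameter choices are placeholders, not the study's data or settings.

```python
# Illustrative sketch (not the paper's exact pipeline): forward feature selection
# over TF-IDF features, followed by a bagged Naive Bayes ensemble.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

texts = [  # placeholder tweets
    "dukung capres A, programnya bagus", "capres B mengecewakan",
    "debat capres berjalan netral", "saya suka visi capres A",
    "tidak setuju dengan capres B", "pemilu indonesia damai",
    "capres A hebat sekali", "kampanye capres B buruk",
]
labels = ["positive", "negative", "neutral", "positive",
          "negative", "neutral", "positive", "negative"]

X = TfidfVectorizer().fit_transform(texts).toarray()

# Forward selection keeps the feature subset that best supports the classifier.
selector = SequentialFeatureSelector(MultinomialNB(), n_features_to_select=5,
                                     direction="forward", cv=2)
X_sel = selector.fit_transform(X, labels)

# Bagging ensemble over the Naive Bayes base learner.
clf = BaggingClassifier(MultinomialNB(), n_estimators=25, random_state=0)
clf.fit(X_sel, labels)
print(classification_report(labels, clf.predict(X_sel), zero_division=0))
```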
Detection and Classification of Banana Leaf Diseases: Systematic Literature Review Prasetyo, Ade; Utami, Ema
Telematika Vol 17, No 2: August (2024)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v17i2.2809

Abstract

Bananas, a staple fruit globally, are essential for sustenance, employment, and income. However, diseases such as Sigatoka, Bacterial Wilt, Bunchy Top, and Fusarium Wilt threaten their cultivation, affecting both small-scale and large-scale production. This survey investigates methods for the early identification and classification of these banana leaf diseases using deep learning and machine learning techniques. A systematic review of 15 studies revealed that the majority of research concentrates on binary classification, which distinguishes healthy from diseased leaves. Common preprocessing steps include image resizing, color space conversion, and background removal to improve model accuracy. The reviewed studies employ techniques such as ensemble approaches, support vector machines (SVM), random forests, K-means clustering, and convolutional neural networks (CNNs), with CNNs demonstrating superior performance, achieving accuracy rates ranging from 85% to 98.97%. CNNs excel at hierarchical feature extraction but require significant computational power. Traditional machine learning methods offer simplicity and resistance to overfitting but need careful parameter tuning. Advanced deep learning architectures, such as DenseNet and Inception V3, achieve high accuracy but with greater computational demands. Lightweight models like SqueezeNet balance performance and size, while ensemble methods improve generalization at the cost of added complexity. The choice of method depends on dataset characteristics, available computational resources, and the desired trade-off between performance and complexity. This study provides an overview of current research in banana leaf disease classification, discussing the strengths and limitations of various approaches and suggesting directions for future research to improve detection accuracy and robustness.
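As a rough sketch of the common binary-classification pipeline the review describes (resize, normalize, CNN), assuming a hypothetical banana_leaves/ folder with healthy/ and diseased/ subfolders; the architecture and image size are illustrative only.

```python
# Minimal binary leaf-disease CNN sketch; folder layout and layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

# Expects a hypothetical "banana_leaves/" directory with "healthy/" and "diseased/" subfolders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "banana_leaves", image_size=(128, 128), batch_size=32, label_mode="binary")

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Rescaling(1.0 / 255),              # normalize pixel values to [0, 1]
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # healthy vs diseased
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```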
Optimizing Clustering of Indonesian Text Data Using Particle Swarm Optimization Algorithm: A Case Study of the Quran Translation R Wahyudi, M Didik; Fatwanto, Agung
Telematika Vol 17, No 1: February (2024)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v17i1.2724

Abstract

The Quran, regarded as the holy book of Muslims, contains scientific and historical facts affirming Islam's truth, beauty, and influence on human life. Consequently, the Quran text and its translations are valuable sources for text mining research, particularly for studying the interrelationship of its verses. Clustering is one approach to grouping objects using certain algorithms, with K-Means clustering being a prominent example. However, clustering results are often suboptimal due to the random selection of centroids. To address this, the study proposes using the Particle Swarm Optimization (PSO) algorithm, selecting centroids based on the PSO results. The hybrid PSO algorithm initiates a single iteration of the K-Means algorithm and concludes either upon reaching the maximum iteration limit or when the average shift of the center-of-mass vector falls below 0.0001. Evaluation of the clustering results from the three models indicates that the K-Means algorithm produced the lowest Sum of Squared Error (SSE) value of 1032.19. Additionally, the hybrid PSO algorithm generated the highest Silhouette value of 0.258 and the lowest quantization error of 0.00947. Further evaluation using a confusion matrix showed that K-Means clustering had an accuracy rate of 81.7%, K-Means with PSO had 82.5%, and the combination of K-Means with hybrid PSO yielded the highest accuracy rate of 91.1% among the three grouping models.
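A minimal sketch of the PSO-seeded K-Means idea, assuming NumPy and scikit-learn: a small swarm searches for centroid positions that minimize quantization error, and the best particle initializes K-Means. The TF-IDF input is replaced by random placeholder vectors, and the swarm size and PSO coefficients are assumptions, not the paper's settings.

```python
# Illustrative PSO-seeded K-Means sketch; data and coefficients are placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((300, 10))          # stand-in for TF-IDF vectors of Quran verses
k, n_particles, n_iter = 5, 20, 50

def quantization_error(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Each particle encodes k candidate centroids; velocities start at zero.
pos = rng.random((n_particles, k, X.shape[1]))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([quantization_error(p, X) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.72, 1.49, 1.49       # common inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([quantization_error(p, X) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

# Hand the swarm's best centroids to K-Means as its initialization.
km = KMeans(n_clusters=k, init=gbest, n_init=1, random_state=0).fit(X)
print("SSE:", km.inertia_)
```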
Stacked LSTM-GRU Model for Traffic Anomalies Detection Alsyaibani, Omar Muhammad Altoumi; Utami, Ema; Raharjo, Suwanto; Hartanto, Anggit Dwi
Telematika Vol 15, No 2: August (2022)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v15i2.1855

Abstract

This study aims to improve the accuracy of an intrusion detection system model. It focused on the LSTM and GRU methods proposed by several previous studies, and a bidirectional layer was also tested to see whether it improves model performance. The dataset used in the study was CIC IDS 2017, divided into three parts for training, validation, and testing. Validation data were used to evaluate model performance in every training iteration and helped ensure the model did not overfit the training data. Furthermore, a Dropout layer and L2 regularization were added to the model architecture. The model was trained as a binary classifier with a learning rate of 0.0001. We found that the stacked method reached an accuracy of 98.1087% after 100 training iterations, slightly higher than LSTM, GRU, Bidirectional LSTM, and Bidirectional GRU. The methods containing an LSTM layer achieved their best accuracy with the Tanh activation, whereas GRU and Bidirectional GRU performed best with the LReLU and PReLU activation functions, respectively. All models reached a plateau within the first 20 iterations, while over the next 80 iterations performance still improved, though with fluctuations. Even after reaching a plateau at 20 iterations, a model can still improve slowly by using a small learning rate and by applying a Dropout layer and L2 regularization. The fluctuation of model performance implies that the highest performance was not always reached in the last training iteration; ModelCheckpoint can help overcome this issue. In addition, the bidirectional layer increased the complexity of the model, which in turn increased training duration. The bidirectional layer improved the performance of the GRU method but did not improve the performance of the LSTM method.
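A hedged Keras sketch of a stacked LSTM-GRU binary classifier with Dropout, L2 regularization, a 0.0001 learning rate, and ModelCheckpoint, as outlined above; the layer widths and the assumed CIC IDS 2017 feature shape are placeholders, not the paper's exact architecture.

```python
# Sketch of a stacked LSTM-GRU binary classifier; widths and input shape are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

timesteps, n_features = 1, 78        # assumed shape of a flow-feature sequence
model = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(64, return_sequences=True, activation="tanh",
                kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.2),
    layers.GRU(64, kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),   # benign vs attack
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Keep the best weights seen on validation data rather than the last iteration.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_accuracy", save_best_only=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=100, callbacks=[checkpoint])   # data loading omitted
```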
CNN Pruning for Edge Computing-Based Corn Disease Detection with a Novel NG-Mean Accuracy Loss Optimization Putrada, Aji Gautama; Oktaviani, Ikke Dian; Fauzan, Mohamad Nurkamal; Alamsyah, Nur
Telematika Vol 17, No 2: August (2024)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v17i2.2899

Abstract

Plant disease detection uses computer vision to identify disease attacks on plants from their leaves. However, some plant disease detection solutions still rely on cloud computing, which brings problems such as slow processing times and data privacy risks. This study aims to evaluate the performance of convolutional neural network (CNN) pruning in edge computing-based plant disease detection. We use Kaggle's plant disease image dataset, which contains three corn diseases. We also created an edge computing system architecture for plant disease detection utilizing the latest communication technology and middleware. Next, we developed an optimal CNN model for plant disease detection using grid search. In the final step, we pruned the CNN model and tested its performance, developing a novel normalized geometric mean (NG-mean) method for accuracy loss optimization. The test results show that class weights can optimize specificity and g-mean on the imbalanced dataset, with values of 0.995 and 0.983, respectively. The grid search then tunes the optimizer, learning rate, batch size, and number of epochs to achieve the highest accuracy of 0.947. Applying pruning produces several models with variations in sparsity and scheduling method. We used the new NG-mean method to find the best compressed model, which had constant scheduling, 0.8 sparsity, a mean accuracy loss of 1.05%, and a compression ratio (CR) of 2.71×. This study enhances the efficiency and privacy of plant disease detection by utilizing edge computing and optimizing CNN models, leading to faster processing and better data security. Future work could explore the application of the novel NG-mean method in other domains and the integration of additional plant species and diseases into the detection system.
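The following sketch shows magnitude pruning with a constant-sparsity schedule of 0.8 using the TensorFlow Model Optimization toolkit, in the spirit of the pruning step described here. The base CNN, training settings, and class weights are placeholders, and the paper's NG-mean metric itself is not reproduced.

```python
# Magnitude pruning with a constant 0.8 sparsity schedule; the base model is a placeholder.
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow.keras import layers, models

base = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),   # three corn disease classes
])

schedule = tfmot.sparsity.keras.ConstantSparsity(target_sparsity=0.8, begin_step=0)
pruned = tfmot.sparsity.keras.prune_low_magnitude(base, pruning_schedule=schedule)
pruned.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])

# UpdatePruningStep must be included in the callbacks during fine-tuning.
callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
# pruned.fit(X_train, y_train, epochs=10, callbacks=callbacks,
#            class_weight={0: 1.0, 1: 2.0, 2: 2.0})  # illustrative class weights
final = tfmot.sparsity.keras.strip_pruning(pruned)    # remove pruning wrappers
```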
Identification of Social Media Posts Containing Self-reported COVID-19 Symptoms using Triple Word Embeddings and Long Short-Term Memory Amalia, Raisa; Faisal, Mohammad Reza; Indriani, Fatma; Budiman, Irwan; Mazdadi, Muhammad Itqan; Abadi, Friska; Mafazy, Muhammad Meftah
Telematika Vol 17, No 1: February (2024)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v17i1.2774

Abstract

The COVID-19 pandemic has permeated the global sphere and influenced nearly all nations and regions. Common symptoms include fever, cough, fatigue, and loss of the sense of smell. The impact of COVID-19 on public health and the economy has made it a significant global concern. It has caused economic contraction in Indonesia, particularly in sectors based on face-to-face interaction and mobility, such as transportation, warehousing, construction, and food and beverages. Since the pandemic began, Twitter users have shared their symptoms in tweets; however, they could not confirm their concerns due to testing limitations, reporting delays, and pre-registration requirements in healthcare. Classification of Twitter data on COVID-19 topics has predominantly focused on sentiment analysis regarding the pandemic or vaccination, and research on identifying COVID-19 symptoms from social media messages is limited in the literature. The main objective of this study is to identify such symptoms using word embedding techniques and the LSTM algorithm. Various embedding techniques, namely Word2Vec, GloVe, FastText, and a composite approach, are used. LSTM is used for classification, as an improvement over the basic RNN technique. Evaluation criteria include accuracy, precision, and recall. The model with an input dimension of 147x100 achieves the highest accuracy at 89%. This study aims to find the best LSTM model for detecting COVID-19 symptoms in social media tweets; it evaluates LSTM models with different word embedding techniques and input dimensions, providing insights into the optimal text-based method for COVID-19 detection through social media texts.
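A sketch of the composite ("triple") embedding idea under stated assumptions: per-token vectors from three embedding models are concatenated into one matrix that initializes a frozen Embedding layer feeding an LSTM. The random matrices below merely stand in for pretrained Word2Vec, GloVe, and FastText vectors, and the layer sizes are illustrative.

```python
# Triple-embedding LSTM sketch; embedding matrices are random placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, dim, max_len = 5000, 100, 147   # 147-token inputs, as in the abstract
rng = np.random.default_rng(0)
w2v   = rng.normal(size=(vocab_size, dim))  # placeholder for Word2Vec vectors
glove = rng.normal(size=(vocab_size, dim))  # placeholder for GloVe vectors
ftxt  = rng.normal(size=(vocab_size, dim))  # placeholder for FastText vectors
triple = np.concatenate([w2v, glove, ftxt], axis=1)   # one 300-dim vector per token

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, triple.shape[1],
                     embeddings_initializer=tf.keras.initializers.Constant(triple),
                     trainable=False),       # keep the pretrained vectors fixed
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # symptom-report vs not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```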
Effect of Macroprudential Loan to Value (LTV) Policy using the Support Vector Regression (SVR) Approach Saadah, Siti; Purnomo, Muhammad Ridaffa
Telematika Vol 15, No 2: August (2022)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v15i2.1882

Abstract

Macroprudential policy aims to contain systemic risk and the cost of crises, especially in maintaining financial stability amid the COVID-19 pandemic. One of its instruments is the Loan to Value (LTV) ratio, the ratio between the value of credit that a conventional or sharia bank can extend and the value of the property pledged as collateral. This study aims to examine its influence on the public's uptake of home ownership credit (Kredit Kepemilikan Rumah, KPR). Data from Bank Indonesia (BI) show the LTV ratio increasing year on year. The dataset in this study was derived from five banks and covers 2014 to 2020. Given the characteristics of the data, the Support Vector Regression (SVR) machine learning algorithm was chosen to model this trend. Using this method, the results indicate which banks were influenced by the LTV ratio. The affected banks, namely those showing an inverse relationship with the value of home ownership credit, are foreign banks, mixed banks, Bank Persero, Bank Swasta, and Bank Perkreditan Rakyat.
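A minimal scikit-learn SVR sketch relating an LTV ratio series to mortgage (KPR) credit values, in the spirit of the study's approach; the numbers are synthetic placeholders, not Bank Indonesia data, and the kernel and hyperparameters are assumptions.

```python
# Illustrative SVR fit on synthetic LTV-vs-KPR data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
ltv = rng.uniform(70, 100, size=(84, 1))             # monthly LTV ratio, 2014-2020 (synthetic)
kpr = 5.0 * ltv[:, 0] + rng.normal(0, 10, size=84)   # synthetic KPR credit values

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(ltv, kpr)
pred = model.predict(ltv)
print("R^2:", round(r2_score(kpr, pred), 3))
```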
Garbage Image Classifier using Modified ResNet-50 Santoso, Bagus Dwi; Nafi'iyah, Nur
Telematika Vol 17, No 2: August (2024)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v17i2.2873

Abstract

This research proposes a deep learning model based on a pretrained ResNet-50 to classify 12 types of garbage. The model uses a modified ResNet-50 architecture with the Adamax and Adadelta optimizers and varying learning rates (0.1, 0.01, and 0.001). Six experiments were conducted to determine the most optimal training-parameter configuration for the proposed model. The results show that the model performed best with the Adadelta optimizer and a learning rate of 0.1, achieving a validation accuracy of 93.85%. In comparison, the Adamax optimizer with a learning rate of 0.001 yielded a validation accuracy of 93.44%. Despite these results, there is a tendency toward misclassification in the metal, plastic, and white-glass classes. Future work should focus on addressing these misclassification issues by expanding the dataset for these problematic classes, either by collecting additional images specific to these classes or by employing advanced data augmentation techniques to enhance the existing dataset and improve the model's accuracy.
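As an illustration, a Keras transfer-learning sketch with a frozen ResNet-50 backbone and a 12-class head, compiled with Adadelta at the best-reported learning rate of 0.1; the head layers and input size are assumptions rather than the paper's exact modification.

```python
# ResNet-50 transfer-learning sketch for 12 garbage classes; head layers are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(12, activation="softmax"),   # 12 garbage categories
])
model.compile(optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.1),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```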