Contact Name
Rizki Wahyudi
Contact Email
rizki.key@gmail.com
Phone
+6281329125484
Journal Mail Official
telematika@amikompurwokerto.ac.id
Editorial Address
Jl. Letjend Pol. Soemarto No.126, Watumas, Purwanegara, Kec. Purwokerto Utara, Kabupaten Banyumas, Jawa Tengah 53127
Location
Kab. Banyumas,
Jawa Tengah
INDONESIA
Telematika
ISSN : 1979-925X     EISSN : 2442-4528     DOI : 10.35671/telematika
Core Subject : Education,
Telematika, registered under ISSN 2442-4528 (online) and ISSN 1979-925X (print), is a scientific journal published by Universitas Amikom Purwokerto. The journal is registered in the CrossRef system with Digital Object Identifier (DOI) prefix 10.35671/telematika. The aim of this journal is to disseminate conceptual thoughts, ideas, and research results achieved in the area of Information Technology and Computer Science. Every article submitted to the editorial staff is first screened through an initial review by the Editorial Board. The articles are then sent to the Mitra Bebestari/peer reviewers and undergo a double-blind review process, after which they are returned to the authors for revision. These processes take a minimum of one month. For each manuscript, the Mitra Bebestari/peer reviewers assess both substantive and technical aspects. The final decision on article acceptance is made by the Editors based on the reviewers' comments. The Mitra Bebestari/peer reviewers who collaborate with Telematika are experts in Information Technology and Computer Science and related issues.
Arjuna Subject : -
Articles: 9 Documents
Issue: Vol 18, No 2: August (2025)
Toward a Modular, Low-Latency Architecture with BERT-based Big Media Data Analysis
Widyawan, Widyawan; Murti, Handoko Wisnu; Putra, Guntur Dharma; Nurmanto, Eddy; Affandi, Achmad
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3151

Abstract

The significant growth of digital and social media platforms has introduced massive streams of unstructured media data. However, current big data approaches are not specifically tailored to the high volume and velocity of media data, which consists of unstructured and lengthy full-text messages. This study proposes a modular and stream-oriented big data architecture for media data. The proposed architecture consists of data crawlers, a message broker, machine learning modules, persistent storage, and analytical dashboards, with a publish-subscribe communication pattern that enables asynchronous, decoupled data processing. The system integrates IndoBERT, a transformer-based model fine-tuned for the Indonesian language, enabling real-time semantic tagging within the streaming pipeline. The proposed solution has been implemented as a prototype using open-source technologies in an on-premise cluster. The primary novelty is the successful integration and operationalization of a large, transformer-based language model (IndoBERT) within a low-latency streaming pipeline. The experimental results underscore the feasibility of deploying scalable, vendor-neutral media analytics platforms for institutions with high sensitivity to privacy and cost. Architectural quality is quantitatively evaluated through Martin's Instability Metric and Coupling Between Objects (CBO), confirming high modularity across components. The system demonstrates an end-to-end latency of 3.121 seconds and a deep learning latency of 2.333 seconds while processing 32,102 messages per day, an explicit trade-off in which the 2.333-second deep learning inference step provides advanced semantic depth. This study presents a reference architecture for scalable, intelligent real-time media analytics systems that support public-sector and academic deployments requiring data privacy and control over infrastructure.
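
To make the modularity evaluation above concrete, the following is a minimal Python sketch of Martin's Instability Metric, I = Ce / (Ca + Ce), computed over a hypothetical dependency graph of the pipeline's components; the component names and dependency edges are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch of Martin's Instability Metric, I = Ce / (Ca + Ce),
# where Ce = efferent (outgoing) couplings and Ca = afferent (incoming) couplings.
# Component names and dependency edges below are hypothetical examples,
# not the actual components or counts reported in the paper.

# Hypothetical dependency graph: component -> components it depends on.
dependencies = {
    "crawler":   {"broker"},
    "ml_module": {"broker", "storage"},
    "dashboard": {"storage"},
    "broker":    set(),
    "storage":   set(),
}

def instability(component: str, deps: dict[str, set[str]]) -> float:
    """Return Martin's Instability Metric for one component (0 = stable, 1 = unstable)."""
    ce = len(deps[component])                                    # outgoing couplings
    ca = sum(component in targets for targets in deps.values())  # incoming couplings
    return ce / (ca + ce) if (ca + ce) else 0.0

for name in dependencies:
    print(f"{name:10s} I = {instability(name, dependencies):.2f}")
```
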
Violence and Robbery Detection System Using YOLOv5 Algorithm Based on IoT Technology
Khoiriyah, Hani'atul; Abdillah, Fauzan; Aziz, Afris Nurfal; Wiryawan, I Gede
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3088

Abstract

Violence and robbery are two common forms of crime that often cause material losses, psychological trauma, and insecurity within society. Conventional CCTV systems are limited in preventing such incidents, which highlights the need for more intelligent and responsive security solutions. The primary objective of this research is to design and evaluate SmartGuard, a real-time detection system for violence and robbery based on artificial intelligence (AI) using the YOLOv5 algorithm, integrated with Internet of Things (IoT) technology for remote monitoring. This study employed an experimental design with several stages: dataset preparation, model training, testing, model analysis, and system integration with Raspberry Pi, Firebase, and a mobile application. The dataset consisted of 6,900 labeled images across three classes: violence, robbery, and normal activity. Model evaluation was conducted using a separate test dataset and analyzed with a confusion matrix. The results show that the model achieved an overall accuracy of 70.94%. The system performed relatively well in detecting violence, with a precision of 71.13% and an F1-score of 62.47%. However, recall values for robbery (47.53%) and normal activity (48.99%) were considerably lower, indicating challenges in consistently recognizing these classes. Despite these limitations, SmartGuard allows users to view and receive notifications in emergency situations, enabling them to take quick action and monitor the situation effectively.
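
As an illustration of the confusion-matrix evaluation described above, the following is a minimal scikit-learn sketch of per-class precision, recall, and F1 for the three classes; the label lists are tiny hypothetical placeholders, not the SmartGuard test set.

```python
# Sketch of per-class evaluation from a confusion matrix, as described above.
# The y_true / y_pred lists are tiny hypothetical placeholders, not the actual
# SmartGuard test results; in practice they would come from YOLOv5 detections
# on the held-out test images.
from sklearn.metrics import confusion_matrix, classification_report

labels = ["violence", "robbery", "normal"]
y_true = ["violence", "robbery", "normal", "violence", "robbery", "normal"]
y_pred = ["violence", "normal",  "normal", "violence", "robbery", "violence"]

print(confusion_matrix(y_true, y_pred, labels=labels))
print(classification_report(y_true, y_pred, labels=labels, digits=4))
```
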
Automatic Analysis of Natural Disaster Messages on Social Media Using IndoBERT and Multilingual BERT
Safitri, Yasmin Dwi; Faisal, Mohammad Reza; Kartini, Dwi; Saragih, Triando Hamonangan; Abadi, Friska; Bachtiar, Adam Mukharil
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3140

Abstract

Information about natural disasters disseminated through social media can serve as an important data source for mitigation processes and early warning systems. Social media platforms, such as X (formerly known as Twitter), have become primary channels for conveying real-time information, especially during disaster emergencies. Given the large amount of unstructured disaster-related text that must be processed, the main challenge is accurately filtering and classifying messages into three categories: eyewitness, non-eyewitness, and don’t know. This research aims to compare the performance of four BERT-based natural language processing models, namely IndoBERT, IndoBERT with Masked Language Modeling (MLM), Multilingual BERT, and Multilingual BERT with MLM, in classifying Indonesian-language disaster messages. The dataset used in this study was obtained from previous research and publicly available data on GitHub, consisting of annotated messages related to floods, earthquakes, and forest fires. The method applied is a deep learning approach using the hold-out technique with an 80:20 ratio for training and testing data, and the same ratio applied to split the training data into training and validation subsets, with stratification to maintain balanced class proportions. In addition, variations in batch size were explored to evaluate their effect on model performance stability. The results show that the IndoBERT model achieved the highest performance on the flood and earthquake datasets, with accuracies of 80.67% and 81.50%, respectively. Meanwhile, IndoBERT with MLM pre-training recorded the highest accuracy on the forest fire dataset, at 88.33%. Overall, IndoBERT demonstrated the most consistent and superior performance across datasets compared to the other models. These findings indicate that IndoBERT has strong capabilities in understanding Indonesian disaster-related text, and the results can be used as a foundation for developing automatic classification systems to support real-time disaster monitoring and early warning applications.
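
The hold-out protocol described above can be sketched as follows with scikit-learn; the example texts and labels are hypothetical placeholders for the annotated disaster messages.

```python
# Sketch of the stratified 80:20 hold-out split described above, applied twice:
# once for train/test and once to carve a validation set out of the training data.
# `texts` and `labels` are hypothetical placeholders for the annotated disaster
# messages (eyewitness / non-eyewitness / don't know).
from sklearn.model_selection import train_test_split

texts  = ["banjir di rumah saya", "katanya ada gempa", "apa itu karhutla?"] * 10
labels = ["eyewitness", "non-eyewitness", "dont-know"] * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)

X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)

print(len(X_tr), len(X_val), len(X_test))  # roughly 64% / 16% / 20% of the data
```
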
Artificial Intelligence in Decision Support Systems for Job Promotions
Alawiah, Enok Tuti; Sunarti, Sunarti
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3169

Abstract

Education personnel play an important role in the success of education, carrying out administration, management, development, supervision, and technical services that support the educational process in educational units; a transparent promotion system is therefore essential. The declining performance of education personnel at the junior high school level in West Jakarta, particularly due to the ineffectiveness of the promotion system, demonstrates the need for an objective, data-driven assessment mechanism. This study aims to develop a promotion recommendation model using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method integrated with artificial intelligence (AI) to improve the accuracy, objectivity, and efficiency of the decision-making process. The study involved 112 civil servant education personnel from 53 junior high schools in eight sub-districts of West Jakarta, selected through multistage random sampling. The analysis used five main criteria: educational background, performance, technical skills, length of service, and work motivation. AI was used to automate normalization, weighting, and pattern analysis. The results showed that TOPSIS produced an objective candidate ranking, with respondent R099 obtaining the highest Closeness Coefficient (≈0.7704) and thus being the most suitable candidate for promotion. The integration of TOPSIS and AI was shown to increase analysis speed, reduce human bias, and provide more consistent and accurate recommendations for education staff promotion.
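
For readers unfamiliar with the closeness coefficient, the following is a minimal NumPy sketch of the TOPSIS ranking step; the decision matrix, weights, and candidate IDs are hypothetical (three candidates rather than the study's 112 respondents), and all five criteria are treated as benefit criteria.

```python
# Minimal TOPSIS sketch for the promotion-ranking step described above.
# The decision matrix, weights, and candidate IDs are hypothetical placeholders,
# not the study's 112-respondent data; all five criteria are treated as benefit criteria.
import numpy as np

candidates = ["R001", "R002", "R003"]
# rows: candidates; columns: education, performance, technical skills, tenure, motivation
X = np.array([[3, 80, 70, 10, 85],
              [4, 85, 75,  6, 80],
              [3, 78, 90,  8, 88]], dtype=float)
w = np.array([0.2, 0.25, 0.2, 0.15, 0.2])           # criterion weights (sum to 1)

V = (X / np.linalg.norm(X, axis=0)) * w             # vector normalization, then weighting
ideal, anti = V.max(axis=0), V.min(axis=0)          # ideal and anti-ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)           # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)            # distance to anti-ideal solution
cc = d_neg / (d_pos + d_neg)                        # closeness coefficient

for name, score in sorted(zip(candidates, cc), key=lambda t: -t[1]):
    print(f"{name}: CC = {score:.4f}")
```
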
Systematic Review of Supervised Learning Models for Network Flood Detection (NFD): Trends, Performance Evaluation, and Implementation Insights
Habibi, Roni; Widana, Naufal Dekha
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3183

Abstract

Due to the growing volume, speed, and sophistication of malicious traffic, Network Flood Detection (NFD), especially in the context of Distributed Denial of Service (DDoS) attacks, remains a crucial challenge in contemporary network security. Supervised machine learning has been widely used to enhance the precision, scalability, and real-time detection capabilities of NFD systems. However, current research reports inconsistent results on the optimal supervised learning algorithm, mostly because of differences in datasets, feature engineering methods, assessment criteria, and deployment settings. To assess supervised learning models applied to NFD, this study conducts a Systematic Literature Review (SLR) using the PRISMA framework. A structured search was performed via Scopus, IEEE Xplore, SpringerLink, and ScienceDirect, encompassing papers from 2019 to 2025. After an initial set of 516 studies was screened against predefined inclusion and exclusion criteria, 40 primary papers and 16 additional articles were found to be appropriate for synthesis. Algorithms, datasets, evaluation criteria, feature selection techniques, and deployment characteristics were all covered in the data extraction process. According to the review, models such as Random Forest, XGBoost, K-Nearest Neighbor, and Support Vector Machine regularly perform well, with accuracy ranging from 92% to 99%, depending on preprocessing methods and dataset features. Common problems highlighted include dataset imbalance, lack of real-time adaptation, and insufficient generalization to unseen attack types. The results show that supervised learning remains a promising method for NFD, particularly when combined with balanced datasets, hybrid or ensemble model techniques, and optimized feature engineering. To increase real-time resilience against changing network threats, further research is urged to incorporate deep learning, lightweight edge models, and adaptive learning frameworks.
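
The following is an illustrative sketch of the model-comparison pattern the reviewed studies follow, using a synthetic dataset and scikit-learn implementations (with GradientBoosting standing in for XGBoost); the printed accuracies are not the 92% to 99% figures reported in the literature.

```python
# Illustrative sketch of the supervised-model comparison pattern the reviewed
# NFD studies follow. The data is synthetic (not real flow records), and XGBoost
# is replaced by scikit-learn's GradientBoostingClassifier to keep dependencies
# minimal, so the printed accuracies are not the reviewed results.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)  # imbalance mimics benign vs. flood traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "K-Nearest Neighbor": KNeighborsClassifier(),
    "SVM": SVC(),
}
for name, model in models.items():
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:20s} accuracy = {acc:.3f}")
```
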
Enhancing the GLANCE Framework for Line-Level Defect Prediction: An Empirical Study of Semantically-Aware Metrics and Non-Linear Classifiers
Mujaddid, Zahid; Utami, Ema
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3196

Abstract

Line-level defect prediction (LLDP) is critical for reducing software maintenance costs, yet its industrial adoption is often hindered by high false alarm rates that erode developer trust. While the state-of-the-art GLANCE-LR framework offers a lightweight solution, it relies on linear classifiers and purely syntactic heuristics, failing to capture the non-linear defect patterns and semantic risks associated with complex code constructs. To bridge the gap between operational efficiency and semantic awareness, this paper proposes GLANCE++, an enhanced framework that integrates a non-linear LightGBM classifier for refined file-level filtering and introduces three semantically-aware line metrics: Cognitive Complexity Score (CCS), API-Weighted Number of Function Calls (AW-NFC), and Variable-Write Count (VWC). These metrics shift the prediction paradigm from counting tokens to modeling "code risk." Empirical evaluation on 19 open-source Java projects (142 releases) reveals that while the non-linear file classifier yields marginal gains, the semantic line-level metrics achieve statistically significant improvements in precision and False Alarm Rate (FAR). However, this increased selectivity introduces a trade-off, resulting in reduced recall compared to the baseline. Our findings demonstrate that improving the semantic intelligence of heuristics yields far greater impact than increasing model complexity. This suggests that future LLDP research should prioritize theoretically grounded risk metrics over computationally expensive deep learning architectures to ensure practical deployment in real-time CI/CD pipelines.
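
The following is a deliberately simplified sketch of the two-stage idea, a non-linear file-level filter followed by line ranking; the synthetic features and the naive token-count line score are placeholders and do not reproduce the paper's CCS, AW-NFC, or VWC metrics.

```python
# Simplified sketch of GLANCE-style two-stage prediction: a non-linear file-level
# classifier filters risky files, then lines in those files are ranked by a line
# score. The synthetic features and the naive token-count line score are
# placeholders only; the paper's CCS, AW-NFC, and VWC metrics are not reproduced here.
import numpy as np
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X_files = rng.normal(size=(500, 10))            # hypothetical file-level metrics
y_files = rng.integers(0, 2, size=500)          # hypothetical defect labels

file_clf = LGBMClassifier(random_state=0).fit(X_files, y_files)
risk = file_clf.predict_proba(X_files)[:, 1]    # file-level defect probability

def line_score(line: str) -> int:
    """Naive stand-in for a line-level risk metric: count of code tokens."""
    return len(line.split())

risky_file_lines = ["if (a != null && a.size() > MAX) {", "return;", "x = y = z = 0;"]
if risk[0] > 0.5:                               # only inspect lines in risky files
    ranked = sorted(risky_file_lines, key=line_score, reverse=True)
    print(ranked)
```
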
Functional Evaluation of the Logia Dashboard Using Boundary Value Testing and Cause-Effect Graph Techniques
Ramadhan, Muhammad Rizky Aulia; Abadi, Friska; Nugrahadi, Dodon Turianto; Saputro, Setyo Wahyu; Herteno, Rudy
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3121

Abstract

The Logia Dashboard is a web-based information system used to manage rehabilitation plant data on post-mining land. As an alpha-stage system, Logia requires thorough functional and performance evaluation to ensure that all input validations, logical processes, and system responses operate correctly before wider implementation. This study aims to evaluate the functional reliability and performance of the Logia Dashboard by applying a combined approach of Boundary Value Testing (BVT) and Cause-Effect Graph (CEG) techniques, supported by performance testing using Google Lighthouse. The research design adopts a black-box testing approach. BVT is applied to validate input boundaries on critical features, including login, data editing, QR code generation, and account creation. Meanwhile, CEG is used to model logical relationships between input conditions and system outputs to generate systematic test cases. A total of 39 optimized functional test cases were executed in a controlled local environment. Performance testing was conducted using Lighthouse by measuring key metrics such as First Contentful Paint (FCP), Largest Contentful Paint (LCP), Total Blocking Time (TBT), and Cumulative Layout Shift (CLS). The functional testing results show that 37 out of 39 test cases passed, yielding a success rate of 94.87%. Two failed cases were identified in the login feature, indicating weaknesses in input validation feedback. Performance testing produced an average Lighthouse score of 97, demonstrating that the system has excellent load speed and interface stability, although minor layout instability was detected on certain pages. These results indicate that the combined application of BVT and CEG is effective for detecting boundary-related and logical input errors in alpha-stage web systems. The findings also provide concrete recommendations for improving login validation and interface stability, supporting further development of the Logia Dashboard toward a more reliable and robust system for post-mining land management.
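
As an illustration of the Boundary Value Testing technique used above, the following sketch generates the classic boundary test points for a bounded input; the 1 to 50 character length range is an assumed example constraint, not a documented Logia Dashboard limit.

```python
# Sketch of standard boundary value analysis for a bounded input field.
# The 1-50 character length range is a hypothetical constraint used for
# illustration, not a documented limit of the Logia Dashboard.

def boundary_values(min_v: int, max_v: int) -> list[int]:
    """Classic BVT points: just below/at/above each boundary, plus a nominal value."""
    nominal = (min_v + max_v) // 2
    return sorted({min_v - 1, min_v, min_v + 1, nominal, max_v - 1, max_v, max_v + 1})

def is_valid_length(value: int, min_v: int = 1, max_v: int = 50) -> bool:
    """Stand-in for the system's input validation rule under test."""
    return min_v <= value <= max_v

for length in boundary_values(1, 50):
    print(f"input length {length:3d} -> expected valid: {is_valid_length(length)}")
```
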
Comparative Analysis of Support Vector Machine and IndoBERT Algorithms in Stance Detection on Political Issues in Social Media X: A Case Study of BPI Danantara
Taniputra, Dhammananda; Beny, Beny; Lucia, Lidwina Demai; Velysha, Rachel
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3219

Abstract

Stance detection is an NLP task aimed at identifying and classifying a writer’s attitude toward a topic as supportive, opposing, or neutral based on text analysis, providing deeper insights into public opinion and supporting data-driven decision-making. This study focuses on Indonesian society’s stance toward the National Investment Management Agency (BPI Danantara), which has received positive responses for its economic potential as well as negative reactions due to concerns over governance and corruption risks. In this research, a machine learning approach using the Support Vector Machine algorithm and a deep learning approach using the IndoBERT model were applied to detect pro, contra, and neutral stances in posts from the X social media platform. A total of 6,805 tweets were collected through scraping and manually labeled by three annotators. The dataset was then processed through cleaning, undersampling, and modeling, and evaluated using accuracy, precision, recall, F1-score, and ROC-AUC metrics. Experiments were conducted across various scenarios, including binary and three-class classification as well as balanced and imbalanced datasets, to assess the effectiveness of each model. The results indicate that IndoBERT consistently outperforms SVM across all scenarios, particularly in capturing nuanced stances in Indonesian text. However, statistical evaluation using the paired t-test and the Wilcoxon signed-rank test reveals that the performance differences between the two models are generally not statistically significant, except in the three-class classification scenario with undersampling, where IndoBERT shows a significant advantage in handling balanced multi-class stance detection. These findings demonstrate the advantage of Transformer-based approaches for complex stance detection tasks and highlight their potential for developing automated public opinion monitoring systems. Nevertheless, this study has limitations, including the relatively small dataset, the focus on a single social media platform, and the methods applied. Future research could explore larger and more diverse datasets, incorporate multiple social media platforms, and employ other Transformer-based models to enhance generalization and improve stance detection accuracy.
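
The paired significance tests mentioned above can be sketched as follows with SciPy; the per-split F1 arrays are hypothetical placeholders, not the study's measured results.

```python
# Sketch of the paired statistical comparison described above: the same
# evaluation splits scored by both models, compared with a paired t-test and
# a Wilcoxon signed-rank test. The F1 arrays are hypothetical placeholders,
# not the study's measured results.
from scipy.stats import ttest_rel, wilcoxon

svm_f1      = [0.61, 0.58, 0.63, 0.60, 0.59]   # per-split F1, SVM (hypothetical)
indobert_f1 = [0.66, 0.62, 0.65, 0.67, 0.63]   # per-split F1, IndoBERT (hypothetical)

t_stat, t_p = ttest_rel(indobert_f1, svm_f1)
w_stat, w_p = wilcoxon(indobert_f1, svm_f1)

print(f"paired t-test:  t = {t_stat:.3f}, p = {t_p:.4f}")
print(f"Wilcoxon test:  W = {w_stat:.3f}, p = {w_p:.4f}")
```
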
Addressing Algorithmic Bias and Data Privacy in Human Resource Management
Herdiana, Hendi; Munir, Munir; Hurriyati, Ratih; Sultan, Mokh. Adib; Tua, Frans David; Ergashevna, Buriyeva Kibrio
Telematika Vol 18, No 2: August (2025)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v18i2.3177

Abstract

Artificial intelligence (AI) has transformed Human Resource Management (HRM) by automating recruitment, enhancing performance evaluation, and enabling data-driven workforce planning. However, its adoption raises critical concerns related to algorithmic bias, data privacy, and employee trust, creating a significant gap in understanding how these technical and ethical dimensions interact. This study aims to synthesize current evidence on the impact of AI on HRM functions, the challenges associated with fairness and privacy, and employee perceptions of AI-enabled HRM systems. A Systematic Literature Review (SLR) was conducted following PRISMA 2020 guidelines and structured using the PICOC framework. Searches across major scientific databases identified 1,042 records, of which 35 peer-reviewed studies published between 2020 and 2025 met all eligibility criteria. The findings show that AI enhances HRM efficiency and decision quality but presents recurring risks of algorithmic bias, opaque decision-making, and weak data governance. Employee perceptions of fairness, transparency, and privacy strongly influence trust and acceptance of AI-based HRM systems. The review concludes that effective AI adoption requires socio-technical integration combining algorithmic capability with robust governance and ethical safeguards. The study introduces an integrated conceptual model linking AI capabilities, HRM functions, data governance, employee trust, and organizational outcomes, which represents a key theoretical contribution and a novel synthesis of previously fragmented research.
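
Since algorithmic bias in AI-assisted recruitment is central to the review, the following sketch shows one common screening check, the selection-rate (four-fifths) ratio between groups; the group labels, decisions, and 0.8 threshold convention are illustrative, not drawn from the reviewed studies.

```python
# Sketch of a common algorithmic-bias screening check for an AI hiring model:
# the selection-rate ratio between groups (the "four-fifths rule"). The group
# labels and model decisions are hypothetical, not data from the reviewed studies.
from collections import defaultdict

# (protected-attribute group, model decision: 1 = shortlisted, 0 = rejected)
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

totals, selected = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    selected[group] += decision

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print(f"disparate impact ratio = {ratio:.2f} "
      f"({'passes' if ratio >= 0.8 else 'fails'} the four-fifths screening rule)")
```
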
