Contact Name
Johan Reimon Batmetan
Contact Email
garuda@apji.org
Phone
+6285885852706
Journal Mail Official
danang@stekom.ac.id
Editorial Address
Jl. Majapahit No.304, Pedurungan Kidul, Kec. Pedurungan, Semarang, Provinsi Jawa Tengah, 52361
Location
Kota Semarang,
Jawa Tengah,
INDONESIA
Journal of Technology Informatics and Engineering
ISSN : 2961-9068 | EISSN : 2961-8215 | DOI : 10.51903
Core Subject : Science
Power Engineering, Telecommunication Engineering, Computer Engineering, Control and Computer Systems, Electronics, Information Technology, Informatics, Data and Software Engineering, Biomedical Engineering
Articles: 161 Documents
Affective Gesture Recognition in Virtual Reality Using LSTM-CNN Fusion for Emotion-Adaptive Interaction Gupta, Soonya; Kumar, Deepa; Sharma, Shiva
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.278

Abstract

Emotion recognition in Virtual Reality (VR) has become increasingly relevant for enhancing immersive user experiences and enabling emotionally responsive interactions. Traditional approaches that rely on facial expressions or vocal cues often face limitations in VR environments due to occlusion by head-mounted displays and restricted audio inputs. This study aims to develop an emotion recognition model based on body gestures using a hybrid deep learning architecture combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM). The CNN component extracts spatial features from skeletal data, while the LSTM processes the temporal dynamics of the gestures. The proposed model was trained and evaluated using a benchmark VR gesture-emotion dataset annotated with five distinct emotional states: happy, sad, angry, neutral, and surprised. Experimental results show that the CNN-LSTM model achieved an overall accuracy of 89.4%, with precision and recall scores of 88.7% and 87.9%, respectively. These findings demonstrate the model’s ability to generalize across various gesture patterns with high reliability. The integration of spatial and temporal features proves effective in capturing subtle emotional expressions conveyed through movement. The contribution of this research lies in offering a robust and non-intrusive method for emotion detection tailored to immersive VR settings. The model opens potential applications in virtual therapy, training simulations, and affective gaming, where real-time emotional feedback can significantly enhance system adaptiveness and user engagement. Future work will explore real-time implementation, multimodal sensor fusion, and advanced architectures, such as attention mechanisms, for further performance improvements.
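As a rough illustration of the architecture described above, the following PyTorch sketch combines a per-frame CNN feature extractor over skeletal joints with an LSTM over the frame sequence and a linear head scoring the five emotions. The layer sizes, joint count, and input shape are illustrative assumptions, not the authors' published configuration.

```python
# Minimal CNN-LSTM fusion sketch for gesture-based emotion recognition.
# All dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class CnnLstmEmotion(nn.Module):
    def __init__(self, coords=3, hidden=128, classes=5):
        super().__init__()
        self.cnn = nn.Sequential(              # spatial features per frame
            nn.Conv1d(coords, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)  # temporal dynamics
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):                                  # x: (B, T, joints, coords)
        b, t, j, c = x.shape
        f = x.reshape(b * t, j, c).transpose(1, 2)         # (B*T, coords, joints)
        f = self.cnn(f).squeeze(-1).reshape(b, t, -1)      # (B, T, 64)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                       # logits over 5 emotions

logits = CnnLstmEmotion()(torch.randn(2, 60, 25, 3))       # 60-frame clips, 25 joints
print(logits.shape)                                        # torch.Size([2, 5])
```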
Real-Time Enhancement of Low-Light Images Using Generative Adversarial Networks (GANs) Zhou, Feng; Qiao, Ying; Li, Quanmin
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.279

Abstract

Low-light image enhancement plays a crucial role in fields such as surveillance, photography, and medical imaging, where inadequate lighting significantly reduces image quality, leading to loss of detail and increased noise. Traditional enhancement methods, such as histogram equalization and Retinex, struggle to preserve fine details and often amplify noise, limiting their effectiveness in real-world applications. To address these issues, this study proposes a Generative Adversarial Networks (GANs)-based model to enhance low-light images in real time while maintaining high visual fidelity. The model aims to improve contrast, reduce noise, and retain image structure more effectively than conventional methods. The proposed GAN model is trained using the LOL and SID datasets and evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). Experimental results show that the method achieves a PSNR of 28.4 dB and an SSIM of 0.91, outperforming histogram equalization (PSNR: 18.5 dB, SSIM: 0.65) and Retinex (PSNR: 20.3 dB, SSIM: 0.72). Although the model operates in real time, its inference time of 35.6 ms per image suggests that further optimization is needed to support edge-computing applications. This study demonstrates that GAN-based enhancement significantly improves low-light images by preserving structural integrity while reducing noise. Future research should focus on optimizing the model for faster processing, experimenting with larger and more diverse datasets, and integrating the system into real-world applications such as automated surveillance and smart camera technologies.
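The abstract describes adversarial training of a generator against a discriminator on paired low/normal-light images (the LOL dataset provides such pairs). Below is a minimal, generic PyTorch sketch of one training step; the tiny architectures, the L1 loss weight of 10.0, and the patch-logit discriminator are illustrative assumptions, not the authors' design.

```python
# One generic GAN training step for paired low-light enhancement.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy paired batch: low-light inputs and normal-light ground truth
low, normal = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)

G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(64, 1, 4, stride=2, padding=1))  # patch logits
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Discriminator step: real normal-light images vs. generated enhancements
fake = G(low).detach()
real_logits, fake_logits = D(normal), D(fake)
loss_d = (bce(real_logits, torch.ones_like(real_logits))
          + bce(fake_logits, torch.zeros_like(fake_logits)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D while staying close to ground truth (L1 term)
fake = G(low)
adv = D(fake)
loss_g = bce(adv, torch.ones_like(adv)) + 10.0 * F.l1_loss(fake, normal)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```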
Decentralized AI on The Edge: Implementing Federated Learning for Predictive Maintenance in Industrial IoT Systems Supriadi, Candra; Wahyudi, Wiwid; Priyadi, Agus; Jin, Kim So
Journal of Technology Informatics and Engineering Vol. 4 No. 2 (2025): AUGUST | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i2.281

Abstract

The integration of Artificial Intelligence (AI) into Industrial Internet of Things (IIoT) systems has enhanced predictive maintenance strategies by enabling early detection of faults in machinery. However, centralized AI models often face challenges related to data privacy, latency, and communication overhead in industrial environments. This study aims to develop a decentralized AI framework utilizing Federated Learning (FL) on edge devices to enhance predictive maintenance in a medium-scale manufacturing plant. The proposed system enables local edge nodes to collaboratively train machine learning models without sharing raw data, thereby preserving data privacy and reducing network load. A prototype was developed using embedded edge devices integrated with vibration and temperature sensors to detect machine anomalies. Federated averaging was used to aggregate local models into a global model. Experimental results show that the federated model achieved 91.4% accuracy in anomaly detection, comparable to centralized approaches, while significantly reducing data transmission volume by 68%. This research demonstrates the feasibility of deploying federated learning on resource-constrained edge devices for predictive maintenance in IIoT environments. The findings suggest that decentralized AI at the edge can offer efficient, privacy-preserving, and scalable solutions for industrial applications.
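Federated averaging, the aggregation rule named in the abstract, can be sketched in a few lines: each edge node trains locally, and the server averages parameters weighted by local sample counts. A minimal NumPy sketch, assuming the standard FedAvg weighting:

```python
# Federated averaging (FedAvg) over locally trained parameter lists.
import numpy as np

def fed_avg(local_weights, sample_counts):
    """Weighted average of per-node parameter lists (one list per node)."""
    total = sum(sample_counts)
    return [
        sum(w[i] * (n / total) for w, n in zip(local_weights, sample_counts))
        for i in range(len(local_weights[0]))
    ]

# Three edge nodes, each holding two parameter arrays (e.g., weights, bias)
nodes = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
global_model = fed_avg(nodes, sample_counts=[120, 80, 200])
print([p.shape for p in global_model])  # [(4, 2), (2,)]
```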
Integrative Deep Learning Architecture for High-Accuracy Medical Image Segmentation: Combining U-Net, ResNet, and Transformers Sholekhah, Devi Zakiyatus; Noviar, Dian
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.288

Abstract

Medical image segmentation plays a vital role in diagnosis and treatment planning by extracting clinically relevant information from imaging data. Conventional methods often struggle with variations in anatomical structure and imaging quality, leading to suboptimal segmentation. Recent advancements in Deep Learning, particularly Convolutional Neural Networks (CNNs) and Transformers, have improved segmentation accuracy; however, individual models such as U-Net, ResNet, and Transformer still face limitations in preserving spatial details, extracting deep features, and modeling long-range dependencies. This study proposes a hybrid Deep Learning model that integrates U-Net, ResNet, and Transformer to overcome these challenges and enhance segmentation performance. The proposed hybrid model was evaluated on several publicly available datasets, including BraTS, ISIC, and DRIVE, using Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) as performance metrics. Experimental results indicate that the hybrid model achieved a DSC of 0.92 and an IoU of 0.86, outperforming U-Net (DSC: 0.82, IoU: 0.75), ResNet (DSC: 0.85, IoU: 0.78), and Transformer (DSC: 0.88, IoU: 0.80). Additionally, the model maintained an inference time of 55 ms per image, demonstrating its potential for real-time applications. This study highlights the benefits of combining CNN-based and Transformer-based architectures to capture both local details and global context, providing an effective and efficient solution for medical image segmentation.
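The two reported metrics have standard definitions: DSC = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B|. A minimal NumPy sketch for binary masks:

```python
# Dice Similarity Coefficient and Intersection over Union for binary masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.random.rand(128, 128) > 0.5      # stand-in predicted mask
target = np.random.rand(128, 128) > 0.5    # stand-in ground-truth mask
print(dice(pred, target), iou(pred, target))
```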
AI-Driven Adaptive Radar Systems for Real-Time Target Tracking in Urban Environments Ghofur, Muhammad Jamal Udin; Riyanto, Eko
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.289

Abstract

Radar systems play a crucial role in target tracking within urban environments, where challenges such as clutter, multipath effects, and electromagnetic interference significantly impact detection accuracy. Traditional radar methods often struggle to adapt to dynamic urban conditions, leading to decreased reliability in real-time target tracking. This study aims to develop and evaluate an AI-driven adaptive radar system that enhances tracking accuracy in urban settings. The research employs a quantitative approach using simulations to model radar signal processing under various environmental conditions. The AI model, based on Convolutional Neural Networks (CNN), is trained to optimize radar performance by filtering out noise and dynamically adjusting detection parameters. The results indicate that the AI-based radar system achieves a tracking accuracy of 95.2%, significantly outperforming traditional radar systems, which only reach 80% accuracy. Additionally, the AI-enhanced radar reduces response time to 120 milliseconds, compared to 250 milliseconds in conventional systems, demonstrating improved real-time processing capabilities. The system also exhibits greater resilience to high-clutter environments, maintaining stable target detection despite signal interference. These findings highlight the potential of AI in enhancing radar functionality for applications such as surveillance, traffic monitoring, and security. Future research should focus on integrating AI-driven radar with real-world radar hardware, exploring multi-sensor fusion, and refining adaptive learning techniques to further optimize tracking performance in complex environments.
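As a hedged sketch of the CNN-based filtering stage described above, the following PyTorch snippet classifies range-Doppler patches as clutter versus target. The patch size and layer choices are illustrative assumptions; the paper's simulation setup is not reproduced here.

```python
# Toy CNN that scores range-Doppler patches as clutter vs. target.
import torch
import torch.nn as nn

radar_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),    # logits: [clutter, target]
)
patches = torch.randn(8, 1, 32, 32)    # batch of range-Doppler patches
print(radar_cnn(patches).shape)        # torch.Size([8, 2])
```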
Hybrid Explainable AI (XAI) Framework for Detecting Adversarial Attacks in Cyber-Physical Systems Taufik, Mohammad; Aziz, Mohammad Saddam; Fitriana, Aulia
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.295

Abstract

Cyber-Physical Systems (CPS) are increasingly deployed in critical infrastructure yet remain vulnerable to adversarial attacks that manipulate sensor data to mislead AI-based decision-making. These threats demand not only high-accuracy detection but also transparency in model reasoning. This study proposes a Hybrid Explainable AI (XAI) Framework that integrates Convolutional Neural Networks (CNN), SHAP-based feature interpretation, and rule-based reasoning to detect adversarial inputs in CPS environments. The framework is tested on two simulation scenarios: industrial sensor networks and autonomous traffic sign recognition. Using datasets of 10,000 samples (50% adversarial via FGSM and PGD), the model achieved an accuracy of 97.25%, precision of 96.80%, recall of 95.90%, and F1-score of 96.35%. SHAP visualizations effectively distinguished between normal and adversarial inputs, and the added explainability module increased inference time by only 8.5% over the baseline CNN (from 18.5 ms to 20.1 ms), making it suitable for real-time CPS deployment. Compared to prior methods (e.g., CNN + Grad-CAM, Random Forest + LIME), the proposed hybrid framework demonstrates superior performance and interpretability. The novelty of this work lies in its tri-level integration of predictive accuracy, explainability, and rule-based logic within a single real-time detection system—an approach not previously applied in CPS adversarial defense. This research contributes toward trustworthy AI systems that are robust, explainable, and secure by design.
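A hedged sketch of the SHAP interpretation step follows, using a scikit-learn random forest as a stand-in model: it computes per-feature attributions for candidate inputs, the kind of signal the paper's rule-based layer could threshold. The model, data, and aggregation are all illustrative assumptions, and the layout of `shap_values` output differs across shap versions.

```python
# SHAP attributions over a stand-in classifier; a real system would feed
# these into rule-based checks for adversarial-input signatures.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 8)                         # stand-in sensor features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)          # stand-in labels
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(clf)
sv = np.asarray(explainer.shap_values(X[:10]))     # list-per-class (older shap)
print(sv.shape)                                    # or 3-D array (newer shap)

# Illustrative rule: flag explanations with unusually large attribution mass
print(np.abs(sv).sum())
```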
Blockchain Based Zero Knowledge Proof Protocol For Privacy Preserving Healthcare Data Sharing Myeong, Go Eun; Ram, Kim Sa
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.296

Abstract

The rise of digital healthcare has intensified concerns over data privacy, particularly in cross-institutional medical data exchanges. This study introduces a blockchain-based protocol leveraging Zero-Knowledge Proofs (ZKP), specifically zk-SNARK, to enable verifiable yet privacy-preserving health data sharing. Built on a permissioned Ethereum blockchain, the protocol ensures that medical data validity can be confirmed without disclosing sensitive content. System implementation involves Python-based zk-circuits, smart contracts in Solidity, and RESTful APIs supporting HL7 FHIR formats for interoperability. Performance evaluations show promising results: proof verification times remained under 100 ms, with average proof sizes below 2 KB, even under complex transaction scenarios. Gas consumption analysis indicates a trade-off: ZKP-enabled transactions consumed approximately 93,000 gas units, compared to 52,800 in baseline cases. Interoperability testing across 10 FHIR-based scenarios resulted in 100% parsing success and an average data integration time of 1.7 seconds. Security assessments under white-box threat models confirmed that sensitive information remains unreconstructable, preserving patient confidentiality. Compared to previous implementations using zk-STARK, this protocol offers a 30% improvement in verification efficiency and a 45% reduction in proof size. The novelty lies in combining lightweight ZKP mechanisms with an interoperability-focused design, tailored for realistic hospital infrastructures. This research delivers a scalable, standards-compliant architecture poised to advance secure digital healthcare ecosystems while complying with regulations like GDPR.
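The protocol's core interaction, proving a claim about medical data without handing the data over up front, is hard to reproduce briefly, since practical zk-SNARK tooling involves circuit compilers and a trusted setup. The toy Python sketch below substitutes a salted hash commitment to show only the commit/verify shape of the exchange; note that a real zk-SNARK additionally avoids ever opening the record to the verifier, which this stand-in does not.

```python
# Toy commit/verify flow; NOT a zk-SNARK, only an illustration of the
# interaction shape (commit now, verify later against the commitment).
import hashlib
import os

def commit(record: bytes) -> tuple[bytes, bytes]:
    """Return (salt, commitment) for a record; the salt hides the content."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + record).digest()

def verify(record: bytes, salt: bytes, commitment: bytes) -> bool:
    """Check that an opened record matches the earlier commitment."""
    return hashlib.sha256(salt + record).digest() == commitment

fhir = b'{"resourceType": "Observation", "value": 120}'  # stand-in FHIR payload
salt, c = commit(fhir)
print(verify(fhir, salt, c))  # True
```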
Optimization of Smart Home Energy Consumption Using Machine Learning-Based Load Forecasting Rudiyanto, Arif Rifan; Satria, Bagas Panji; Panjaitan, Haposan Daniel
Journal of Technology Informatics and Engineering Vol. 4 No. 2 (2025): AUGUST | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i2.437

Abstract

The growing demand for energy efficiency in smart homes necessitates accurate short-term load forecasting to enable adaptive scheduling and optimal resource allocation. Traditional forecasting models, such as Random Forest, have demonstrated limited capability in capturing sequential dependencies, especially under fluctuating consumption behaviors typical of residential environments. This study aims to compare the forecasting performance of RF and Long Short-Term Memory (LSTM) models in predicting household energy consumption, to identify the most suitable approach for intelligent energy management systems. A quantitative experimental design was adopted using a publicly available dataset, which underwent preprocessing including time normalization and unit conversion. Both models were evaluated using Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) to assess forecasting accuracy. The LSTM model achieved a lower MAE of 3.2 and RMSE of 4.1, significantly outperforming the RF model, which recorded an MAE of 6.5 and RMSE of 8.4. Additionally, during peak load conditions, LSTM achieved 89.7% accuracy, compared to 72.4% for RF, further emphasizing its superior adaptability to time-sensitive variations. The results confirm that LSTM is more effective in modeling temporal patterns and handling volatility in household energy usage. This research contributes to the field by reinforcing the applicability of deep learning for real-time energy forecasting, offering valuable insights for the development of smart home systems. Future studies may expand this work by integrating hybrid optimization techniques and exploring multi-household scenarios for broader scalability.
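The two error metrics used in the comparison have standard definitions: MAE = mean(|y − ŷ|) and RMSE = sqrt(mean((y − ŷ)²)). A minimal NumPy sketch with illustrative readings:

```python
# MAE and RMSE as used to compare the RF and LSTM forecasts above.
import numpy as np

def mae(actual, predicted):
    return np.mean(np.abs(actual - predicted))

def rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))

y_true = np.array([210.0, 195.0, 230.0])   # illustrative consumption readings
y_pred = np.array([205.5, 199.0, 224.0])
print(mae(y_true, y_pred), rmse(y_true, y_pred))
```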
Enhancing Performance Using New Hybrid Intrusion Detection System Candra Supriadi; Charli Sitinjak; Fujiama Diapoldo Silalahi; Nia Dharma Pertiwi; Sigit Umar Anggono
Journal of Technology Informatics and Engineering Vol. 1 No. 2 (2022): August: Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v1i2.134

Abstract

Intrusion Detection Systems (IDS) are an efficient defense against both network and host attacks, as they allow network/host administrators to detect policy violations. However, traditional IDS are unreliable against new malicious attacks, are inefficient at analyzing large volumes of data such as logs, and produce many false positives and false negatives. Several techniques can improve IDS quality, and data mining is an important one for extracting useful information from large amounts of noisy, random data. The purpose of this study is to combine three data mining techniques to reduce overhead and improve efficiency in an intrusion detection system (IDS). The proposed combination pairs hierarchical clustering with two classification methods (C5, CHAID). The designed IDS is evaluated against the standard KDD'99 (Knowledge Discovery and Data Mining) dataset, which is widely used to assess the efficacy of intrusion detection systems. The suggested system can detect intrusions and categorize them into four categories: probe, DoS, U2R (User to Root), and R2L (Remote to Local). The results show that the proposed IDS performs well in terms of both accuracy and efficiency.
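A hedged sketch of the two-stage idea, cluster first and then classify within each cluster, is shown below using scikit-learn. AgglomerativeClustering and DecisionTreeClassifier stand in for the paper's hierarchical clustering and C5/CHAID models, and the random data stands in for KDD'99 features.

```python
# Cluster traffic records, then train a per-cluster classifier.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(300, 10)            # stand-in for KDD'99 features
y = np.random.randint(0, 5, 300)       # normal, probe, DoS, U2R, R2L

clusters = AgglomerativeClustering(n_clusters=3).fit_predict(X)
models = {}
for c in np.unique(clusters):
    idx = clusters == c
    models[c] = DecisionTreeClassifier(max_depth=8).fit(X[idx], y[idx])

# Route a new record through the nearest cluster's model (illustrative:
# assignment by distance to cluster centroids from the training data)
centroids = np.stack([X[clusters == c].mean(axis=0) for c in np.unique(clusters)])
x_new = np.random.rand(1, 10)
c_new = np.argmin(np.linalg.norm(centroids - x_new, axis=1))
print(models[c_new].predict(x_new))
```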
A DIGITAL PRINTING APPLICATION AS AN EXPRESSION IDENTIFICATION SYSTEM. Arman Arman; Prasetya Prasetya; Feny Nurvita Arifany; Fertilia Budi Pradnyaparamita; Joni Laksito
Journal of Technology Informatics and Engineering Vol. 1 No. 2 (2022): August: Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v1i2.135

Abstract

Human-Computer Interaction (HCI), a growing research field in science and engineering, aims to provide a natural way for humans to use computers as tools. Humans prefer to interact with each other mainly through speech, but also through facial expressions and gestures, for certain parts of speech and for displays of emotion. A person's identity, age, gender, and emotional state can be obtained from their face. The impression we receive from the expression reflected on a face affects our interpretation of the spoken word and even our attitude toward the speaker. Although emotion recognition is an easy task for humans, it remains difficult for computers to recognize a user's emotional state. Advances in this area promise to equip our technological environment with means for more effective interaction with humans, and it is hoped that the impact of facial expression recognition on cognition will grow rapidly in the future. In recent years, the adoption of digital printing has increased rapidly, and its quality has improved significantly; digital printing offers fast delivery and needs-based costs. This article describes a sophisticated combined-classifier approach: an empirical study of ensembles, stacking, and voting. These three approaches were tested on Naive Bayes (NB), Kernel Naive Bayes (kNB), Neural Network (NN), Auto MultiLayer Perceptron (Auto MLP), and Decision Tree (DT) base classifiers. The main contribution of this paper is the improvement of classification accuracy on facial expression recognition tasks. In both person-dependent and person-independent experiments, the classifier combinations gave significantly better results than individual classifiers, and experiments show that the voting technique achieves the best classification accuracy.
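A minimal scikit-learn sketch of the voting combination reported best above follows. GaussianNB, MLPClassifier, and DecisionTreeClassifier stand in for the paper's five base classifiers, and the random features and labels are placeholders for extracted expression features.

```python
# Hard-voting ensemble over three stand-in base classifiers.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(200, 20)            # stand-in expression features
y = np.random.randint(0, 6, 200)       # six emotion classes (assumed)

vote = VotingClassifier([
    ("nb", GaussianNB()),
    ("mlp", MLPClassifier(max_iter=300)),
    ("dt", DecisionTreeClassifier()),
], voting="hard").fit(X, y)
print(vote.predict(X[:3]))             # majority vote of the three models
```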
