Contact Name
Johan Reimon Batmetan
Contact Email
garuda@apji.org
Phone
+6285885852706
Journal Mail Official
danang@stekom.ac.id
Editorial Address
Jl. Majapahit No.304, Pedurungan Kidul, Kec. Pedurungan, Semarang, Provinsi Jawa Tengah, 52361
Location
Kota Semarang,
Jawa Tengah
INDONESIA
Journal of Technology Informatics and Engineering
ISSN: 2961-9068 | EISSN: 2961-8215 | DOI prefix: 10.51903
Core Subject: Science
Subjects: Power Engineering, Telecommunication Engineering, Computer Engineering, Control and Computer Systems, Electronics, Information Technology, Informatics, Data and Software Engineering, Biomedical Engineering
Articles in Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering: 10 documents
Computational Fluid Dynamics (CFD) Optimization in Smart Factories: AI-Based Predictive Modelling Ibrahim, Said Maulana; Najmi, M. Ikhwan
Journal of Technology Informatics and Engineering, Vol. 4 No. 1 (2025): APRIL
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.264

Abstract

In the era of Industry 4.0, optimizing fluid flow systems in smart factories is essential to improve energy efficiency and operational stability. Traditional Computational Fluid Dynamics (CFD) simulations provide accurate fluid flow analysis but require extensive computational resources and long processing times, making real-time applications challenging. To address this limitation, this study aims to develop an AI-based predictive model for CFD simulations, utilizing Convolutional Neural Networks (CNN) and Extreme Gradient Boosting (XGBoost) to accelerate the estimation of fluid flow characteristics in industrial environments. The research methodology involves generating CFD simulation datasets, preprocessing data, and training AI models to predict key fluid parameters such as pressure, velocity, and temperature. The evaluation results show that CNN achieves a Mean Squared Error (MSE) of 0.0025 and a Root Mean Squared Error (RMSE) of 0.05, outperforming XGBoost, which records an MSE of 0.0030 and an RMSE of 0.055. Moreover, CNN predicts fluid dynamics in just 15.2 seconds, while XGBoost achieves results in 10.5 seconds, compared to the 1200.5 seconds required by traditional CFD simulations. These findings highlight the potential of AI in reducing computation time by over 98%, making real-time fluid flow analysis feasible in industrial settings. This study contributes to the advancement of AI-integrated CFD modeling, demonstrating that AI can significantly enhance the efficiency of fluid dynamics analysis without compromising accuracy. Future research should focus on expanding AI models to handle more complex flow conditions and integrating AI with smart factory design tools for real-time optimization.
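The abstract compares surrogate models by MSE and RMSE against full CFD results. A minimal sketch of that comparison (the function and toy values are illustrative, not taken from the paper):

```python
import numpy as np

def mse_rmse(y_true, y_pred):
    """Return (MSE, RMSE) between ground-truth CFD values and surrogate predictions."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    mse = float(np.mean(err ** 2))
    return mse, float(np.sqrt(mse))

# Toy example: pressure values from a full CFD run vs. a surrogate model
cfd_pressures = [1.00, 0.98, 1.05, 1.10]
surrogate_pressures = [1.01, 0.97, 1.06, 1.08]
mse, rmse = mse_rmse(cfd_pressures, surrogate_pressures)
```

The same two numbers are what the paper reports for CNN (0.0025 / 0.05) and XGBoost (0.0030 / 0.055).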
Semantic Role Labeling in Neural Machine Translation Addressing Polysemy and Ambiguity Challenges Qin, Yan

DOI: 10.51903/jtie.v4i1.274

Abstract

The persistent challenges of polysemy and ambiguity continue to hinder the semantic accuracy of Neural Machine Translation (NMT), particularly in language pairs with distinct syntactic structures. While transformer-based models such as BERT and GPT have achieved notable progress in capturing contextual word meanings, they still fall short in understanding explicit semantic roles. This study aims to address this limitation by integrating Semantic Role Labeling (SRL) into a Transformer-based NMT framework to enhance semantic comprehension and reduce translation errors. Using a parallel corpus of 100,000 English-Indonesian and English-Japanese sentence pairs, the proposed SRL-enhanced NMT model was trained and evaluated against a baseline Transformer NMT. The integration of SRL enabled the model to annotate semantic roles, such as agent, patient, and instrument, which were fused with encoder representations through semantic-aware attention mechanisms. Experimental results demonstrate that the SRL-integrated model significantly outperformed the standard NMT model, improving BLEU scores by 6.2 points (from 32.5 to 38.7), METEOR scores by 6.3 points (from 58.5 to 64.8), and reducing the TER by 5.8 points (from 45.1 to 39.3). These results were statistically validated using a paired t-test (p < 0.05). Furthermore, qualitative analyses confirmed SRL's effectiveness in resolving lexical ambiguities and syntactic uncertainties. Although SRL integration increased inference time by 12%, the performance trade-off was deemed acceptable for applications requiring higher semantic fidelity. The novelty of this research lies in the architectural fusion of SRL with transformer-based attention layers in NMT, a domain seldom explored in prior studies. Moreover, the model demonstrates robust performance across linguistically divergent language pairs, suggesting its broader applicability. 
This work contributes to the advancement of semantically aware translation systems and paves the way for future research in unsupervised SRL integration and multilingual scalability.
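The abstract describes fusing SRL annotations with encoder representations through semantic-aware attention. A minimal numpy sketch of one plausible fusion (all dimensions, the embedding table, and the additive fusion are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 tokens, 8-dim encoder states, 3 role labels
# (e.g. agent, patient, instrument).
tokens, d = 4, 8
encoder_states = rng.normal(size=(tokens, d))
role_ids = np.array([0, 1, 2, 1])        # one SRL tag per token
role_embed = rng.normal(size=(3, d))     # learned role-embedding table

# Fuse role information into the encoder output before attention.
fused = encoder_states + role_embed[role_ids]

# Scaled dot-product self-attention over the fused representations.
scores = fused @ fused.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
context = weights @ fused                # (tokens, d) attended output
```

The point of the sketch is only the data flow: role labels become vectors, are combined with token states, and the attention operates on the combined representation.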
Lightweight Deepfake Detection on Mobile Devices Using Attention-Enhanced MobileNet and Frequency Domain Analysis Amen, Mohammad; Ranam, Mohammed Lauwl

DOI: 10.51903/jtie.v4i1.275

Abstract

The rapid advancement of deepfake technology has raised significant concerns regarding misinformation, privacy breaches, and digital fraud. Existing deepfake detection models, particularly those based on deep learning, often require high computational resources, making them unsuitable for real-time applications on mobile devices. This study aims to develop a lightweight deepfake detection model that enhances accuracy while maintaining computational efficiency. To achieve this, we propose a hybrid approach that integrates Fast Fourier Transform (FFT), MobileNet, and an Attention mechanism. The FFT component enables frequency-domain analysis to detect subtle deepfake artifacts, while MobileNet provides a lightweight convolutional backbone, and the Attention layer enhances feature extraction. The proposed model was evaluated on a benchmark deepfake dataset, and the results demonstrated its superior performance compared to the standard MobileNet model. Specifically, the model achieved an accuracy of 94.2%, an F1-score of 93.8%, and a computational efficiency improvement of 27.5% in comparison to conventional CNN-based approaches. These findings indicate that the integration of FFT and Attention mechanisms significantly enhances the model's capability to distinguish real and manipulated media while reducing computational overhead. The contribution of this study lies in presenting a deepfake detection model that balances accuracy and efficiency, making it suitable for deployment in mobile and resource-constrained environments. Future research should explore further optimization for energy efficiency, the adoption of lightweight Transformer architectures, and extensive testing on diverse datasets to improve robustness against real-world variations.
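The FFT component described above analyzes images in the frequency domain, where deepfake artifacts often show up as spectral irregularities. A minimal sketch of such a feature extractor (the function and toy image are illustrative, not the paper's pipeline):

```python
import numpy as np

def frequency_features(image):
    """Log-magnitude spectrum of a grayscale image, centered with fftshift.
    Irregularities in the high-frequency region are a common deepfake cue."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

# Toy 8x8 "image": a smooth gradient (a real pipeline would feed face crops)
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
feats = frequency_features(img)
```

In a hybrid model like the one described, a map such as `feats` would be stacked with (or fed alongside) the RGB input into the MobileNet backbone.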
Transfer Learning Approach for Sentiment Analysis in Low-Resource Austronesian Languages Using Multilingual BERT Hao, Li Wen; Liu, Robert Kuan

DOI: 10.51903/jtie.v4i1.276

Abstract

Sentiment analysis for low-resource languages, particularly Austronesian languages, remains challenging due to the limited availability of annotated datasets. Traditional approaches often struggle to achieve high accuracy, necessitating strategies like cross-lingual transfer and data augmentation. While multilingual models such as mBERT offer promising results, their performance heavily depends on fine-tuning techniques. This study aims to improve sentiment analysis for Austronesian languages by fine-tuning mBERT with augmented training data. The proposed method leverages cross-lingual transfer learning to enhance model robustness, addressing the scarcity of labeled data. Experiments were conducted using a dataset enriched with augmentation techniques such as back-translation and synonym replacement. The fine-tuned mBERT model achieved an accuracy of 92%, outperforming XLM-RoBERTa at 91.41%, while mT5 obtained the highest accuracy at 99.61%. Improvements in precision, recall, and F1-score further validated the model’s effectiveness in capturing subtle sentiment variations. These findings demonstrate that combining data augmentation and cross-lingual strategies significantly enhances sentiment classification for underrepresented languages. This study contributes to the development of scalable Natural Language Processing (NLP) models for Austronesian languages. Future research should explore larger and more diverse datasets, optimize real-time implementations, and extend the approach to tasks such as Named Entity Recognition (NER) and machine translation. The promising results underscore the importance of integrating robust transfer learning techniques with comprehensive data augmentation to overcome challenges in resource-limited NLP scenarios.
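One of the augmentation techniques named above, synonym replacement, can be sketched in a few lines. The tiny synonym table and function below are illustrative only; the paper's pipeline (back-translation, real lexicons) is not reproduced here:

```python
import random

# Toy English synonym table for illustration; a real pipeline would use a
# curated lexicon for the target Austronesian language.
SYNONYMS = {"good": ["great", "fine"], "bad": ["poor", "awful"]}

def synonym_replace(sentence, rng=None):
    """Replace each word that has synonyms with a randomly chosen one."""
    rng = rng or random.Random(0)  # fixed seed for reproducible augmentation
    out = []
    for word in sentence.split():
        choices = SYNONYMS.get(word.lower())
        out.append(rng.choice(choices) if choices else word)
    return " ".join(out)

augmented = synonym_replace("the movie was good but the ending was bad")
```

Each augmented sentence keeps its original sentiment label, multiplying the effective training data for the low-resource language.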
Affective Gesture Recognition in Virtual Reality Using LSTM-CNN Fusion for Emotion-Adaptive Interaction Gupta, Soonya; Kumar, Deepa; Sharma, Shiva

DOI: 10.51903/jtie.v4i1.278

Abstract

Emotion recognition in Virtual Reality (VR) has become increasingly relevant for enhancing immersive user experiences and enabling emotionally responsive interactions. Traditional approaches that rely on facial expressions or vocal cues often face limitations in VR environments due to occlusion by head-mounted displays and restricted audio inputs. This study aims to develop an emotion recognition model based on body gestures using a hybrid deep learning architecture combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM). The CNN component extracts spatial features from skeletal data, while the LSTM processes the temporal dynamics of the gestures. The proposed model was trained and evaluated using a benchmark VR gesture-emotion dataset annotated with five distinct emotional states: happy, sad, angry, neutral, and surprised. Experimental results show that the CNN-LSTM model achieved an overall accuracy of 89.4%, with precision and recall scores of 88.7% and 87.9%, respectively. These findings demonstrate the model’s ability to generalize across various gesture patterns with high reliability. The integration of spatial and temporal features proves effective in capturing subtle emotional expressions conveyed through movement. The contribution of this research lies in offering a robust and non-intrusive method for emotion detection tailored to immersive VR settings. The model opens potential applications in virtual therapy, training simulations, and affective gaming, where real-time emotional feedback can significantly enhance system adaptiveness and user engagement. Future work will explore real-time implementation, multimodal sensor fusion, and advanced architectures such as attention mechanisms for further performance improvements.
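A CNN-LSTM pipeline like the one described consumes fixed-length windows of skeletal frames: the CNN sees the per-frame joint layout, the LSTM sees the window's frame order. A minimal windowing sketch (shapes and parameters are illustrative, not from the paper):

```python
import numpy as np

def make_windows(sequence, window, stride):
    """Slice a (frames, joints, coords) skeletal sequence into fixed-length
    overlapping windows for a CNN (spatial) + LSTM (temporal) pipeline."""
    frames = sequence.shape[0]
    starts = range(0, frames - window + 1, stride)
    return np.stack([sequence[s:s + window] for s in starts])

# Toy sequence: 10 frames, 5 joints, 3D coordinates per joint
seq = np.zeros((10, 5, 3))
windows = make_windows(seq, window=4, stride=2)  # shape (4, 4, 5, 3)
```

Each window would then be labeled with one of the five emotional states for training.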
Real-Time Enhancement of Low-Light Images Using Generative Adversarial Networks (GANs) Zhou, Feng; Qiao, Ying; Li, Quanmin

DOI: 10.51903/jtie.v4i1.279

Abstract

Low-light image enhancement plays a crucial role in fields such as surveillance, photography, and medical imaging, where inadequate lighting significantly reduces image quality, leading to loss of detail and increased noise. Traditional enhancement methods, such as histogram equalization and Retinex, struggle to preserve fine details and often amplify noise, limiting their effectiveness in real-world applications. To address these issues, this study proposes a Generative Adversarial Networks (GANs)-based model to enhance low-light images in real-time while maintaining high visual fidelity. The model aims to improve contrast, reduce noise, and retain image structure more effectively than conventional methods. The proposed GAN model is trained using the LOL and SID datasets and evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). Experimental results show that the method achieves a PSNR of 28.4 dB and SSIM of 0.91, outperforming histogram equalization (PSNR: 18.5 dB, SSIM: 0.65) and Retinex (PSNR: 20.3 dB, SSIM: 0.72). Although the model operates in real-time, its inference time of 35.6 ms per image suggests further optimization to support edge computing applications. This study demonstrates that GAN-based enhancement significantly improves low-light images by preserving structural integrity while reducing noise. Future research should focus on optimizing the model for faster processing, experimenting with larger and more diverse datasets, and integrating the system into real-world applications such as automated surveillance and smart camera technologies.
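The abstract scores enhancement quality with PSNR (and SSIM). The PSNR part is a one-line formula; a sketch with toy values (the function and example images are illustrative, not the paper's evaluation code):

```python
import numpy as np

def psnr(reference, enhanced, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    ref = np.asarray(reference, dtype=float)
    enh = np.asarray(enhanced, dtype=float)
    mse = np.mean((ref - enh) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy images in [0, 1]: a uniform error of 0.01 gives MSE = 1e-4, i.e. 40 dB
ref = np.full((4, 4), 0.5)
noisy = ref + 0.01
score = psnr(ref, noisy)
```

Higher is better: the paper's 28.4 dB for the GAN versus 18.5 dB for histogram equalization corresponds to roughly a tenfold lower MSE.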
Integrative Deep Learning Architecture for High-Accuracy Medical Image Segmentation: Combining U-Net, ResNet, and Transformers Sholekhah, Devi Zakiyatus; Noviar, Dian

DOI: 10.51903/jtie.v4i1.288

Abstract

Medical image segmentation plays a vital role in diagnosis and treatment planning by extracting clinically relevant information from imaging data. Conventional methods often struggle with variations in anatomical structure and imaging quality, leading to suboptimal segmentation. Recent advancements in Deep Learning, particularly Convolutional Neural Networks (CNNs) and Transformers, have improved segmentation accuracy; however, individual models such as U-Net, ResNet, and Transformer still face limitations in preserving spatial details, extracting deep features, and modeling long-range dependencies. This study proposes a hybrid Deep Learning model that integrates U-Net, ResNet, and Transformer to overcome these challenges and enhance segmentation performance. The proposed hybrid model was evaluated on several publicly available datasets, including BraTS, ISIC, and DRIVE, using Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) as performance metrics. Experimental results indicate that the hybrid model achieved a DSC of 0.92 and an IoU of 0.86, outperforming U-Net (DSC: 0.82, IoU: 0.75), ResNet (DSC: 0.85, IoU: 0.78), and Transformer (DSC: 0.88, IoU: 0.80). Additionally, the model maintained an inference time of 55 ms per image, demonstrating its potential for real-time applications. This study highlights the benefits of combining CNN-based and Transformer-based architectures to capture both local details and global context, providing an effective and efficient solution for medical image segmentation.
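The two metrics used above, Dice Similarity Coefficient and Intersection over Union, are standard overlap measures for binary masks. A minimal sketch (the function and toy masks are illustrative):

```python
import numpy as np

def dice_iou(pred, target):
    """Dice (2|A∩B| / (|A|+|B|)) and IoU (|A∩B| / |A∪B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    return float(dice), float(inter / union)

# Toy 2x2 masks: one overlapping pixel
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
dice, iou = dice_iou(pred, target)
```

Dice is always at least as large as IoU for the same masks, which is why the paper's DSC (0.92) exceeds its IoU (0.86).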
AI-Driven Adaptive Radar Systems for Real-Time Target Tracking in Urban Environments Ghofur, Muhammad Jamal Udin; Riyanto, Eko

DOI: 10.51903/jtie.v4i1.289

Abstract

Radar systems play a crucial role in target tracking within urban environments, where challenges such as clutter, multipath effects, and electromagnetic interference significantly impact detection accuracy. Traditional radar methods often struggle to adapt to dynamic urban conditions, leading to decreased reliability in real-time target tracking. This study aims to develop and evaluate an AI-driven adaptive radar system that enhances tracking accuracy in urban settings. The research employs a quantitative approach using simulations to model radar signal processing under various environmental conditions. The AI model, based on Convolutional Neural Networks (CNN), is trained to optimize radar performance by filtering out noise and dynamically adjusting detection parameters. The results indicate that the AI-based radar system achieves a tracking accuracy of 95.2%, significantly outperforming traditional radar systems, which only reach 80% accuracy. Additionally, the AI-enhanced radar reduces response time to 120 milliseconds, compared to 250 milliseconds in conventional systems, demonstrating improved real-time processing capabilities. The system also exhibits greater resilience to high-clutter environments, maintaining stable target detection despite signal interference. These findings highlight the potential of AI in enhancing radar functionality for applications such as surveillance, traffic monitoring, and security. Future research should focus on integrating AI-driven radar with real-world radar hardware, exploring multi-sensor fusion, and refining adaptive learning techniques to further optimize tracking performance in complex environments.
Hybrid Explainable AI (XAI) Framework for Detecting Adversarial Attacks in Cyber-Physical Systems Taufik, Mohammad; Aziz, Mohammad Saddam; Fitriana, Aulia

DOI: 10.51903/jtie.v4i1.295

Abstract

Cyber-Physical Systems (CPS) are increasingly deployed in critical infrastructure yet remain vulnerable to adversarial attacks that manipulate sensor data to mislead AI-based decision-making. These threats demand not only high-accuracy detection but also transparency in model reasoning. This study proposes a Hybrid Explainable AI (XAI) Framework that integrates Convolutional Neural Networks (CNN), SHAP-based feature interpretation, and rule-based reasoning to detect adversarial inputs in CPS environments. The framework is tested on two simulation scenarios: industrial sensor networks and autonomous traffic sign recognition. Using datasets of 10,000 samples (50% adversarial via FGSM and PGD), the model achieved an accuracy of 97.25%, precision of 96.80%, recall of 95.90%, and F1-score of 96.35%. SHAP visualizations effectively distinguished between normal and adversarial inputs, and the added explainability module increased inference time by only 8.5% over the baseline CNN (from 18.5 ms to 20.1 ms), making it suitable for real-time CPS deployment. Compared to prior methods (e.g., CNN + Grad-CAM, Random Forest + LIME), the proposed hybrid framework demonstrates superior performance and interpretability. The novelty of this work lies in its tri-level integration of predictive accuracy, explainability, and rule-based logic within a single real-time detection system—an approach not previously applied in CPS adversarial defense. This research contributes toward trustworthy AI systems that are robust, explainable, and secure by design.
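Half of the evaluation set above was generated with FGSM. The attack itself is a one-liner: perturb the input along the sign of the loss gradient. A minimal sketch on a toy linear model (the model, weights, and epsilon are illustrative, not the paper's setup):

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(dLoss/dx)."""
    return x + eps * np.sign(grad)

# Toy linear "model": for loss = w . x, the gradient w.r.t. x is simply w.
w = np.array([0.5, -2.0, 0.0])
x = np.array([1.0, 1.0, 1.0])
x_adv = fgsm(x, grad=w, eps=0.1)  # [1.1, 0.9, 1.0]
```

PGD, the other attack named in the abstract, iterates this step with projection back into an epsilon-ball; the detection framework's job is to flag inputs like `x_adv`.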
Blockchain Based Zero Knowledge Proof Protocol For Privacy Preserving Healthcare Data Sharing Myeong, Go Eun; Ram, Kim Sa

DOI: 10.51903/jtie.v4i1.296

Abstract

The rise of digital healthcare has intensified concerns over data privacy, particularly in cross-institutional medical data exchanges. This study introduces a blockchain-based protocol leveraging Zero-Knowledge Proofs (ZKP), specifically zk-SNARK, to enable verifiable yet privacy-preserving health data sharing. Built on a permissioned Ethereum blockchain, the protocol ensures that medical data validity can be confirmed without disclosing sensitive content. System implementation involves Python-based zk-circuits, smart contracts in Solidity, and RESTful APIs supporting HL7 FHIR formats for interoperability. Performance evaluations show promising results: proof verification times remained under 100 ms, with average proof sizes below 2 KB, even under complex transaction scenarios. Gas consumption analysis indicates a trade-off: ZKP-enabled transactions consumed approximately 93,000 gas units, compared to 52,800 in baseline cases. Interoperability testing across 10 FHIR-based scenarios resulted in 100% parsing success and an average data integration time of 1.7 seconds. Security assessments under white-box threat models confirmed that sensitive information remains unreconstructable, preserving patient confidentiality. Compared to previous implementations using zk-STARK, this protocol offers a 30% improvement in verification efficiency and a 45% reduction in proof size. The novelty lies in combining lightweight ZKP mechanisms with an interoperability-focused design, tailored for realistic hospital infrastructures. This research delivers a scalable, standards-compliant architecture poised to advance secure digital healthcare ecosystems while complying with regulations like GDPR.
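A full zk-SNARK is far beyond a few lines, but the commit-then-verify shape of the protocol can be illustrated with a plain hash commitment. This is NOT a zero-knowledge proof (a real ZKP proves statements about the secret without ever revealing it; here the secret is disclosed at verification) and is only a vastly simplified sketch of the hiding/binding intuition:

```python
import hashlib
import os

def commit(secret: bytes):
    """Hash commitment: publish H(nonce || secret); the secret stays hidden
    until the committer chooses to open the commitment."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + secret).hexdigest(), nonce

def verify(commitment: str, nonce: bytes, secret: bytes) -> bool:
    """Check that the opened (nonce, secret) pair matches the commitment."""
    return hashlib.sha256(nonce + secret).hexdigest() == commitment

c, n = commit(b"patient-record-hash")
ok = verify(c, n, b"patient-record-hash")   # matches
bad = verify(c, n, b"tampered-record")      # does not match
```

In the actual protocol, a zk-SNARK circuit replaces the "open and reveal" step: the verifier learns only that the committed data satisfies the stated property.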
