Contact Name
Johan Reimon Batmetan
Contact Email
garuda@apji.org
Phone
+6285885852706
Journal Mail Official
danang@stekom.ac.id
Editorial Address
Jl. Majapahit No.304, Pedurungan Kidul, Kec. Pedurungan, Semarang, Provinsi Jawa Tengah, 52361
Location
Kota Semarang,
Jawa Tengah
INDONESIA
Journal of Technology Informatics and Engineering
ISSN: 2961-9068     EISSN: 2961-8215     DOI: 10.51903
Core Subject: Science
Power Engineering, Telecommunication Engineering, Computer Engineering, Control and Computer Systems, Electronics, Information Technology, Informatics, Data and Software Engineering, Biomedical Engineering
Articles: 172 Documents
A Constrained, Data-Driven Budgeting Framework Integrating Macro Demand Forecasting and Marketing Response Modeling Lu, Yifei; Zhou, Hailin; Zhang, Yitian
Journal of Technology Informatics and Engineering Vol. 4 No. 3 (2025): DECEMBER | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i3.466

Abstract

Budgeting and financial planning & analysis (FP&A) increasingly require combining macroeconomic signals, channel-level marketing effectiveness, and hard accounting constraints into a single, auditable decision process. This paper proposes and empirically evaluates an end-to-end framework that (i) forecasts category-level demand from public macro data, (ii) learns diminishing-returns marketing response curves, and (iii) solves a constrained portfolio optimization problem to allocate marketing spend while satisfying SG&A and cash-flow guardrails consistent with real public-company statements. Using quarterly Personal Consumption Expenditures (PCE) components from FRED (durable goods, nondurable goods, and services) as a proxy for market demand, we compare seasonal naïve, SARIMAX, gradient boosting, and a multivariate VAR model in a rolling backtest (2018Q1-2025Q3). In parallel, we estimate marketing response from the Advertising dataset (TV, radio, and newspaper spend) via linear models, gradient boosting, and a Hill-function saturation model. We then calibrate financial constraints (gross margin, SG&A ratio, and operating cash-flow coverage) directly from Apple Inc.’s FY2025 Form 10-K filed with the SEC, and integrate all components into a Monte Carlo-evaluated budgeting optimizer. Results show that multivariate models improve total-demand accuracy (≈2.85% MAPE) and that nonlinear response curves indicate strong diminishing returns and negligible incremental value for newspaper spend. The constrained optimizer produces stable allocations that trade off expected operating profit and downside risk, and it highlights a practical insight: budgets that exactly meet a ratio-based cap under point forecasts may violate constraints under realistic demand uncertainty. The proposed workflow is fully reproducible from public data sources and provides a template for transparent, constraint-aware budgeting.
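The abstract names two building blocks that are easy to illustrate: a Hill-function saturation curve for channel response and a budget-constrained allocation step. The sketch below is a minimal, self-contained illustration of that idea only; the spend/response observations, channel parameters, and total budget are invented for the example and are not the paper's estimates or its actual optimizer.

```python
# Minimal sketch: fit a Hill-function (diminishing-returns) response curve for
# one channel, then allocate a fixed total budget across three channels.
# All numbers are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit, minimize

def hill(x, vmax, k, n):
    """Saturating response: vmax * x^n / (k^n + x^n)."""
    return vmax * x**n / (k**n + x**n)

# Hypothetical spend/response observations for a single channel.
spend = np.array([10, 50, 100, 150, 200, 250], dtype=float)
sales = np.array([5, 18, 27, 31, 33, 34], dtype=float)
params, _ = curve_fit(hill, spend, sales, p0=[35.0, 80.0, 1.5], maxfev=10000)

# Assumed Hill parameters for two further channels, plus the fitted one.
channel_params = [tuple(params), (20.0, 60.0, 1.2), (3.0, 40.0, 1.0)]
total_budget = 300.0

def neg_total_response(x):
    # Negative total response, so minimizing maximizes expected response.
    return -sum(hill(xi, *p) for xi, p in zip(x, channel_params))

res = minimize(
    neg_total_response,
    x0=np.full(3, total_budget / 3),
    bounds=[(0.0, total_budget)] * 3,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - total_budget}],
)
print("allocation:", np.round(res.x, 1), "expected response:", round(-res.fun, 2))
```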
Efficient Temporal Segmentation And Classification Of Short-Form Video Content Using Lightweight CNN-LSTM Architecture Tan, Ben Liu; Liem, Chstina Angel; Amen, Mohamed
Journal of Technology Informatics and Engineering Vol. 5 No. 1 (2026): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v5i1.441

Abstract

The exponential rise of short-form video platforms such as TikTok, Instagram Reels, and YouTube Shorts has transformed digital content consumption patterns, creating both opportunities and challenges in media analysis. One critical need is the efficient segmentation and classification of temporal segments within these videos to enable applications in content moderation, targeted advertising, and audience behavior research. This study proposes a lightweight deep learning architecture that integrates Convolutional Neural Networks (CNN) for visual feature extraction and Long Short-Term Memory (LSTM) networks for temporal sequence modeling. The proposed CNN-LSTM framework is optimized for computational efficiency while maintaining high classification accuracy, making it suitable for deployment in resource-constrained environments. Experimental evaluations on a curated short-form video dataset show that the model achieves competitive performance compared with larger architectures, with significant reductions in memory usage and inference time. Furthermore, the temporal segmentation module effectively isolates meaningful visual-audio segments, enabling more precise classification outcomes. The results highlight the potential of lightweight architectures to address the scalability demands of modern video analysis systems without sacrificing accuracy. This research contributes to the growing discourse on efficient multimedia processing by bridging the gap between high-performance models and practical, real-time applications in the evolving short-form video ecosystem.
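To make the described architecture concrete, here is a minimal PyTorch sketch of a lightweight CNN-LSTM segment classifier: a small CNN encodes each frame, an LSTM models the frame sequence, and a linear head predicts the segment class. Layer sizes, the number of classes, and the input resolution are illustrative assumptions and do not reproduce the paper's model.

```python
# Minimal CNN-LSTM sketch for short-form video segment classification.
# Hyperparameters below are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, num_classes=5, hidden_size=128):
        super().__init__()
        # Per-frame visual encoder, kept deliberately small.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch, 32)
        )
        # Temporal model over the sequence of frame features.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):              # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)     # final hidden state summarizes the clip
        return self.head(h_n[-1])

model = CNNLSTMClassifier()
logits = model(torch.randn(2, 8, 3, 64, 64))   # 2 clips of 8 frames each
print(logits.shape)                             # torch.Size([2, 5])
```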
Privacy-Robust Incrementality Estimation in Cookieless Settings via Uplift Modeling: Reproducible Evidence from the Hillstrom E-Mail Experiment Bai, Jingwen; Wang, Haozhe; Wu, Qiyou; Zhang, Boning
Journal of Technology Informatics and Engineering Vol. 5 No. 1 (2026): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v5i1.468

Abstract

Measuring advertising incrementality in the absence of user-level identifiers is increasingly constrained by platform policies and privacy regulations. In cookieless environments, practitioners often observe only aggregated or weak signals (e.g., cohort-level conversion counts) and must still estimate the causal lift of an intervention while quantifying uncertainty. This paper studies cookieless incrementality evaluation through the lens of uplift and individual treatment effect (ITE) modeling under explicit privacy constraints. We conduct full experimental evaluations on the MineThatData (Hillstrom) E-Mail Analytics Challenge dataset (64,000 customers in a randomized controlled experiment with three arms). We cast the task as a binary treatment problem—sending any e-mail campaign versus sending none—and compare six ITE estimators (S-, T-, X-, R-, and doubly robust learners, plus transformed-outcome regression) against cohort-only estimators that emulate cookieless measurement. The cohort estimator uses only aggregated counts and a Bayesian beta–binomial model to shrink noisy rates, and we evaluate robustness under k-anonymity thresholds and Laplace-noised differentially private aggregates. Across held-out test data, the best ID-level model (T-learner with logistic regression) achieves a Qini coefficient of 6.675 and improves the estimated policy conversion rate when targeting the top 20% of customers by predicted uplift. Cohort-only estimation retains a weaker and more variable signal; its point estimate is sensitive to privacy constraints but yields valid uncertainty intervals with 0.892 empirical coverage for a 95% interval in cohort-level validation. The results demonstrate that (i) causal lift is estimable without identifiers when randomized experimentation is available, (ii) doubly robust estimators provide strong performance and fast scoring, and (iii) privacy-preserving aggregation introduces an accuracy–privacy trade-off that can be quantified and monitored using bootstrap and Bayesian uncertainty.
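The T-learner named as the best ID-level model in the abstract has a simple structure: fit one outcome model per experimental arm and take the difference of predicted conversion probabilities as the uplift score. The sketch below shows that structure on synthetic data; it does not use the Hillstrom dataset or reproduce the reported Qini coefficient.

```python
# Minimal T-learner uplift sketch on synthetic randomized-experiment data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 4))                 # customer features
treated = rng.integers(0, 2, size=n)        # randomized assignment (any e-mail vs. none)
base = 0.05 + 0.02 * (X[:, 0] > 0)          # baseline conversion rate
lift = 0.03 * (X[:, 1] > 0)                 # heterogeneous true treatment effect
y = rng.binomial(1, np.clip(base + treated * lift, 0, 1))

# T-learner: one outcome model per arm, uplift = difference of predictions.
m_treat = LogisticRegression(max_iter=1000).fit(X[treated == 1], y[treated == 1])
m_ctrl = LogisticRegression(max_iter=1000).fit(X[treated == 0], y[treated == 0])
uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]

# Policy check: average predicted uplift among the top 20% of customers.
top = np.argsort(-uplift)[: n // 5]
print("avg predicted uplift (top 20%):", round(float(uplift[top].mean()), 4))
```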
A Comparative Study on Self-Organization in Wireless Sensor Networks Simon, Michael; Din, Salwa M.; Chib, Raja Jamal
Journal of Technology Informatics and Engineering Vol. 5 No. 1 (2026): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v5i1.483

Abstract

Wireless sensor networks (WSNs) have emerged as a critical infrastructure for distributed sensing platforms in recent years. Their effective implementation requires self-organizing features that can adapt to rapidly changing environmental conditions. Despite extensive research on individual self-organizing mechanisms (e.g., clustering, routing, and topology management), there remains a significant analytical gap in systematically comparing these approaches across key performance metrics. Our study addresses this gap by conducting a comprehensive comparative analysis of four primary self-organization (autonomous) mechanisms: clustering-based organization, dynamic routing protocols, topology adjustment strategies, and coverage reinforcement methods. Using a simulation-based methodology with the NS-3 network simulator, we tested these mechanisms across networks with 50 to 500 nodes under varying traffic loads and mobility patterns. We assessed performance using three key performance indicators (KPIs): reliability, measured by packet delivery ratio; scalability, measured by convergence time; and energy efficiency, measured by network lifetime. Our results demonstrate that clustering approaches achieve 23% better energy efficiency in static deployments, whereas distributed routing protocols provide 34% better scalability in dynamic conditions. We also observed that topology adjustment mechanisms improve reliability by 18% under high node failure rates. These findings provide clear, evidence-based guidance for selecting the right self-organization technique for specific deployment scenarios and application requirements. We recommend that future research investigate hybrid mechanisms that combine multiple approaches and explore integrating machine learning to support adaptive strategy selection under heterogeneous network conditions.
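The three KPIs named in the abstract can be computed directly from per-node simulation traces. The sketch below assumes a hypothetical trace format for illustration; it is not the paper's NS-3 output or post-processing code.

```python
# Minimal sketch of the three KPIs: packet delivery ratio (reliability),
# convergence time (scalability proxy), and network lifetime (energy proxy).
# The per-node trace entries below are invented for illustration.
nodes = [
    {"sent": 980,  "received": 941, "first_dead_round": 412, "converged_at": 3.2},
    {"sent": 1015, "received": 987, "first_dead_round": 398, "converged_at": 2.9},
    {"sent": 1002, "received": 955, "first_dead_round": 455, "converged_at": 3.5},
]

# Reliability: network-wide packet delivery ratio.
pdr = sum(n["received"] for n in nodes) / sum(n["sent"] for n in nodes)
# Scalability proxy: worst-case convergence time of the self-organization phase.
convergence_time = max(n["converged_at"] for n in nodes)
# Energy-efficiency proxy: network lifetime until the first node dies.
lifetime = min(n["first_dead_round"] for n in nodes)

print(f"PDR={pdr:.3f}, convergence={convergence_time}s, lifetime={lifetime} rounds")
```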
LLM-Driven CI Failure Diagnosis and Automated Repair: From GitHub Actions Logs to Patch Recommendation Zhang, Hanqi
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.484

Abstract

Continuous Integration (CI) pipelines surface regressions early but also produce long, noisy logs. Diagnosing a failing GitHub Actions run and drafting a safe repair patch can be time-consuming, especially when dealing with dependency drift or configuration errors. We study a practical CI-repair pipeline decomposed into three measurable tasks: (1) coarse failure-type classification, (2) retrieval-based repair (log similarity  reuse the closest historical fix diff), and (3) constrained patch generation that emits a unified diff via template+slot filling. The pipeline follows the schema and task framing of JetBrains-Research’s lca-ci-builds-repair dataset from Long Code Arena (212 samples). Because runtime restrictions in our environment prevent downloading the original Hugging Face-hosted parquet files, all quantitative results in this paper are evaluated on a locally generated proxy dataset, CI-Repair-Sim212, which matches the benchmark’s field schema and evaluation protocol. On CI-Repair-Sim212, failure-type classification reaches a ceiling (Macro-F1=1.000), whereas repair-pattern prediction remains harder (Macro-F1=0.796 with log+workflow). For patch recommendation, retrieval achieves Token-F1@1=0.898 and Pattern@1=0.783 when combining logs with workflow context, and constrained generation further improves diff similarity to Token-F1=0.923. Across tasks, adding workflow YAML context yields consistent gains, motivating hybrid CI assistants that prioritize retrieval when near-duplicate failures exist and fall back to constrained generation when close matches are absent.
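The retrieval-based repair step (task 2) amounts to log similarity search over historical failures. The sketch below illustrates one common way to realize it, using TF-IDF vectors and cosine similarity over a tiny in-memory history; the log strings, fix diffs, and the choice of TF-IDF are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of retrieval-based CI repair: embed the new failure log,
# find the most similar historical failure, and reuse its fix diff.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [
    {"log": "ModuleNotFoundError: No module named 'requests'",
     "fix_diff": "+requests==2.31.0  # add missing dependency"},
    {"log": "yaml.scanner.ScannerError: mapping values are not allowed here",
     "fix_diff": "-  runs-on ubuntu-latest\n+  runs-on: ubuntu-latest"},
]
new_log = "ERROR: ModuleNotFoundError: No module named 'numpy'"

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([h["log"] for h in history] + [new_log])

# Compare the new failure against every historical failure.
query_vec, hist_vecs = tfidf[len(history)], tfidf[:len(history)]
sims = cosine_similarity(query_vec, hist_vecs).ravel()
best = sims.argmax()
print(f"closest historical failure (sim={sims[best]:.2f}):")
print(history[best]["fix_diff"])
```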
Uncertainty-Aware Late Fusion for 3D Perception (Confidence Calibration + Fusion Rule Learning) Xin, Qi
Journal of Technology Informatics and Engineering Vol. 4 No. 1 (2025): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.485

Abstract

Late fusion remains attractive for multi-sensor 3D perception because it preserves independent sensor pipelines, enables modular upgrades, and supports rigorous ablation experiments. This paper presents an uncertainty-aware late-fusion framework that combines per-modality confidence calibration with learning a fusion rule. We conduct full experimental evaluations on a PandaSet-style LiDAR+camera subset comprising 10 multi-frame sequences and 2,200 synchronized frames, with 49,549 annotated 3D objects across the Car, Pedestrian, and Cyclist classes. The framework calibrates LiDAR and camera confidence using temperature scaling and isotonic regression, estimates uncertainty-conditioned localization variance, and fuses associated candidates using multiple rules (max, mean, product/odds, and Dempster-Shafer) as well as a learned fusion rule (logistic regression trained on association features). On the test split, isotonic calibration reduces LiDAR Expected Calibration Error from 0.260 to 0.006 and Negative Log-Likelihood from 0.410 to 0.110, and it similarly improves camera confidence quality. Although mean Average Precision (mAP) remains similar to a LiDAR-only baseline in this controlled setting, calibrated late fusion provides substantially better decision reliability at fixed confidence thresholds and maintains conservative high-precision behavior under camera dropout. These results support an engineering conclusion: confidence calibration is the highest-leverage upgrade for late fusion in safety-critical stacks, and fusion rule choice can be tuned to downstream risk preferences.
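The calibration step highlighted in this abstract (isotonic regression plus Expected Calibration Error measurement) is straightforward to sketch. The example below uses synthetic, deliberately overconfident detector scores as a stand-in for per-modality confidences; the numbers are illustrative and are not the paper's LiDAR or camera results.

```python
# Minimal sketch: calibrate raw detection confidences with isotonic regression
# and measure Expected Calibration Error (ECE) before and after.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error with equal-width confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            err += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return err

rng = np.random.default_rng(0)
raw = rng.uniform(0.3, 1.0, size=2000)               # overconfident raw scores
labels = rng.binomial(1, np.clip(raw - 0.2, 0, 1))    # true hit/miss outcomes

calibrator = IsotonicRegression(out_of_bounds="clip").fit(raw, labels)
calibrated = calibrator.predict(raw)

print(f"ECE before: {ece(raw, labels):.3f}  after: {ece(calibrated, labels):.3f}")
```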
LiDAR–Camera Object-Level Fusion for Multi-Target Tracking Using JPDA and EKF: A Reproducible Empirical Study on a PandaSet-Parameterised Five-Sequence Dataset Xin, Qi
Journal of Technology Informatics and Engineering Vol. 5 No. 1 (2026): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v5i1.486

Abstract

Multi-target tracking in cluttered scenes is essential for automated driving, where downstream planning requires stable object identities and accurate state estimates. This paper provides a fully reproducible empirical and sensitivity study of a classical object-level LiDAR–camera fusion tracker that combines Joint Probabilistic Data Association (JPDA) with an Extended Kalman Filter (EKF) under a constant-velocity state model. Because the MathWorks PandaSet subset is distributed as a ZIP archive that cannot be ingested into our execution environment, we generate a PandaSet-parameterised five-sequence synthetic dataset with explicitly specified sampling rates, measurement noise, detection probabilities, and Poisson clutter, and report end-to-end results with fixed random seeds. Using sequential fusion (LiDAR JPDA–EKF update followed by a camera bearing update), we obtain a mean MOTA of 0.880 and a mean position RMSE of 0.361 m, compared with LiDAR-only JPDA–EKF MOTA of 0.883 and RMSE of 0.395 m. Fusion, therefore, improves localization accuracy while sometimes reducing MOTA due to additional association ambiguity introduced by camera clutter; this trade-off is discussed in terms of downstream use cases that prioritize state accuracy. Sensitivity sweeps show that probabilistic association degrades more gracefully than hard nearest-neighbor assignment as clutter increases and delineate regimes where camera information is beneficial. A camera-only bearing tracker is included as a diagnostic baseline (not as a competitive approach); as expected, given the observability limits, it is not reliable under the studied clutter conditions. The dataset specification, parameters, and reporting artefacts form a reproducible template for diagnosing JPDA/EKF tracking and object-level fusion.
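For readers less familiar with the filtering backbone, the sketch below shows one predict/update cycle of a constant-velocity Kalman filter with a position-only (LiDAR-like) measurement. It omits the JPDA association and the camera bearing update that the paper studies, and all noise values and measurements are invented for illustration.

```python
# Minimal constant-velocity Kalman filter cycle (position-only update).
# Noise covariances and the measurement are illustrative assumptions.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],      # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # LiDAR-like sensor: measures position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)              # process noise (assumed)
R = 0.05 * np.eye(2)              # measurement noise (assumed)

x = np.array([0.0, 0.0, 5.0, 1.0])   # state [px, py, vx, vy]
P = np.eye(4)

# Predict.
x = F @ x
P = F @ P @ F.T + Q

# Update with one position measurement.
z = np.array([0.52, 0.11])
y = z - H @ x                          # innovation
S = H @ P @ H.T + R                    # innovation covariance
K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
x = x + K @ y
P = (np.eye(4) - K @ H) @ P
print("posterior state:", np.round(x, 3))
```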
IoT-Driven in the Banking Application Platforms Using a Real-Time SQL Injection Mitigative Measures Ngozi, Amaka Eugenia; Kalu, Oji Victor; Ikechukwu, Ezea Jonathan; Lilian, Okpalla Chidimma; Ezeh, Gloria Ngozi
Journal of Technology Informatics and Engineering Vol. 5 No. 1 (2026): APRIL | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v5i1.481

Abstract

The integration of the Internet of Things (IoT) into banking systems has revolutionized banking operations while also introducing threats, including SQL injection (SQLi) attacks. Existing defenses, such as access control mechanisms, firewalls, and signature-based Intrusion Detection Systems (IDSs), fail to detect both novel and obfuscated SQLi attempts. This research therefore developed a machine-learning-based detection framework capable of identifying SQLi attacks on IoT-driven banking platforms. A Random Forest (RF) classifier was trained and evaluated in a Python environment; Streamlit was used to deploy the model for real-time prediction, while performance was visualized through a Power BI dashboard. The model achieved 99.53% accuracy, 99.96% precision, and 98.78% recall, demonstrating its ability to detect both known and unknown SQLi patterns. The research concludes that combining behavioural analytics with a machine-learning approach is highly effective for securing IoT banking platforms and recommends periodic retraining using a deep-learning approach.
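A Random Forest SQLi detector of the kind described here is typically built on textual features of the query or request string. The sketch below uses character n-gram TF-IDF features and a tiny, invented labeled sample; the feature choice and data are illustrative assumptions, not the paper's training setup or dataset.

```python
# Minimal sketch of a Random Forest SQL-injection detector over query strings.
# The four labeled examples are illustrative only.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

queries = [
    "SELECT balance FROM accounts WHERE id = 42",
    "SELECT * FROM users WHERE name = '' OR '1'='1'",
    "UPDATE profiles SET phone = '0812' WHERE user_id = 7",
    "admin'; DROP TABLE transactions; --",
]
labels = [0, 1, 0, 1]   # 1 = SQL injection attempt

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),   # character n-grams
    RandomForestClassifier(n_estimators=100, random_state=42),
)
model.fit(queries, labels)
print(model.predict(["SELECT * FROM users WHERE id = 1 OR 1=1"]))
```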
From Hand-Drawn Sketches to Interactive Web Prototypes: A Reproducible Vision-Language Approach with Structural and Visual Consistency Evaluation Chen, Yushan; Li, Maoxi
Journal of Technology Informatics and Engineering Vol. 4 No. 2 (2025): AUGUST | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i2.490

Abstract

Service design workflows often begin with low-fidelity sketches that must be quickly translated into interactive prototypes. This paper studies the Sketch-to-Web problem: generating HTML/CSS prototypes from hand-drawn UI sketches and evaluating fidelity with both structural and visual metrics. Because the original Sketch2Code benchmark is distributed primarily as compressed artifacts that are not executable in our restricted runtime, we construct Sketch2Code-Synth, a size-matched and protocol-matched instantiation containing 731 hand-drawn-style sketches paired with 484 webpage prototypes while preserving the same sketch-to-HTML task interface. We implement a lightweight constrained sketch-to-HTML baseline (ProtoVLM) that combines HOG-based template recognition with template-conditioned HTML/CSS instantiation. We compare ProtoVLM against three baselines (kNN retrieval, heuristic computer vision layout extraction, and majority-template generation) and an oracle upper bound. Evaluation uses (i) DOM tree edit distance computed on a containment-induced layout tree, (ii) element-level IoU with Hungarian matching, and (iii) wireframe SSIM on 200×150 rasterized layouts. On the held-out test split (97 pages, 147 sketches), ProtoVLM achieves a mean tree edit distance of 2.224, mean element IoU of 0.755, and mean SSIM of 0.474. Relative to kNN retrieval, the main gain is in localization stability (IoU 0.755 vs. 0.697), while structural distance is similar (TED 2.224 vs. 2.422). Because the benchmark uses a controlled template library and wireframe renderings, the results should be interpreted as evidence on constrained layout recognition and prototype normalization rather than unconstrained real-world sketch understanding. In this setting, SSIM measures layout resemblance only, not interface realism or usability.  
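One of the evaluation metrics named here, element-level IoU with Hungarian matching, has a compact implementation: pair predicted and reference layout boxes so that total IoU is maximized, then average over the matched pairs. The boxes in the sketch below are invented for illustration; this is not the paper's evaluation harness.

```python
# Minimal sketch: element-level IoU between predicted and reference layout
# boxes, matched with the Hungarian algorithm. Boxes are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

pred = [(0, 0, 100, 30), (0, 40, 60, 120), (70, 40, 200, 120)]
ref  = [(0, 0, 100, 32), (0, 42, 58, 118), (72, 40, 198, 122)]

# Cost matrix of negative IoUs, so the assignment maximizes total IoU.
cost = np.array([[-iou(p, r) for r in ref] for p in pred])
rows, cols = linear_sum_assignment(cost)
mean_iou = -cost[rows, cols].mean()
print(f"mean element IoU: {mean_iou:.3f}")
```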
Automatic Detection and Explanation of Dark Patterns from Interface Microcopy: Empirical Comparison of BERT-Style Encoders, RoBERTa-Style Encoders, and LLM-Style Decoders on the ec-darkpattern Dataset Xu, Haosen; Chen, Yushan; Med, Aron
Journal of Technology Informatics and Engineering Vol. 4 No. 3 (2025): DECEMBER | JTIE : Journal of Technology Informatics and Engineering
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i3.491

Abstract

Dark patterns (also called deceptive design patterns) are interface choices that steer or pressure users into unintended actions such as rushed purchases, unnecessary disclosures, or hard-to-cancel subscriptions. In e-commerce, many dark patterns are expressed directly in microcopy (e.g., button labels, banners, and inline messages), which makes text-only detection attractive for scalable auditing. This paper presents a fully reproducible experimental study on ec-darkpattern, a text-based dataset of e-commerce interface strings with balanced binary labels (1,178 dark pattern vs. 1,178 non-dark pattern) and seven dark pattern categories. We compare (i) a rule-based lexicon baseline, (ii) hashed n-gram linear models, (iii) a lightweight BERT-style bidirectional transformer encoder with word tokenization, (iv) a lightweight RoBERTa-style bidirectional transformer encoder with character tokenization, and (v) an LLM-style causal decoder trained as a classifier on the same inputs. On a fixed 80/10/10 split with seed 42, the best-performing model is a hashing + linear SVM baseline (F1=0.9437, ROC-AUC=0.9810), while the BERT-style encoder achieves F1=0.9038 (ROC-AUC=0.9695), the RoBERTa-style encoder achieves F1=0.8907 (ROC-AUC=0.9573), and the LLM-style decoder achieves F1=0.7884 (ROC-AUC=0.8808). These results should be interpreted as a controlled comparison under low-resource, no-pretraining conditions on a single fixed split, rather than as a general ranking of encoder-style versus decoder-style transformers. To support explainability, we generate token-level attributions using gradient-based saliency, summarize them as key phrases, and estimate explanation consistency via top-k token overlap on an exploratory 20-instance sample (mean Jaccard up to 0.7482 between the two character-based transformers). Finally, we curate an error-case library that links misclassifications to their most influential phrases. Within this short-microcopy setting, the findings show that lexical baselines are especially strong, while transformer directionality and tokenization change both accuracy and the stability of highlighted cues.
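The strongest baseline reported in this abstract, hashed n-gram features with a linear SVM, is simple to reproduce in structure. The sketch below shows that pipeline on a few invented microcopy strings; the texts and labels are illustrative and are not samples from the ec-darkpattern dataset, and the hyperparameters are assumptions.

```python
# Minimal sketch of the hashing + linear SVM baseline for dark-pattern
# microcopy classification. Training strings and labels are illustrative.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "Only 2 left in stock - order soon!",
    "Hurry! 13 other people are looking at this deal right now",
    "Add a gift receipt to your order",
    "View your saved items",
]
labels = [1, 1, 0, 0]   # 1 = dark-pattern microcopy

model = make_pipeline(
    HashingVectorizer(ngram_range=(1, 2), n_features=2**18),  # hashed n-grams
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["Offer expires in 04:59 - don't miss out!"]))
```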
