Xin, Qi
Unknown Affiliation

Published: 3 Documents

Articles

Hybrid Cloud Architecture for Efficient and Cost-Effective Large Language Model Deployment Xin, Qi
Journal of Information System and Informatics Vol 7 No 3 (2025): September
Publisher : Universitas Bina Darma

DOI: 10.51519/journalisi.v7i3.1170

Abstract

Large Language Models (LLMs) have achieved remarkable success across natural language tasks, but their enormous computational requirements pose challenges for practical deployment. This paper proposes a hybrid cloud–edge architecture to deploy LLMs in a cost-effective and efficient manner. The proposed system employs a lightweight on-premise LLM to handle the bulk of user requests, and dynamically offloads complex queries to a powerful cloud-hosted LLM only when necessary. We implement a confidence-based routing mechanism to decide when to invoke the cloud model. Experiments on a question-answering use case demonstrate that our hybrid approach can match the accuracy of a state-of-the-art LLM while reducing cloud API usage by over 60%, resulting in significant cost savings and a ~40% reduction in average latency. We also discuss how the hybrid strategy enhances data privacy by keeping sensitive queries on-premise. These results highlight a promising direction for organizations to leverage advanced LLM capabilities without prohibitive expense or risk, by intelligently combining local and cloud resources.
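The confidence-based routing described in the abstract can be sketched as follows. This is an illustrative reading of the mechanism, not the paper's implementation: the model interfaces (a callable returning an answer and a confidence score in [0, 1]) and the threshold value are assumptions for the sake of the example.

```python
# Hedged sketch of confidence-based routing between an on-premise LLM and a
# cloud-hosted LLM. Interfaces and the threshold are illustrative assumptions.
def route_query(query, local_llm, cloud_llm, threshold=0.8):
    """Answer locally when the local model's confidence clears the threshold;
    otherwise offload the query to the cloud model."""
    answer, confidence = local_llm(query)  # assumed to return (text, score in [0, 1])
    if confidence >= threshold:
        return answer, "local"
    return cloud_llm(query), "cloud"
```

With this shape, cloud usage drops in proportion to how often the local model is confident, which is the lever behind the reported reduction in cloud API calls.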
Uncertainty-Aware Late Fusion for 3D Perception (Confidence Calibration + Fusion Rule Learning) Xin, Qi
JTIE: Journal of Technology Informatics and Engineering, Vol. 4 No. 1 (2025): April
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.485

Abstract

Late fusion remains attractive for multi-sensor 3D perception because it preserves independent sensor pipelines, enables modular upgrades, and supports rigorous ablation experiments. This paper presents an uncertainty-aware late-fusion framework that combines per-modality confidence calibration with fusion-rule learning. We conduct full experimental evaluations on a PandaSet-style LiDAR+camera subset comprising 10 multi-frame sequences and 2,200 synchronized frames, with 49,549 annotated 3D objects across the Car, Pedestrian, and Cyclist classes. The framework calibrates LiDAR and camera confidences using temperature scaling and isotonic regression, estimates uncertainty-conditioned localization variance, and fuses associated candidates using multiple rules (max, mean, product/odds, and Dempster–Shafer) as well as a learned fusion rule (logistic regression trained on association features). On the test split, isotonic calibration reduces LiDAR Expected Calibration Error from 0.260 to 0.006 and Negative Log-Likelihood from 0.410 to 0.110, and similarly improves camera confidence quality. Although mean Average Precision (mAP) remains similar to a LiDAR-only baseline in this controlled setting, calibrated late fusion provides substantially better decision reliability at fixed confidence thresholds and maintains conservative high-precision behavior under camera dropout. These results support an engineering conclusion: confidence calibration is the highest-leverage upgrade for late fusion in safety-critical stacks, and the fusion rule can be tuned to downstream risk preferences.
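The score-level fusion rules named in the abstract (max, mean, product/odds) can be sketched for the two-modality case. This is an illustrative implementation, not the paper's code; the product/odds rule below multiplies the two scores' odds, which amounts to a naive-Bayes combination under a uniform prior.

```python
# Hedged sketch of score-level fusion rules for two calibrated detector
# confidences p1, p2 in [0, 1]. Names and clipping epsilon are illustrative.
def fuse_max(p1, p2):
    """Optimistic rule: trust the more confident modality."""
    return max(p1, p2)

def fuse_mean(p1, p2):
    """Averaging rule: equal-weight combination."""
    return 0.5 * (p1 + p2)

def fuse_odds(p1, p2, eps=1e-6):
    """Product/odds rule: multiply odds, then convert back to a probability.
    Clipping avoids division by zero at p = 0 or p = 1."""
    p1 = min(max(p1, eps), 1.0 - eps)
    p2 = min(max(p2, eps), 1.0 - eps)
    odds = (p1 / (1.0 - p1)) * (p2 / (1.0 - p2))
    return odds / (1.0 + odds)
```

Note how the rules encode different risk preferences: the odds rule sharpens agreement (two 0.8 scores fuse to about 0.94), the mean rule is conservative, and the max rule never downgrades a confident detection; these only behave as intended once the per-modality scores are calibrated.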
LiDAR–Camera Object-Level Fusion for Multi-Target Tracking Using JPDA and EKF: A Reproducible Empirical Study on a PandaSet-Parameterised Five-Sequence Dataset Xin, Qi
JTIE: Journal of Technology Informatics and Engineering, Vol. 5 No. 1 (2026): April
Publisher : University of Science and Computer Technology

DOI: 10.51903/jtie.v5i1.486

Abstract

Multi-target tracking in cluttered scenes is essential for automated driving, where downstream planning requires stable object identities and accurate state estimates. This paper provides a fully reproducible empirical and sensitivity study of a classical object-level LiDAR–camera fusion tracker that combines Joint Probabilistic Data Association (JPDA) with an Extended Kalman Filter (EKF) under a constant-velocity state model. Because the MathWorks PandaSet subset is distributed as a ZIP archive that cannot be ingested into our execution environment, we generate a PandaSet-parameterised five-sequence synthetic dataset with explicitly specified sampling rates, measurement noise, detection probabilities, and Poisson clutter, and report end-to-end results with fixed random seeds. Using sequential fusion (a LiDAR JPDA–EKF update followed by a camera bearing update), we obtain a mean MOTA of 0.880 and a mean position RMSE of 0.361 m, compared with a LiDAR-only JPDA–EKF MOTA of 0.883 and RMSE of 0.395 m. Fusion therefore improves localization accuracy while sometimes reducing MOTA due to the additional association ambiguity introduced by camera clutter; this trade-off is discussed in terms of downstream use cases that prioritize state accuracy. Sensitivity sweeps show that probabilistic association degrades more gracefully than hard nearest-neighbor assignment as clutter increases, and they delineate the regimes where camera information is beneficial. A camera-only bearing tracker is included as a diagnostic baseline (not as a competitive approach); as expected given its observability limits, it is unreliable under the studied clutter conditions. The dataset specification, parameters, and reporting artefacts form a reproducible template for diagnosing JPDA/EKF tracking and object-level fusion.
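The camera bearing update in the sequential-fusion step can be sketched for a planar constant-velocity EKF. This is an illustrative sketch, not the paper's code: the 4-state layout [x, y, vx, vy] and variable names are assumptions. Because the bearing is a scalar measurement, the innovation covariance is a scalar, so the update needs no matrix inversion.

```python
import math

def bearing_update(state, P, z_bearing, r_var):
    """One EKF update with a camera bearing measurement z = atan2(y, x) + noise.
    Hypothetical sketch: state = [x, y, vx, vy], P is its 4x4 covariance
    (nested lists), r_var is the bearing noise variance."""
    x, y, _, _ = state
    rsq = x * x + y * y
    # Jacobian of atan2(y, x) with respect to the state.
    H = [-y / rsq, x / rsq, 0.0, 0.0]
    # Innovation, wrapped to [-pi, pi] to handle the bearing discontinuity.
    residual = z_bearing - math.atan2(y, x)
    nu = math.atan2(math.sin(residual), math.cos(residual))
    # Scalar innovation covariance S = H P H^T + R, so no matrix inverse.
    PHt = [sum(P[i][j] * H[j] for j in range(4)) for i in range(4)]
    S = sum(H[i] * PHt[i] for i in range(4)) + r_var
    K = [phi / S for phi in PHt]               # Kalman gain (4-vector)
    new_state = [s + k * nu for s, k in zip(state, K)]
    # Covariance update: P <- (I - K H) P.
    new_P = [[P[i][j] - K[i] * sum(H[k] * P[k][j] for k in range(4))
              for j in range(4)] for i in range(4)]
    return new_state, new_P
```

In the sequential-fusion scheme described above, this update would be applied after the LiDAR JPDA–EKF position update, which is why camera information can tighten localization even though it adds association ambiguity under clutter.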