Multi-target tracking in cluttered scenes is essential for automated driving, where downstream planning requires stable object identities and accurate state estimates. This paper presents a fully reproducible empirical and sensitivity study of a classical object-level LiDAR–camera fusion tracker that combines Joint Probabilistic Data Association (JPDA) with an Extended Kalman Filter (EKF) under a constant-velocity motion model. Because the MathWorks PandaSet subset is distributed as a ZIP archive that cannot be ingested into our execution environment, we generate a PandaSet-parameterised five-sequence synthetic dataset with explicitly specified sampling rates, measurement noise, detection probabilities, and Poisson clutter, and we report end-to-end results with fixed random seeds. Using sequential fusion (a LiDAR JPDA–EKF update followed by a camera bearing update), we obtain a mean MOTA of 0.880 and a mean position RMSE of 0.361 m, compared with a LiDAR-only JPDA–EKF MOTA of 0.883 and RMSE of 0.395 m. Fusion therefore improves localization accuracy while sometimes reducing MOTA, owing to the additional association ambiguity introduced by camera clutter; we discuss this trade-off in terms of downstream use cases that prioritize state accuracy. Sensitivity sweeps show that probabilistic association degrades more gracefully than hard nearest-neighbor assignment as clutter increases, and they delineate the regimes in which camera information is beneficial. A camera-only bearing tracker is included as a diagnostic baseline rather than a competitive approach; as expected given its observability limits, it is unreliable under the studied clutter conditions. The dataset specification, parameters, and reporting artefacts together form a reproducible template for diagnosing JPDA–EKF tracking and object-level fusion.
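To make the sequential-fusion step concrete, the sketch below shows one prediction–update cycle of a 2-D constant-velocity EKF: a linear LiDAR position update followed by a nonlinear camera bearing update. This is a minimal illustration under assumed conditions, not the paper's implementation: the JPDA association step is omitted (a single measurement per sensor is taken as given), the camera is placed at the origin, and all noise values and numbers are illustrative rather than the study's parameters.

```python
import numpy as np

# Minimal sequential-fusion sketch: 2-D constant-velocity EKF with a
# LiDAR position update followed by a camera bearing update.
# State x = [px, py, vx, vy]; all noise values here are illustrative.

def predict(x, P, dt, q=0.5):
    """Constant-velocity prediction step."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    # White-acceleration process noise (one common illustrative form).
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]])
    Q = q * G @ G.T
    return F @ x, F @ P @ F.T + Q

def update_lidar(x, P, z, r=0.1):
    """Linear Kalman update from a LiDAR position measurement z = [px, py]."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    R = r * np.eye(2)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

def update_camera_bearing(x, P, bearing, r=np.deg2rad(1.0) ** 2):
    """EKF update from a camera bearing measurement (sensor at the origin)."""
    px, py = x[0], x[1]
    h = np.arctan2(py, px)                       # predicted bearing
    d2 = px**2 + py**2
    H = np.array([[-py / d2, px / d2, 0, 0]])    # Jacobian of arctan2(py, px)
    # Wrap the angular residual to (-pi, pi] before applying the gain.
    y = np.array([np.arctan2(np.sin(bearing - h), np.cos(bearing - h))])
    S = H @ P @ H.T + r
    K = P @ H.T / S
    return x + K @ y, (np.eye(4) - K @ H) @ P

# One sequential-fusion cycle with hypothetical measurements.
x = np.array([10.0, 5.0, 1.0, 0.0])
P = np.eye(4)
x, P = predict(x, P, dt=0.1)
x, P = update_lidar(x, P, z=np.array([10.12, 5.03]))
x, P = update_camera_bearing(x, P, bearing=np.arctan2(5.0, 10.1))
```

The ordering mirrors the paper's sequential scheme: the LiDAR update refines the full position estimate first, and the bearing update then adds the camera's angular constraint, which is also why a camera-only variant of this filter (bearing updates alone) cannot observe range and serves only as a diagnostic baseline.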