Clean-label poisoning attacks pose a stealthy and potent threat to deep neural networks (DNNs), particularly when models are trained on publicly available or outsourced data. Among these attacks, the Bullseye Polytope method is highly transferable and can evade state-of-the-art defenses such as deep k-NN. To counter this threat, we propose Poison Image Traceback via Feature Clustering (PIFC-CLD), a novel forensic approach that leverages Euclidean distances in feature space to detect and trace clean-label attacks on DNNs. PIFC-CLD exploits the geometric consistency of feature representations to identify the poisoned samples responsible for model misclassifications. Unlike traditional majority-vote defenses, PIFC-CLD clusters samples in feature space and flags as poisoned those lying closest, in Euclidean distance, to misclassified targets. We evaluate our approach under Bullseye Polytope attack scenarios on the CIFAR-10 dataset with WideResNet architectures. PIFC-CLD achieves 99% precision, 95% recall, and a 96% F1 score at k = 25 and ε = 0.2, demonstrating robust performance against Bullseye Polytope attacks. Furthermore, the algorithm exhibits strong resilience to parameter variations while minimizing false positives and preserving model integrity. This work bridges the gap between digital forensics and adversarial machine learning, offering a lightweight, model-agnostic, and interpretable solution for secure model training in adversarial environments.
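The core detection rule described above (proximity of training samples to a misclassified target in feature space, governed by a neighborhood size k and a distance threshold ε) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's verified implementation: the function name `trace_poisons` and the exact rule (take the k nearest training features to the target, then keep those within radius ε) are assumptions introduced here for clarity.

```python
import numpy as np

def trace_poisons(train_feats, target_feat, k=25, eps=0.2):
    """Illustrative sketch of Euclidean-distance traceback.

    Given penultimate-layer features of the training set and of a
    misclassified target, rank training samples by Euclidean distance
    to the target and flag the k nearest whose distance falls below
    eps as suspected clean-label poisons. The combination of a k-nearest
    step with an eps radius is an assumption, not the paper's exact rule.
    """
    # Euclidean (L2) distance from every training feature to the target.
    dists = np.linalg.norm(train_feats - target_feat, axis=1)
    # Indices of the k nearest training samples in feature space.
    nearest = np.argsort(dists)[:k]
    # Keep only those within the eps radius of the target.
    return [int(i) for i in nearest if dists[i] < eps]

# Toy demo: 3 synthetic "poison" features placed near the target,
# 97 clean features placed far away in an 8-dimensional feature space.
rng = np.random.default_rng(0)
target = np.ones(8)
poisons = target + 0.01 * rng.standard_normal((3, 8))  # very close to target
clean = rng.standard_normal((97, 8)) + 5.0             # far from target
feats = np.vstack([poisons, clean])
flagged = trace_poisons(feats, target, k=25, eps=0.2)
print(sorted(flagged))  # the three nearby samples: [0, 1, 2]
```

In a real setting, `train_feats` and `target_feat` would come from a forward pass through the defended model's penultimate layer rather than synthetic vectors, and the flagged indices would be handed to a forensic review or filtering step before retraining.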