Traditional authentication methods such as PINs and passwords remain vulnerable to theft and hacking, motivating more secure alternatives. Biometric approaches address these weaknesses, yet unimodal systems such as fingerprint or facial recognition remain prone to spoofing and environmental disturbances. This study enhances biometric reliability through a multimodal framework integrating electrocardiogram (ECG) signals and fingerprint images. Fingerprint features were extracted with three deep convolutional networks (VGG16, ResNet50, and DenseNet121), while ECG signals were segmented around the first R-peak to produce feature vectors of varying dimensions. The two modalities were fused at the feature level (early fusion) and classified with four deep learning models: Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM), Graph Convolutional Network (GCN), and Graph Attention Network (GAT). Experimental results showed that the VGG16 + LSTM and ResNet50 + LSTM combinations both achieved the highest identification accuracy of 98.75%, with DenseNet121 + MLP performing comparably. MLP and LSTM consistently outperformed GCN and GAT, confirming the suitability of sequential and feed-forward models for fused feature embeddings. By combining R-peak-based ECG segmentation with CNN-derived fingerprint features, the proposed system significantly improves classification stability and robustness. This multimodal biometric design strengthens protection against spoofing and impersonation, providing a scalable and secure authentication solution for high-security applications such as digital payments, healthcare, and IoT devices.
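
To make the described pipeline concrete, the sketch below assembles its main stages: R-peak-based ECG segmentation, a frozen VGG16 backbone as the fingerprint feature extractor, feature-level (early) fusion by concatenation, and an MLP classifier head. This is a minimal sketch under stated assumptions, not the study's exact configuration: the sampling rate, window length, peak-detection thresholds, MLP architecture, and the synthetic stand-in data are all illustrative choices; ResNet50 or DenseNet121 would slot in as alternative backbones, and LSTM/GCN/GAT as alternative heads.

```python
# Sketch of the ECG + fingerprint early-fusion pipeline.
# All hyperparameters below (FS, WIN, MLP sizes) are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras import layers, models

FS = 500    # assumed ECG sampling rate (Hz)
WIN = 300   # assumed number of samples kept around the first R-peak

def ecg_segment(ecg):
    """Crop a fixed window centred on the first detected R-peak."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * FS),
                          height=np.mean(ecg) + 2 * np.std(ecg))
    r = peaks[0] if len(peaks) else len(ecg) // 2   # fallback: midpoint
    lo = max(0, r - WIN // 2)
    seg = ecg[lo:lo + WIN]
    return np.pad(seg, (0, WIN - len(seg)))          # zero-pad short segments

# Frozen VGG16 used purely as a fingerprint feature extractor
# (512-d global-average-pooled vectors); ResNet50/DenseNet121 are drop-ins.
vgg = VGG16(weights="imagenet", include_top=False,
            pooling="avg", input_shape=(224, 224, 3))
vgg.trainable = False

def fingerprint_features(gray_images):
    """gray_images: (n, 224, 224) uint8 fingerprints -> (n, 512) features."""
    rgb = np.repeat(gray_images[..., None], 3, axis=-1).astype("float32")
    return vgg.predict(preprocess_input(rgb), verbose=0)

def early_fuse(ecg_batch, fp_feats):
    """Feature-level (early) fusion: concatenate per-sample vectors."""
    ecg_feats = np.stack([ecg_segment(x) for x in ecg_batch])
    return np.concatenate([ecg_feats, fp_feats], axis=1)

def build_mlp(input_dim, n_subjects):
    """Small MLP head over the fused embedding (one of the four classifiers)."""
    return models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_subjects, activation="softmax"),
    ])

if __name__ == "__main__":
    # Synthetic stand-in data: 8 subjects, 4 samples each (not the real dataset).
    rng = np.random.default_rng(0)
    n, n_subjects = 32, 8
    ecg = rng.normal(size=(n, 5 * FS))
    fps = rng.integers(0, 255, size=(n, 224, 224), dtype=np.uint8)
    labels = np.arange(n) % n_subjects

    fused = early_fuse(ecg, fingerprint_features(fps))   # (n, WIN + 512)
    mlp = build_mlp(fused.shape[1], n_subjects)
    mlp.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    mlp.fit(fused, labels, epochs=3, batch_size=8, verbose=1)
```

Because fusion happens before classification, swapping the head only requires reshaping the fused vector to match the new model's expected input (e.g., adding a timestep axis for an LSTM), which is what makes the reported backbone/classifier combinations directly comparable.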