Recognizing isolated sign language gestures is challenging because of differences in signers' body proportions and missing pose landmarks, and many existing methods generalize poorly across signers. To address this, we propose reference-based normalization, which reduces inter-signer body-shape differences by normalizing body parts such as the full body, arms, face, and hands separately. We evaluated the method with LSTM and GRU models on two datasets: a custom American Sign Language (ASL) dataset recorded by a single amateur signer, and the public WLASL dataset with diverse signers. On the custom dataset, the highest accuracy (97.75%) was achieved by an LSTM with normalization applied only to the full body and hands, since the single signer's body proportions were already consistent. On the WLASL dataset, adding normalization for the arms and face improved accuracy by 3.10% for the LSTM and 0.77% for the GRU, and the GRU achieved the best WLASL result (74.03%) with fewer parameters than other advanced models. These findings show that reference-based normalization improves sign recognition performance and has potential for real-world use, especially for recognizing signs in continuous sequences.
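
To make the idea of per-part normalization concrete, the following is a minimal Python sketch, assuming MediaPipe-style 2D landmarks; the index groups, reference points, and scaling lengths are hypothetical choices for illustration and do not reproduce the paper's exact formulation. Each part is translated to a part-specific origin and scaled by a part-specific reference length, which is the general mechanism the abstract describes.

```python
# Illustrative sketch of per-part reference-based normalization.
# Landmark index groups, origins, and scale pairs below are assumptions,
# not the paper's exact configuration.
import numpy as np

# Hypothetical index layout for a flattened frame of 75 (x, y) landmarks:
# 0-32 body pose, 33-53 left hand, 54-74 right hand.
PARTS = {
    "body":  {"indices": range(0, 33),  "origin": (11, 12), "scale": (11, 12)},  # shoulder midpoint, shoulder width
    "lhand": {"indices": range(33, 54), "origin": (33, 33), "scale": (33, 42)},  # wrist, wrist-to-middle-MCP length
    "rhand": {"indices": range(54, 75), "origin": (54, 54), "scale": (54, 63)},
}

def normalize_frame(landmarks: np.ndarray, parts: dict = PARTS) -> np.ndarray:
    """Normalize each body part independently.

    Each part is translated so its reference origin (midpoint of two
    landmarks) sits at (0, 0), then divided by a reference length
    (distance between two landmarks), reducing the dependence of the
    features on the signer's body proportions.
    """
    out = landmarks.astype(float).copy()
    for part in parts.values():
        idx = np.array(list(part["indices"]))
        a, b = part["origin"]
        origin = (landmarks[a] + landmarks[b]) / 2.0
        s1, s2 = part["scale"]
        ref_len = np.linalg.norm(landmarks[s1] - landmarks[s2])
        if ref_len < 1e-6:  # degenerate or missing part: only center it
            ref_len = 1.0
        out[idx] = (landmarks[idx] - origin) / ref_len
    return out

if __name__ == "__main__":
    frame = np.random.rand(75, 2)        # one frame of 75 (x, y) landmarks
    print(normalize_frame(frame).shape)  # (75, 2), normalized part by part
```

In a full pipeline of this kind, such a function would be applied frame by frame before feeding the landmark sequences to the LSTM or GRU classifier.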