This study addresses the challenge of detecting hate speech in text data by comparing two experimental CNN-RNN models. The primary issue is balancing precision and recall in hate speech detection while preventing overfitting and ensuring good generalization. Two approaches were applied: the first model used standard training techniques, while the second incorporated L2 regularization and early stopping to enhance generalization. Both models used the Keras Tokenizer for text tokenization, stacked CNN and LSTM layers to extract features and capture temporal context, and applied dropout to mitigate overfitting. The findings reveal that the first model, despite exhibiting some overfitting, attained a higher overall accuracy of 78% and more balanced F1-scores for both the "Not Hate Speech" and "Hate Speech" categories. The second model achieved higher precision for hate speech (0.81) but lower recall (0.58), resulting in an overall accuracy of 75%. This suggests that regularization and early stopping require careful tuning to avoid reducing sensitivity to hate speech.
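
The following is a minimal sketch of the kind of pipeline described above, not the authors' exact code: a Keras Tokenizer feeding a CNN-LSTM stack with dropout, L2 regularization, and early stopping (i.e., the second model's configuration). All layer sizes, hyperparameters, and the placeholder corpus are illustrative assumptions, not values reported in the study.

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dropout, Dense
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import EarlyStopping

MAX_WORDS, MAX_LEN = 20000, 100  # assumed vocabulary size and sequence length

# Placeholder corpus and binary labels (0 = not hate speech, 1 = hate speech);
# these stand in for the actual dataset used in the study.
texts = [f"sample text number {i}" for i in range(10)]
labels = np.array([i % 2 for i in range(10)])

# Tokenize and pad the raw text into fixed-length integer sequences.
tokenizer = Tokenizer(num_words=MAX_WORDS, oov_token="<OOV>")
tokenizer.fit_on_texts(texts)
X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_LEN)

# CNN layer extracts local n-gram features, LSTM captures temporal context,
# and dropout plus L2 penalties guard against overfitting.
model = Sequential([
    Embedding(MAX_WORDS, 128),
    Conv1D(64, 5, activation="relu", kernel_regularizer=l2(1e-4)),
    MaxPooling1D(pool_size=2),
    LSTM(64),
    Dropout(0.5),
    Dense(1, activation="sigmoid", kernel_regularizer=l2(1e-4)),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving.
early_stop = EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
model.fit(X, labels, validation_split=0.2, epochs=20, batch_size=32,
          callbacks=[early_stop])
```

Removing the `kernel_regularizer` arguments and the `EarlyStopping` callback would yield the first model's setup, where the reported results suggest higher recall but more overfitting.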