Machine learning techniques are widely used across many fields, and models require data for training. However, the class distribution in most real-world datasets is not balanced and can be severely imbalanced. When the data are imbalanced, classifier performance is dominated by the majority class, which makes performance evaluation misleading. One technique for balancing the data is the Synthetic Minority Oversampling Technique (SMOTE). In this study, SMOTE is applied to credit scoring using the German Credit Data (GCD) dataset, which is then classified with four methods: random forest, K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Multilayer Perceptron (MLP). The effect of SMOTE on each classifier is measured using recall, precision, F1-Score, and AUC; accuracy is also reported to examine whether it is a suitable performance measure for imbalanced datasets. Measured by recall, precision, F1-Score, and AUC, applying SMOTE before classification improves performance for all four methods. The best results are: recall = 82.00% with random forest, precision = 75.35% with MLP, F1-Score = 76.93% with MLP, and AUC = 0.832 with random forest. After SMOTE, accuracy decreases slightly for random forest, KNN, and SVM, while it increases slightly for MLP. The contribution of this research is to show that handling imbalanced data is necessary to improve the performance of classifier algorithms, especially for credit-scoring datasets.
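
As an illustration, the pipeline described above can be sketched as follows, assuming the scikit-learn and imbalanced-learn libraries. Loading GCD via fetch_openml("credit-g") and one-hot encoding the categorical attributes before SMOTE are assumptions, since the abstract does not specify preprocessing (SMOTENC would be an alternative for mixed-type data), and only the random forest classifier is shown; the other three methods follow the same pattern.

# Minimal sketch of the SMOTE + classification pipeline (assumptions noted above).
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from imblearn.over_sampling import SMOTE

# German Credit Data from OpenML: 700 "good" vs 300 "bad" credit cases (imbalanced).
X, y = fetch_openml("credit-g", version=1, return_X_y=True, as_frame=True)
y = (y == "bad").astype(int)  # treat the minority class "bad" as the positive label

# One-hot encode categorical attributes and scale numeric ones (an assumed encoding).
cat_cols = X.select_dtypes(include="category").columns
num_cols = X.columns.difference(cat_cols)
preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
    ("num", StandardScaler(), num_cols),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
X_train_t = preprocess.fit_transform(X_train)
X_test_t = preprocess.transform(X_test)

# Oversample only the training split so no synthetic points leak into the test set.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train_t, y_train)

clf = RandomForestClassifier(random_state=42).fit(X_res, y_res)
proba = clf.predict_proba(X_test_t)[:, 1]
print(classification_report(y_test, clf.predict(X_test_t), digits=4))  # precision, recall, F1
print("AUC:", roc_auc_score(y_test, proba))

Note that SMOTE is fitted on the training split only; resampling before the train/test split would inflate the reported recall, precision, F1-Score, and AUC.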