This study examines a hepatitis patient dataset using eleven machine learning (ML) models: LR, SVM, KNN, DT, RF, XGBoost, LightGBM, GBDT, CatBoost, AdaBoost, and Stacking. The dataset is subjected to several analyses, including correlation analysis, age distribution exploration, class imbalance resolution, and feature importance evaluation using eight methods: Chi-square, DT, RF, XGBoost, LightGBM, GBDT, CatBoost, and AdaBoost. The results indicate that applying the SMOTE method and feature importance analysis improves the performance of the ML models. Among the eleven models, LR achieved the highest accuracy, reaching 93.75% before applying SMOTE and increasing to 100% after its implementation. Furthermore, SMOTE successfully addressed the class imbalance in the dataset, as evidenced by the improved accuracy of the RF model after its application. Overall, this study demonstrates that the SMOTE method and feature importance analysis, particularly with the Chi-square method, play a crucial role in improving ML model performance. SMOTE addresses class imbalance, while feature importance analysis assists in selecting relevant features. By combining both approaches, the ML models achieve higher accuracy in classifying samples from the minority class.
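The pipeline summarized above (minority-class oversampling, Chi-square feature selection, then classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic toy data stands in for the hepatitis dataset, the naive SMOTE routine is a simplified re-implementation of the standard algorithm, and the `k=8` feature count and neighbor count `k=5` are assumed values.

```python
# Hedged sketch of the abstract's pipeline: SMOTE-style oversampling of the
# minority class, Chi-square feature selection, and logistic regression.
# Synthetic data is used in place of the real hepatitis dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

def smote_oversample(X, y, minority_label, k=5, seed=0):
    """Naive SMOTE: synthesize minority samples by interpolating between
    each minority point and one of its k nearest minority neighbors
    until the two classes are balanced."""
    rng = np.random.default_rng(seed)
    X_min = X[y == minority_label]
    n_new = np.sum(y != minority_label) - len(X_min)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i][rng.integers(1, k + 1)]  # skip self at position 0
        lam = rng.random()                  # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    X_out = np.vstack([X, synth])
    y_out = np.concatenate([y, np.full(n_new, minority_label)])
    return X_out, y_out

# Imbalanced toy data standing in for the hepatitis dataset.
X, y = make_classification(n_samples=400, n_features=12, weights=[0.85],
                           random_state=0)
X = np.abs(X)  # chi2 scoring requires non-negative feature values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split, then select features and fit LR.
X_bal, y_bal = smote_oversample(X_tr, y_tr, minority_label=1)
selector = SelectKBest(chi2, k=8).fit(X_bal, y_bal)
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_bal), y_bal)
acc = clf.score(selector.transform(X_te), y_te)
print(f"test accuracy: {acc:.3f}")
```

Note that SMOTE is applied only to the training split; oversampling before the train/test split would leak synthetic copies of test-adjacent points into training and inflate the reported accuracy.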