As the volume of scientific publications grows, automated approaches to evaluating and analyzing abstracts become increasingly important. This research not only aims to predict the abstract ratings of scientific publications using machine learning algorithms, but also offers a distinctive approach that integrates regression and classification analysis to evaluate the relevance of abstracts more comprehensively. Four main models, namely the XGBoost Regressor, Random Forest Regressor, Support Vector Regressor (SVR), and K-Nearest Neighbors (KNN) Regressor, are evaluated for this task. The dataset is processed through preprocessing stages that include duplicate removal, text representation using TF-IDF, handling of data imbalance with the Synthetic Minority Oversampling Technique (SMOTE), and dimensionality reduction using Truncated Singular Value Decomposition (SVD). The results show that SVR delivers the best performance, with the lowest Mean Absolute Error (MAE) of 0.4980, a Mean Squared Error (MSE) of 0.5237, and the highest R² of 0.7321. XGBoost and Random Forest show competitive performance, with advantages in computational efficiency and prediction stability, respectively, while KNN yields results that vary with the data distribution. Dimensionality reduction using Truncated SVD preserves more than 70% of the initial variance, enabling higher computational efficiency without losing important information. This research makes a significant contribution to supporting machine learning-based decision making, especially in the analysis of scientific publication abstracts. The approach can be developed further through exploration of ensemble or hybrid models, as well as testing on larger datasets to improve generalization and accuracy.
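A minimal sketch of the preprocessing and modeling pipeline described above is given below, assuming a pandas / scikit-learn / imbalanced-learn environment. The file name "abstracts.csv", the column names "abstract" and "rating", and all hyperparameters are illustrative assumptions rather than details taken from this work; SMOTE is applied here by treating the (assumed discrete) ratings as classes.

# Hypothetical sketch: duplicate removal -> TF-IDF -> SMOTE -> Truncated SVD -> SVR.
# File name, column names, and hyperparameters are assumptions, not the paper's settings.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from imblearn.over_sampling import SMOTE

df = pd.read_csv("abstracts.csv").drop_duplicates(subset="abstract")  # remove duplicate abstracts

X_text = TfidfVectorizer(max_features=5000).fit_transform(df["abstract"])  # TF-IDF representation
y = df["rating"].values

X_train, X_test, y_train, y_test = train_test_split(X_text, y, test_size=0.2, random_state=42)

# SMOTE balances the training set, treating discrete rating values as classes (assumption).
X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train.astype(int))

# Truncated SVD reduces the sparse TF-IDF matrix to a dense, lower-dimensional representation.
svd = TruncatedSVD(n_components=300, random_state=42).fit(X_train_bal)
X_train_red, X_test_red = svd.transform(X_train_bal), svd.transform(X_test)

model = SVR(kernel="rbf").fit(X_train_red, y_train_bal)  # best-performing model in the study
pred = model.predict(X_test_red)
print("MAE:", mean_absolute_error(y_test, pred))
print("MSE:", mean_squared_error(y_test, pred))
print("R2 :", r2_score(y_test, pred))

In practice, the number of retained SVD components would be chosen so that svd.explained_variance_ratio_.sum() exceeds the roughly 70% variance threshold reported above.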