Selecting the optimal model remains an essential challenge amid the growing popularity of machine learning applications. Beyond the data itself, the performance of classification models also depends on choosing a suitable algorithm with optimal hyperparameter settings. This study conducted a hyperparameter optimization process and compared the resulting accuracy of various classification models applied to observational datasets. The data are drawn from the Sloan Digital Sky Survey Data Release 18 (SDSS-DR18) and the Sloan Extension for Galactic Understanding and Exploration (SEGUE-IV). SDSS-DR18 and SEGUE-IV provide observational data on celestial objects, such as stellar spectra with the corresponding positions and magnitudes of galaxies or stars. The SDSS-DR18 dataset contains magnitude and redshift data of celestial objects with target classes of stars, Quasi-Stellar Objects (QSOs), and galaxies. The SEGUE-IV dataset contains equivalent-width parameters, line indices, and other features related to the radial velocity of the corresponding stellar spectrum. Several machine learning models were evaluated: k-Nearest Neighbors (KNN), Gaussian Naive Bayes, eXtreme Gradient Boosting (XGBoost), Random Forest, Support Vector Machine (SVM), and Multi-Layer Perceptron (MLP). Bayesian, grid, and random search approaches were applied to find the hyperparameters that maximize the performance of each classification model. The results show that several classification models achieve improved accuracy scores under Bayesian-based hyperparameter optimization. After hyperparameter optimization, the XGBoost model yields the highest classification performance of all models on both datasets, with average accuracies of 99.10% and 95.11%, respectively.
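
The following is a minimal sketch of the kind of tuning pipeline the abstract describes, using XGBoost with scikit-learn's random search as one of the three search strategies; the file name, feature columns, and parameter ranges are illustrative assumptions, not the authors' actual configuration, and an analogous Bayesian search could be run with, e.g., skopt.BayesSearchCV or Optuna in place of RandomizedSearchCV.

```python
# Illustrative sketch (not the authors' code): tuning an XGBoost classifier
# with random search, one of the three search strategies compared in the study.
# Column names, file name, and parameter ranges are assumptions for demonstration.
import pandas as pd
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

# Hypothetical SDSS-DR18-style table: photometric magnitudes, redshift,
# and an object class label (STAR / QSO / GALAXY).
df = pd.read_csv("sdss_dr18_sample.csv")            # placeholder file name
X = df[["u", "g", "r", "i", "z", "redshift"]]       # assumed feature columns
y = LabelEncoder().fit_transform(df["class"])       # assumed target column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Search space: ranges chosen for illustration, not taken from the paper.
param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth": [3, 5, 7, 9],
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "subsample": [0.6, 0.8, 1.0],
}

# Cross-validated random search over the hyperparameter space,
# scored on classification accuracy.
search = RandomizedSearchCV(
    XGBClassifier(eval_metric="mlogloss"),
    param_distributions=param_distributions,
    n_iter=20,
    scoring="accuracy",
    cv=5,
    random_state=42,
)
search.fit(X_train, y_train)

print("Best hyperparameters:", search.best_params_)
print("Held-out test accuracy:", search.best_estimator_.score(X_test, y_test))
```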