The development of selective kinase inhibitors remains a key objective in cancer drug discovery, where predictive computational models can significantly accelerate lead identification. In this study, we investigate fine-tuning strategies for the transformer-based ChemBERTa model in quantitative structure–activity relationship (QSAR) modeling of inhibitors of the AXL receptor tyrosine kinase, an important therapeutic target implicated in tumor progression and metastasis. A dataset of AXL inhibitors was curated from the ChEMBL database. Three fine-tuning configurations (baseline, full fine-tune, and aggressive) were implemented to examine the influence of learning rate, weight decay, and the number of frozen transformer layers on model performance. Models were evaluated using accuracy, precision, recall, F1-score, and calibration metrics. Both the full fine-tune and aggressive configurations outperformed the baseline model, achieving higher precision and F1-scores while maintaining robust recall. The aggressive configuration delivered the most balanced performance, with improved calibration and the lowest expected calibration error (ECE), indicating reliable probabilistic predictions. Overall, this study demonstrates that controlled fine-tuning of ChemBERTa substantially enhances predictive performance and confidence estimation in QSAR modeling, offering practical guidance for optimizing transformer-based chemical language models in kinase-targeted drug discovery.
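As a minimal illustration of how such configurations might be set up, the sketch below freezes a chosen number of transformer layers and builds an AdamW optimizer with per-configuration learning rate and weight decay, using the HuggingFace transformers API and the publicly available ChemBERTa checkpoint seyonec/ChemBERTa-zinc-base-v1. The configuration names mirror those in the study, but the checkpoint choice and all hyperparameter values are assumptions for illustration, not the paper's actual settings.

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification

# Assumed ChemBERTa checkpoint (RoBERTa architecture, so the backbone
# is exposed as model.roberta); not necessarily the one used in the study.
MODEL_NAME = "seyonec/ChemBERTa-zinc-base-v1"

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def configure(model, n_frozen_layers: int, lr: float, weight_decay: float):
    """Freeze the embedding block and the first n_frozen_layers transformer
    layers, then return an AdamW optimizer over the remaining weights."""
    for param in model.roberta.embeddings.parameters():
        param.requires_grad = False
    for layer in model.roberta.encoder.layer[:n_frozen_layers]:
        for param in layer.parameters():
            param.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    return AdamW(trainable, lr=lr, weight_decay=weight_decay)

# Placeholder settings for the three configurations described above
# (illustrative values only, not the paper's hyperparameters):
configs = {
    "baseline":       dict(n_frozen_layers=6, lr=1e-5, weight_decay=0.0),
    "full_fine_tune": dict(n_frozen_layers=0, lr=2e-5, weight_decay=0.01),
    "aggressive":     dict(n_frozen_layers=0, lr=5e-5, weight_decay=0.1),
}
optimizer = configure(model, **configs["aggressive"])
```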
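The expected calibration error reported above can be computed with a standard binned estimator: predictions are grouped into confidence bins, and ECE is the weighted mean absolute gap between accuracy and average confidence across bins. The function below is a generic sketch of that estimator for a binary classifier, not the paper's evaluation code.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins: int = 10) -> float:
    """Binned ECE: weighted mean |accuracy - confidence| over confidence bins.

    probs  - predicted probability of the positive class for each sample
    labels - true binary labels (0 or 1)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Confidence is the probability assigned to the predicted class.
    confidences = np.where(probs >= 0.5, probs, 1.0 - probs)
    predictions = (probs >= 0.5).astype(int)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = (predictions[mask] == labels[mask]).mean()
            conf = confidences[mask].mean()
            ece += mask.mean() * abs(acc - conf)
    return float(ece)
```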