Conjugate Gradient (CG) methods are widely used for solving unconstrained optimization problems because of their efficiency and low memory requirements. However, standard CG methods do not always guarantee the sufficient descent condition, which can impair their robustness and convergence behavior. Additionally, their effectiveness in training artificial neural networks (ANNs) remains an area of interest. In response, this paper presents a three-term CG method for unconstrained optimization problems. The new parameter is formulated so that the search direction satisfies the sufficient descent condition. The global convergence of the new algorithm is established under suitable assumptions. To evaluate the performance of the new method, we considered a set of standard unconstrained optimization test problems and applied the proposed method to train different ANNs on benchmark data sets from the NN toolbox. The experimental results are encouraging both on the unconstrained minimization test problems and in training neural networks.
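The abstract does not reproduce the paper's new CG parameter, so as an illustration only, the sketch below implements a generic three-term CG iteration of the well-known Zhang–Zhou–Li PRP type, in which the direction d_{k+1} = -g_{k+1} + β_k d_k - θ_k y_k satisfies the sufficient descent condition g_{k+1}ᵀd_{k+1} = -‖g_{k+1}‖² by construction. The function names, the Armijo backtracking parameters, and the quadratic test problem are all illustrative assumptions, not the method proposed in the paper.

```python
import numpy as np

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic three-term CG (ZZL PRP-type) with Armijo backtracking.

    Illustrative sketch only; the paper's specific parameter is not shown
    in the abstract and is not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (c, rho are standard choices)
        alpha, rho, c = 1.0, 0.5, 1e-4
        fx, gd = f(x), g @ d  # gd < 0 since d is a descent direction
        while f(x + alpha * d) > fx + c * alpha * gd:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = g @ g
        beta = (g_new @ y) / denom    # PRP-type beta
        theta = (g_new @ d) / denom
        # Three-term direction: the beta and theta terms cancel in
        # g_new @ d, giving g_new @ d = -||g_new||^2 exactly.
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x

# Small convex quadratic test: f(x) = 0.5 x'Ax - b'x, minimizer A^{-1} b.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = three_term_cg(f, grad, np.array([5.0, -3.0]))
```

On this quadratic the iterates converge to A⁻¹b = (1, 0.1); the descent property holds at every iteration regardless of the line search, which is the structural feature the abstract attributes to the proposed method as well.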
Copyright © 2025