A conjugate gradient algorithm for the non-convex minimization problem and its convergence properties

D. Akdag, E. Altiparmak, I. Karahan*, L. O. Jolaoso

*Corresponding author for this work

Research output: Contribution to journal › Article › Peer-review

Abstract

This study introduces a new and efficient modification of the conjugate gradient algorithm for solving non-convex unconstrained optimization problems. The proposed method ensures the sufficient descent property regardless of the line search technique and is proven to be globally convergent under both the Wolfe and Armijo conditions. Its numerical performance is assessed on a set of large-scale benchmark problems, and the results indicate that the proposed algorithm is competitive in efficiency and reliability with existing conjugate gradient variants. To further demonstrate its applicability, the algorithm is tested on two scenarios: an image restoration problem, and the motion control of a 2-DOF planar robotic manipulator, where inverse kinematics is solved iteratively for trajectory tracking. The algorithm achieves high tracking precision and stable convergence, highlighting its theoretical soundness and its potential for a range of optimization applications.
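The paper's specific conjugate gradient update is not reproduced on this page, so as an illustration only, the sketch below shows the general shape of such a method: a nonlinear conjugate gradient iteration with a standard PRP+ conjugacy parameter, a steepest-descent safeguard that keeps the search direction a descent direction, and an Armijo backtracking line search, applied to the Rosenbrock function as a stand-in non-convex test problem. All function names and parameter values here are assumptions for the sketch, not the authors' algorithm.

```python
import numpy as np

def rosenbrock(x):
    # Classic non-convex benchmark: f(x, y) = 100*(y - x^2)^2 + (1 - x)^2
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def cg_armijo(f, grad, x0, tol=1e-6, max_iter=5000):
    """Generic nonlinear CG (PRP+ update) with Armijo backtracking.

    Illustrative only: the published method uses its own modified
    update with a guaranteed sufficient descent property.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # start with the steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Safeguard: restart with steepest descent if d is not a descent direction
        if g @ d >= 0.0:
            d = -g
        # Armijo backtracking: shrink alpha until sufficient decrease holds
        alpha, rho, c1 = 1.0, 0.5, 1e-4
        fx, gd = f(x), g @ d
        while f(x + alpha * d) > fx + c1 * alpha * gd:
            alpha *= rho
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP+ conjugacy parameter, truncated at zero (automatic restart)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

x_star = cg_armijo(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
```

Because the Armijo condition enforces monotone decrease, the iterates cannot increase the objective; the PRP+ truncation at zero acts as an automatic restart, a common way such methods secure global convergence on non-convex problems.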

Original language: English
Journal: Engineering Optimization
DOIs
Publication status: Published - 2025

Keywords

  • Large-scale unconstrained optimization
  • conjugate gradient algorithm
  • global convergence
  • image restoration
  • performance profile

