A New Method for Unconstrained Optimization Problems


Zhiguang Zhang

Abstract

This paper presents a new memory gradient method for unconstrained optimization problems. The method uses information from the current iterate and several previous iterates to generate each new iterate, and it introduces additional free parameters, which makes it well suited to large-scale unconstrained optimization problems. Global convergence is proved under mild conditions. Numerical experiments show that the algorithm is efficient in many situations.
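To illustrate the general idea of a memory gradient iteration, the following is a minimal sketch in Python. It combines the current negative gradient with the last few stored search directions and uses an Armijo backtracking line search; the fixed weights, the memory length m, and the line-search constants are illustrative assumptions, not the specific parameter rule analyzed in this paper.

```python
import numpy as np

def memory_gradient(f, grad, x0, m=3, beta=0.4, c1=1e-4, rho=0.5,
                    tol=1e-6, max_iter=1000):
    """Generic m-step memory gradient method (illustrative sketch).

    Search direction: d_k = -g_k + sum_i beta**i * d_{k-i} over the
    last m stored directions. The weights used here are a simple
    fixed choice, not the parameter rule proposed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    history = []                       # previously used search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Combine the steepest-descent direction with stored directions.
        d = -g
        for i, d_old in enumerate(reversed(history), start=1):
            d = d + (beta ** i) * d_old
        # Fall back to -g if the combination is not a descent direction.
        if g @ d >= 0:
            d = -g
        # Armijo backtracking line search.
        t = 1.0
        while f(x + t * d) > f(x) + c1 * t * (g @ d) and t > 1e-12:
            t *= rho
        x = x + t * d
        history.append(d)
        if len(history) > m:
            history.pop(0)             # keep only the last m directions
    return x

# Example usage: minimize the Rosenbrock function.
if __name__ == "__main__":
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])
    print(memory_gradient(f, grad, np.array([-1.2, 1.0])))
```

Because each iteration only stores a small number of past direction vectors, the memory cost grows linearly with the problem dimension, which is why such methods are attractive for large-scale problems.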



This work is licensed under a Creative Commons Attribution 4.0 License.