A New Method for Unconstrained Optimization Problems

Zhiguang Zhang

Abstract


This paper presents a new memory gradient method for unconstrained optimization problems. The method uses information from the current iteration and several previous iterations to generate the next iterate, and it introduces additional free parameters, which makes it suitable for solving large-scale unconstrained optimization problems. Global convergence is proved under mild conditions, and numerical experiments show that the algorithm is efficient in many situations.
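To make the general idea concrete, the following is a minimal, hypothetical Python sketch of a memory gradient iteration: the search direction combines the current negative gradient with a few stored previous directions, and the step size is chosen by a backtracking (Armijo) line search. The weighting scheme, memory length, and line search used here are illustrative assumptions only and do not reproduce the paper's specific update rule, parameter choices, or convergence conditions.

```python
# Illustrative sketch of a generic memory gradient iteration (assumptions,
# not the paper's exact method): the direction mixes the steepest-descent
# direction with the last few search directions kept in memory.
import numpy as np

def memory_gradient(f, grad, x0, memory=3, beta=0.3, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    past_dirs = []                                  # previously used directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:                 # gradient small enough: stop
            break
        if past_dirs:
            # Negative gradient plus a weighted average of stored directions.
            d = -g + (beta / len(past_dirs)) * sum(past_dirs)
        else:
            d = -g
        if g @ d >= 0:                              # safeguard: ensure descent
            d = -g
        # Backtracking (Armijo) line search for the step size.
        alpha, c = 1.0, 1e-4
        for _ in range(50):
            if f(x + alpha * d) <= f(x) + c * alpha * (g @ d):
                break
            alpha *= 0.5
        x = x + alpha * d
        past_dirs = (past_dirs + [d])[-memory:]     # keep only the last `memory`
    return x

# Usage example: minimize an ill-conditioned quadratic.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0, 100.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(memory_gradient(f, grad, np.ones(3)))
```

Because each iteration needs only the gradient and a short history of directions (no matrix storage or factorization), this style of method keeps memory and per-iteration cost low, which is the property that makes it attractive for large-scale problems.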



This work is licensed under a Creative Commons Attribution 3.0 License.

Modern Applied Science   ISSN 1913-1844 (Print)   ISSN 1913-1852 (Online)

Copyright © Canadian Center of Science and Education
