Improving the Conjugate Gradient Method for Training Feedforward Neural Networks
Abstract
In this paper, several modified and new algorithms are proposed for training feedforward neural networks, many of which exhibit a very fast convergence rate on networks of reasonable size.
All of these algorithms use the gradient of the performance function (also called the energy or error function) to determine how to adjust the weights so that the performance function is minimized; the backpropagation algorithm is used to compute this gradient efficiently and thereby speed up training. The algorithms differ in their computational cost, in the form of their search directions, and in their storage requirements, and all of them are applied to an approximation problem.
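As an illustrative sketch of the general scheme the abstract describes (gradients obtained by backpropagation, then used in a conjugate gradient weight update), the following minimal example trains a tiny one-hidden-layer network on a function-approximation task. The network size, the target function sin(x), the Fletcher-Reeves coefficient, and the backtracking line search are all assumptions chosen for illustration; the paper's own modified algorithms and test problems are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative approximation problem: fit sin(x) on [0, pi]
# with a 1-4-1 feedforward network (assumed setup, not the paper's).
X = np.linspace(0.0, np.pi, 20).reshape(-1, 1)
Y = np.sin(X)

def unpack(w):
    """Split the flat 13-element parameter vector into W1, b1, W2, b2."""
    return (w[:4].reshape(1, 4), w[4:8], w[8:12].reshape(4, 1), w[12:13])

def loss(w):
    """Sum-of-squares performance (error) function."""
    W1, b1, W2, b2 = unpack(w)
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return 0.5 * np.sum((out - Y) ** 2)

def grad(w):
    """Gradient of the performance function via backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)
    e = (H @ W2 + b2) - Y            # output-layer error
    gW2 = H.T @ e
    gb2 = e.sum(axis=0)
    dH = (e @ W2.T) * (1.0 - H**2)   # backpropagate through tanh
    gW1 = X.T @ dH
    gb1 = dH.sum(axis=0)
    return np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

def line_search(w, d, g, alpha=1.0, c=1e-4):
    """Backtracking line search satisfying the Armijo condition."""
    f0, slope = loss(w), g @ d
    while loss(w + alpha * d) > f0 + c * alpha * slope and alpha > 1e-12:
        alpha *= 0.5
    return alpha

def train_cg(iters=200):
    """Nonlinear conjugate gradient (Fletcher-Reeves) training loop."""
    w = rng.standard_normal(13) * 0.5
    g = grad(w)
    d = -g                           # first direction: steepest descent
    for _ in range(iters):
        w = w + line_search(w, d, g) * d
        g_new = grad(w)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0:           # restart if not a descent direction
            d = -g_new
        g = g_new
    return w

w = train_cg()
print("final error:", loss(w))
```

Other choices of the conjugacy coefficient (e.g. Polak-Ribiere or Hestenes-Stiefel) change only the `beta` line, which is exactly where the search-direction variants the abstract mentions differ.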