The numerical solution of neural network training problems

Abstract: "The training problem for feedforward neural networks is nonlinear parameter estimation that can be solved by a variety of optimization techniques. Much of the literature on neural networks has focused on backpropagation which is a variant of gradient descent. The training of neural n...

Bibliographic Details
Main authors: Saarinen, S. (author), Bramley, R. (author), Cybenko, George (author)
Format: Book
Language: English
Published: Urbana, Ill. 1991
Series: Center for Supercomputing Research and Development <Urbana, Ill.>: CSRD report 1089
Subjects:
Abstract: "The training problem for feedforward neural networks is nonlinear parameter estimation that can be solved by a variety of optimization techniques. Much of the literature on neural networks has focused on backpropagation, which is a variant of gradient descent. The training of neural networks using backpropagation is known to be a slow process, with more sophisticated techniques not always performing significantly better. In this paper, we show that feedforward neural networks can have ill-conditioned Hessians and that this ill-conditioning can be quite common. The analysis and experimental results in this paper lead to the conclusion that many network training problems are ill-conditioned and may not be solved more efficiently by higher-order optimization methods. While our analyses are for completely connected networks, they extend to networks with sparse connectivity as well. Our results suggest that neural networks can have considerable redundancy in parameterizing the function space in a neighborhood of a local minimum, independently of whether or not the solution has a small residual."
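To make the abstract's central claim concrete, here is a minimal sketch of how one might estimate the Hessian of the training loss for a small fully connected network and inspect its spectrum. The network size, synthetic data, and finite-difference step are illustrative assumptions, not the paper's experimental setup:

    import numpy as np

    # Toy setup (assumed, not from the paper): 20 samples, 2 inputs, 1 target.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

    def loss(theta):
        # Mean squared error of a 2-3-1 tanh network; theta packs
        # W1 (2x3), b1 (3,), w2 (3,), b2 (scalar) into 13 parameters.
        W1 = theta[:6].reshape(2, 3)
        b1 = theta[6:9]
        w2 = theta[9:12]
        b2 = theta[12]
        pred = np.tanh(X @ W1 + b1) @ w2 + b2
        return 0.5 * np.mean((pred - y) ** 2)

    def hessian_fd(f, theta, eps=1e-4):
        # Central finite-difference Hessian; adequate for a 13-parameter toy.
        n = theta.size
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                ei = np.zeros(n); ei[i] = eps
                ej = np.zeros(n); ej[j] = eps
                H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                           - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * eps ** 2)
                H[j, i] = H[i, j]
        return H

    theta = rng.normal(scale=0.5, size=13)
    eigs = np.linalg.eigvalsh(hessian_fd(loss, theta))
    # Near-zero eigenvalues signal the parameter redundancy described in the
    # abstract; a large ratio of extreme magnitudes is the ill-conditioning.
    print("eigenvalues:", eigs)
    print("condition estimate:", np.abs(eigs).max() / np.abs(eigs).min())

Measuring conditioning at a trained minimum rather than at a random point would require minimizing the loss first; the sketch only shows the measurement itself.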

