How tight are the Vapnik-Chervonenkis bounds?:

Abstract: "We describe a series of careful numerical experiments which measure the average generalization capability of neural networks trained on a variety of simple funtions. These experiments are designed to test the relationship between average generalization performance and the worst-case...

Bibliographic Details
Main authors: Cohn, David 1954- (author), Tesauro, Gerald (author)
Format: Book
Language: English
Published: Seattle, Wash. 1991
Series: University of Washington <Seattle, Wash.> / Department of Computer Science: Technical report 91,3,4
Summary: Abstract: "We describe a series of careful numerical experiments which measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test the relationship between average generalization performance and the worst-case bounds obtained from formal theory using the Vapnik-Chervonenkis dimension (Blumer et al., 1989). Recent statistical learning theories (Tishby et al., 1989; Schwartz et al., 1990) suggest that surpassing these bounds might be possible if the spectrum of possible generalizations has a 'gap' near perfect performance. We indeed find that, in some cases, the average generalization is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound. However, in these cases, we have not found evidence of the gap predicted by the above statistical theories. In other cases, we do find the 1/m behavior of the VC bound, and in these cases, the numerical prefactor is closely related to the prefactor contained in the bound."
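
As a rough sketch of the contrast described in the abstract (the notation below is assumed for illustration and is not taken from the report): for m training examples and VC dimension d, the worst-case bound of Blumer et al. (1989) gives a generalization error that decays essentially as

    \epsilon_{\mathrm{VC}}(m) \sim \frac{c\, d}{m}

up to logarithmic factors, whereas the exponential approach to perfect performance observed in some of the experiments corresponds to

    \epsilon(m) \sim A\, e^{-m/m_0}

for some constants A and m_0; the latter reaches any fixed error level with far fewer examples than the 1/m form.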
Physical description: 12 leaves
