How tight are the Vapnik-Chervonenkis bounds?
Abstract: "We describe a series of careful numerical experiments which measure the average generalization capability of neural networks trained on a variety of simple funtions. These experiments are designed to test the relationship between average generalization performance and the worst-case...
Saved in:

| Field | Value |
|---|---|
| Main authors: | Cohn, David, 1954- (Author); Tesauro, Gerald (Author) |
| Format: | Book |
| Language: | English |
| Published: | Seattle, Wash., 1991 |
| Series: | University of Washington <Seattle, Wash.> / Department of Computer Science: Technical report; 91,3,4 |
| Subjects: | Neural networks (Computer science) |
| Summary: | Abstract: "We describe a series of careful numerical experiments which measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test the relationship between average generalization performance and the worst-case bounds obtained from formal theory using the Vapnik-Chervonenkis dimension (Blumer et al., 1989). Recent statistical learning theories (Tishby et al., 1989; Schwartz et al., 1990) suggest that surpassing these bounds might be possible if the spectrum of possible generalizations has a 'gap' near perfect performance. We indeed find that, in some cases, the average generalization is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound. However, in these cases, we have not found evidence of the gap predicted by the above statistical theories. In other cases, we do find the 1/m behavior of the VC bound, and in these cases, the numerical prefactor is closely related to the prefactor contained in the bound." |
| Description: | 12 leaves |
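The abstract contrasts two convergence rates toward perfect generalization. As a point of reference, here is a hedged sketch of the worst-case bound it cites (Blumer et al., 1989), stated only in asymptotic form since the exact prefactors and logarithm arguments vary between statements of the result:

```latex
% Sketch of the worst-case generalization bound via the VC dimension
% (asymptotic form only; precise constants differ across statements).
% d: VC dimension of the hypothesis class, m: number of training examples,
% \epsilon(m): generalization error of a learner consistent with the m examples.
\[
  \epsilon(m) \;=\; O\!\left(\frac{d}{m}\,\ln\frac{m}{d}\right)
  \qquad \text{(worst case, } m \gg d\text{)}
\]
% The abstract's "exponential in the number of examples m" corresponds to
% \(\epsilon(m) \sim e^{-c\,m}\) for some constant \(c > 0\), an
% exponentially faster approach to perfect performance than the 1/m rate.
```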
Internal format

MARC
| Tag | Ind1 | Ind2 | Content |
|---|---|---|---|
| LEADER | | | 00000nam a2200000 cb4500 |
| 001 | | | BV008992810 |
| 003 | | | DE-604 |
| 005 | | | 00000000000000.0 |
| 007 | | | t |
| 008 | | | 940206s1991 \|\|\|\| 00\|\|\| eng d |
| 035 | | | \|a (OCoLC)28390647 |
| 035 | | | \|a (DE-599)BVBBV008992810 |
| 040 | | | \|a DE-604 \|b ger \|e rakddb |
| 041 | 0 | | \|a eng |
| 049 | | | \|a DE-29T |
| 100 | 1 | | \|a Cohn, David \|d 1954- \|e Verfasser \|0 (DE-588)121775275 \|4 aut |
| 245 | 1 | 0 | \|a How tight are the Vapnik-Chervonenkis bounds? \|c David Cohn ; Gerald Tesauro |
| 264 | | 1 | \|a Seattle, Wash. \|c 1991 |
| 300 | | | \|a 12 Bl. |
| 336 | | | \|b txt \|2 rdacontent |
| 337 | | | \|b n \|2 rdamedia |
| 338 | | | \|b nc \|2 rdacarrier |
| 490 | 1 | | \|a University of Washington <Seattle, Wash.> / Department of Computer Science: Technical report \|v 91,3,4 |
| 520 | 3 | | \|a Abstract: "We describe a series of careful numerical experiments which measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test the relationship between average generalization performance and the worst-case bounds obtained from formal theory using the Vapnik-Chervonenkis dimension (Blumer et al., 1989). Recent statistical learning theories (Tishby et al., 1989; Schwartz et al., 1990) suggest that surpassing these bounds might be possible if the spectrum of possible generalizations has a 'gap' near perfect performance. We indeed find that, in some cases, the average generalization is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound. |
| 520 | 3 | | \|a However, in these cases, we have not found evidence of the gap predicted by the above statistical theories. In other cases, we do find the 1/m behavior of the VC bound, and in these cases, the numerical prefactor is closely related to the prefactor contained in the bound." |
| 650 | | 4 | \|a Neural networks (Computer science) |
| 700 | 1 | | \|a Tesauro, Gerald \|e Verfasser \|4 aut |
| 810 | 2 | | \|a Department of Computer Science: Technical report \|t University of Washington <Seattle, Wash.> \|v 91,3,4 \|w (DE-604)BV008930431 \|9 91,3,4 |
| 999 | | | \|a oai:aleph.bib-bvb.de:BVB01-005941728 |
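The search index below also carries this record as a MARC21-slim XML serialization (the `fullrecord` field). A minimal sketch of extracting the title, authors, and abstract from such a serialization, using only the Python standard library; the filename `record.xml` is a placeholder for wherever the XML has been saved:

```python
import xml.etree.ElementTree as ET

# Namespace used by MARC21-slim documents, as in the fullrecord field below.
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """Collect subfield values for one MARC datafield tag / subfield code pair."""
    return [
        sf.text
        for df in record.findall(f"marc:datafield[@tag='{tag}']", NS)
        for sf in df.findall(f"marc:subfield[@code='{code}']", NS)
        if sf.text
    ]

tree = ET.parse("record.xml")                 # placeholder path for the saved XML
record = tree.getroot().find("marc:record", NS)

title = subfields(record, "245", "a")[0]      # main title
authors = subfields(record, "100", "a") + subfields(record, "700", "a")
abstract = " ".join(subfields(record, "520", "a"))  # the two 520 fields joined

print(title)    # How tight are the Vapnik-Chervonenkis bounds?
print(authors)  # ['Cohn, David', 'Tesauro, Gerald']
```

The same traversal generalizes to any tag/subfield pair in the table above, e.g. `subfields(record, "490", "v")` for the series volume.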
Record in the search index
| Field | Value |
|---|---|
| _version_ | 1804123335333773312 |
| any_adam_object | |
| author | Cohn, David 1954- Tesauro, Gerald |
| author_GND | (DE-588)121775275 |
| author_facet | Cohn, David 1954- Tesauro, Gerald |
| author_role | aut aut |
| author_sort | Cohn, David 1954- |
| author_variant | d c dc g t gt |
| building | Verbundindex |
| bvnumber | BV008992810 |
| ctrlnum | (OCoLC)28390647 (DE-599)BVBBV008992810 |
| format | Book |
| fullrecord | MARC21-slim XML serialization of the record (omitted here; it duplicates the MARC fields listed under "Internal format" above) |
| id | DE-604.BV008992810 |
| illustrated | Not Illustrated |
| indexdate | 2024-07-09T17:28:08Z |
| institution | BVB |
| language | English |
| oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-005941728 |
| oclc_num | 28390647 |
| open_access_boolean | |
| owner | DE-29T |
| owner_facet | DE-29T |
| physical | 12 Bl. |
| publishDate | 1991 |
| publishDateSearch | 1991 |
| publishDateSort | 1991 |
| record_format | marc |
| series2 | University of Washington <Seattle, Wash.> / Department of Computer Science: Technical report |
| spelling | Plain-text concatenation of the record's display fields for indexing (omitted here; it duplicates the MARC content above) |
| spellingShingle | Cohn, David 1954- Tesauro, Gerald How tight are the Vapnik-Chervonenkis bounds? Neural networks (Computer science) |
| title | How tight are the Vapnik-Chervonenkis bounds? |
| title_auth | How tight are the Vapnik-Chervonenkis bounds? |
| title_exact_search | How tight are the Vapnik-Chervonenkis bounds? |
| title_full | How tight are the Vapnik-Chervonenkis bounds? David Cohn ; Gerald Tesauro |
| title_fullStr | How tight are the Vapnik-Chervonenkis bounds? David Cohn ; Gerald Tesauro |
| title_full_unstemmed | How tight are the Vapnik-Chervonenkis bounds? David Cohn ; Gerald Tesauro |
| title_short | How tight are the Vapnik-Chervonenkis bounds? |
| title_sort | how tight are the vapnik chervonenkis bounds |
| topic | Neural networks (Computer science) |
| topic_facet | Neural networks (Computer science) |
| volume_link | (DE-604)BV008930431 |
| work_keys_str_mv | AT cohndavid howtightarethevapnikchervonenkisbounds AT tesaurogerald howtightarethevapnikchervonenkisbounds |