Learning and Generalisation: With Applications to Neural Networks
Author: | Vidyasagar, M. |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | London : Springer London, 2003 |
Series: | Communications and Control Engineering |
Subjects: | Mathematische Lerntheorie |
Online access: | FHI01 BTU01 Full text |
Summary: | Learning and Generalization provides a formal mathematical theory for addressing intuitive questions such as: • How does a machine learn a new concept on the basis of examples? • How can a neural network, after sufficient training, correctly predict the outcome of a previously unseen input? • How much training is required to achieve a specified level of accuracy in the prediction? • How can one identify the dynamical behaviour of a nonlinear control system by observing its input-output behaviour over a finite interval of time? In its successful first edition, A Theory of Learning and Generalization was the first book to treat the problem of machine learning in conjunction with the theory of empirical processes, the latter being a well-established branch of probability theory. Treating the two topics side by side leads to new insights, as well as to new results, in both fields. This second edition extends and improves upon this material, covering new areas including: • Support vector machines. • Fat-shattering dimensions and applications to neural network learning. • Learning with dependent samples generated by a beta-mixing process. • Connections between system identification and learning theory. • Probabilistic solution of 'intractable problems' in robust control and matrix theory using randomized algorithms. Reflecting advances in the field, solutions to some of the open problems posed in the first edition are presented, while new open problems have been added. Learning and Generalization (second edition) is essential reading for control and system theorists, neural network researchers, theoretical computer scientists and probabilists. |
Description: | 1 online resource (XXI, 488 p) |
ISBN: | 9781447137481 |
DOI: | 10.1007/978-1-4471-3748-1 |
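The DOI above resolves to the publisher's full-text page. As an illustrative aside (not part of the catalogue record), the sketch below shows one way to pull a citation for this DOI via DOI content negotiation; it assumes the DOI's registration agency answers `Accept: application/x-bibtex` for this title and uses the third-party `requests` package.

```python
# Hypothetical sketch: fetch a BibTeX citation for the DOI listed above using
# DOI content negotiation. Assumes the registration agency serves
# application/x-bibtex; 'requests' is an external dependency.
import requests

DOI = "10.1007/978-1-4471-3748-1"

def fetch_bibtex(doi: str) -> str:
    """Ask https://doi.org for a BibTeX record instead of following the
    redirect to the publisher's landing page."""
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/x-bibtex"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(fetch_bibtex(DOI))
```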
Internal format
MARC
LEADER | 00000nmm a2200000zc 4500 | ||
---|---|---|---|
001 | BV045148777 | ||
003 | DE-604 | ||
005 | 00000000000000.0 | ||
007 | cr|uuu---uuuuu | ||
008 | 180827s2003 |||| o||u| ||||||eng d | ||
020 | |a 9781447137481 |9 978-1-4471-3748-1 | ||
024 | 7 | |a 10.1007/978-1-4471-3748-1 |2 doi | |
035 | |a (ZDB-2-ENG)978-1-4471-3748-1 | ||
035 | |a (OCoLC)1184488063 | ||
035 | |a (DE-599)BVBBV045148777 | ||
040 | |a DE-604 |b ger |e aacr | ||
041 | 0 | |a eng | |
049 | |a DE-573 |a DE-634 | ||
082 | 0 | |a 621.3 |2 23 | |
100 | 1 | |a Vidyasagar, M. |e Verfasser |4 aut | |
245 | 1 | 0 | |a Learning and Generalisation |b With Applications to Neural Networks |c by M. Vidyasagar |
264 | 1 | |a London |b Springer London |c 2003 | |
300 | |a 1 Online-Ressource (XXI, 488 p) | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
490 | 0 | |a Communications and Control Engineering | |
520 | |a Learning and Generalization provides a formal mathematical theory for addressing intuitive questions such as: • How does a machine learn a new concept on the basis of examples? • How can a neural network, after sufficient training, correctly predict the outcome of a previously unseen input? • How much training is required to achieve a specified level of accuracy in the prediction? • How can one identify the dynamical behaviour of a nonlinear control system by observing its input-output behaviour over a finite interval of time? In its successful first edition, A Theory of Learning and Generalization was the first book to treat the problem of machine learning in conjunction with the theory of empirical processes, the latter being a well-established branch of probability theory. Treating the two topics side by side leads to new insights, as well as to new results, in both fields. This second edition extends and improves upon this material, covering new areas including: • Support vector machines. • Fat-shattering dimensions and applications to neural network learning. • Learning with dependent samples generated by a beta-mixing process. • Connections between system identification and learning theory. • Probabilistic solution of 'intractable problems' in robust control and matrix theory using randomized algorithms. Reflecting advances in the field, solutions to some of the open problems posed in the first edition are presented, while new open problems have been added. Learning and Generalization (second edition) is essential reading for control and system theorists, neural network researchers, theoretical computer scientists and probabilists. | ||
650 | 4 | |a Engineering | |
650 | 4 | |a Electrical Engineering | |
650 | 4 | |a Control | |
650 | 4 | |a Systems Theory, Control | |
650 | 4 | |a Probability Theory and Stochastic Processes | |
650 | 4 | |a Group Theory and Generalizations | |
650 | 4 | |a Computer Communication Networks | |
650 | 4 | |a Engineering | |
650 | 4 | |a Computer communication systems | |
650 | 4 | |a Group theory | |
650 | 4 | |a System theory | |
650 | 4 | |a Probabilities | |
650 | 4 | |a Control engineering | |
650 | 4 | |a Electrical engineering | |
650 | 0 | 7 | |a Mathematische Lerntheorie |0 (DE-588)4169103-9 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Mathematische Lerntheorie |0 (DE-588)4169103-9 |D s |
689 | 0 | |8 1\p |5 DE-604 | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |z 9781849968676 |
856 | 4 | 0 | |u https://doi.org/10.1007/978-1-4471-3748-1 |x Verlag |z URL des Erstveröffentlichers |3 Volltext |
912 | |a ZDB-2-ENG | ||
940 | 1 | |q ZDB-2-ENG_2000/2004 | |
999 | |a oai:aleph.bib-bvb.de:BVB01-030538476 | ||
883 | 1 | |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk | |
966 | e | |u https://doi.org/10.1007/978-1-4471-3748-1 |l FHI01 |p ZDB-2-ENG |q ZDB-2-ENG_2000/2004 |x Verlag |3 Volltext | |
966 | e | |u https://doi.org/10.1007/978-1-4471-3748-1 |l BTU01 |p ZDB-2-ENG |q ZDB-2-ENG_Archiv |x Verlag |3 Volltext |
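As a purely illustrative sketch (not part of the record itself), the snippet below re-types a handful of the MARC fields shown above as plain Python data and reads back the values that feed the human-readable display, showing how tags such as 245 (title), 100 (author), 020 (ISBN) and 856 (full-text link) map to the summary table at the top of this page.

```python
# Hypothetical illustration: a few MARC fields from the record above, copied
# into a plain dict of (subfield code, value) pairs, and read back out.
from typing import Dict, List, Tuple

FIELDS: Dict[str, List[Tuple[str, str]]] = {
    "020": [("a", "9781447137481")],                                    # ISBN
    "100": [("a", "Vidyasagar, M."), ("4", "aut")],                     # author
    "245": [("a", "Learning and Generalisation"),
            ("b", "With Applications to Neural Networks")],             # title / subtitle
    "264": [("a", "London"), ("b", "Springer London"), ("c", "2003")],  # publication
    "856": [("u", "https://doi.org/10.1007/978-1-4471-3748-1")],        # full-text link
}

def subfield(tag: str, code: str) -> str:
    """Return the first value of the given subfield code within a field."""
    return next(value for c, value in FIELDS[tag] if c == code)

if __name__ == "__main__":
    print("Title: ", f'{subfield("245", "a")}: {subfield("245", "b")}')
    print("Author:", subfield("100", "a"))
    print("ISBN:  ", subfield("020", "a"))
    print("Link:  ", subfield("856", "u"))
```

In practice one would read the binary MARC or MARCXML export with a dedicated parser rather than re-typing fields, but the mapping from tag and subfield to display label is the same.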
Record in the search index
_version_ | 1804178819336110080 |
---|---|
any_adam_object | |
author | Vidyasagar, M. |
author_facet | Vidyasagar, M. |
author_role | aut |
author_sort | Vidyasagar, M. |
author_variant | m v mv |
building | Verbundindex |
bvnumber | BV045148777 |
collection | ZDB-2-ENG |
ctrlnum | (ZDB-2-ENG)978-1-4471-3748-1 (OCoLC)1184488063 (DE-599)BVBBV045148777 |
dewey-full | 621.3 |
dewey-hundreds | 600 - Technology (Applied sciences) |
dewey-ones | 621 - Applied physics |
dewey-raw | 621.3 |
dewey-search | 621.3 |
dewey-sort | 3621.3 |
dewey-tens | 620 - Engineering and allied operations |
discipline | Elektrotechnik / Elektronik / Nachrichtentechnik |
doi_str_mv | 10.1007/978-1-4471-3748-1 |
format | Electronic eBook |
id | DE-604.BV045148777 |
illustrated | Not Illustrated |
indexdate | 2024-07-10T08:10:02Z |
institution | BVB |
isbn | 9781447137481 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-030538476 |
oclc_num | 1184488063 |
open_access_boolean | |
owner | DE-573 DE-634 |
owner_facet | DE-573 DE-634 |
physical | 1 Online-Ressource (XXI, 488 p) |
psigel | ZDB-2-ENG ZDB-2-ENG_2000/2004 ZDB-2-ENG ZDB-2-ENG_2000/2004 ZDB-2-ENG ZDB-2-ENG_Archiv |
publishDate | 2003 |
publishDateSearch | 2003 |
publishDateSort | 2003 |
publisher | Springer London |
record_format | marc |
series2 | Communications and Control Engineering |
spelling | Vidyasagar, M. Verfasser aut Learning and Generalisation With Applications to Neural Networks by M. Vidyasagar London Springer London 2003 1 Online-Ressource (XXI, 488 p) txt rdacontent c rdamedia cr rdacarrier Communications and Control Engineering Learning and Generalization provides a formal mathematical theory for addressing intuitive questions such as: • How does a machine learn a new concept on the basis of examples? • How can a neural network, after sufficient training, correctly predict the outcome of a previously unseen input? • How much training is required to achieve a specified level of accuracy in the prediction? • How can one identify the dynamical behaviour of a nonlinear control system by observing its input-output behaviour over a finite interval of time? In its successful first edition, A Theory of Learning and Generalization was the first book to treat the problem of machine learning in conjunction with the theory of empirical processes, the latter being a well-established branch of probability theory. The treatment of both topics side-by-side leads to new insights, as well as to new results in both topics. This second edition extends and improves upon this material, covering new areas including: • Support vector machines. • Fat-shattering dimensions and applications to neural network learning. • Learning with dependent samples generated by a beta-mixing process. • Connections between system identification and learning theory. • Probabilistic solution of 'intractable problems' in robust control and matrix theory using randomized algorithm. Reflecting advancements in the field, solutions to some of the open problems posed in the first edition are presented, while new open problems have been added. Learning and Generalization (second edition) is essential reading for control and system theorists, neural network researchers, theoretical computer scientists and probabilist Engineering Electrical Engineering Control Systems Theory, Control Probability Theory and Stochastic Processes Group Theory and Generalizations Computer Communication Networks Computer communication systems Group theory System theory Probabilities Control engineering Electrical engineering Mathematische Lerntheorie (DE-588)4169103-9 gnd rswk-swf Mathematische Lerntheorie (DE-588)4169103-9 s 1\p DE-604 Erscheint auch als Druck-Ausgabe 9781849968676 https://doi.org/10.1007/978-1-4471-3748-1 Verlag URL des Erstveröffentlichers Volltext 1\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk |
spellingShingle | Vidyasagar, M. Learning and Generalisation With Applications to Neural Networks Engineering Electrical Engineering Control Systems Theory, Control Probability Theory and Stochastic Processes Group Theory and Generalizations Computer Communication Networks Computer communication systems Group theory System theory Probabilities Control engineering Electrical engineering Mathematische Lerntheorie (DE-588)4169103-9 gnd |
subject_GND | (DE-588)4169103-9 |
title | Learning and Generalisation With Applications to Neural Networks |
title_auth | Learning and Generalisation With Applications to Neural Networks |
title_exact_search | Learning and Generalisation With Applications to Neural Networks |
title_full | Learning and Generalisation With Applications to Neural Networks by M. Vidyasagar |
title_fullStr | Learning and Generalisation With Applications to Neural Networks by M. Vidyasagar |
title_full_unstemmed | Learning and Generalisation With Applications to Neural Networks by M. Vidyasagar |
title_short | Learning and Generalisation |
title_sort | learning and generalisation with applications to neural networks |
title_sub | With Applications to Neural Networks |
topic | Engineering Electrical Engineering Control Systems Theory, Control Probability Theory and Stochastic Processes Group Theory and Generalizations Computer Communication Networks Computer communication systems Group theory System theory Probabilities Control engineering Electrical engineering Mathematische Lerntheorie (DE-588)4169103-9 gnd |
topic_facet | Engineering Electrical Engineering Control Systems Theory, Control Probability Theory and Stochastic Processes Group Theory and Generalizations Computer Communication Networks Computer communication systems Group theory System theory Probabilities Control engineering Electrical engineering Mathematische Lerntheorie |
url | https://doi.org/10.1007/978-1-4471-3748-1 |
work_keys_str_mv | AT vidyasagarm learningandgeneralisationwithapplicationstoneuralnetworks |