Neural network learning: theoretical foundations

Bibliographic Details
First author: Anthony, Martin, 1967- (author)
Format: Electronic e-book
Language: English
Published: Cambridge: Cambridge University Press, 1999
Subjects:
Online access: BSB01, FHN01, UER01, UPA01 (full text)
Summary: This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik–Chervonenkis dimension and estimates of this dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik–Chervonenkis dimension in large margin classification and in real prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.
Description: Title from publisher's bibliographic system (viewed on 05 Oct 2015)
Description: 1 online resource (xiv, 389 pages)
ISBN: 9780511624216
DOI: 10.1017/CBO9780511624216
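
For readers unfamiliar with the terminology in the summary above, the Vapnik–Chervonenkis dimension and the classification margin it refers to are the standard quantities sketched below; this is an illustrative definition under common conventions, not text taken from the book.

% VC dimension of a class H of {0,1}-valued functions on a domain X:
% the largest m such that some set of m points is shattered by H.
\[
  \mathrm{VCdim}(H) = \max\bigl\{\, m : \exists\, x_1,\dots,x_m \in X \text{ with }
  \bigl|\{(h(x_1),\dots,h(x_m)) : h \in H\}\bigr| = 2^m \,\bigr\}
\]
% Margin of a real-output classifier f on a labelled example (x, y) with y in {-1, +1};
% classification with a 'large margin' asks that y f(x) exceed some threshold gamma > 0.
\[
  \mathrm{margin}_f(x, y) = y\, f(x)
\]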

No print copy is available.
