Analogue imprecision in MLP training



Bibliographic details
Main author: Edwards, Peter J. (Peter John)
Other authors: Murray, Alan F.
Format: Electronic e-book
Language: English
Published: Singapore ; River Edge, NJ : World Scientific, ©1996.
Series: Progress in neural processing ; 4.
Subjects:
Online access: Full text
Summary: Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance. The aim of the book is to present a study of how including an imprecision model in a learning scheme as a "fault tolerance hint" can aid understanding of the accuracy and precision requirements of a particular implementation. In addition, the study shows how such a scheme can give rise to significant performance enhancement.
Description: 1 online resource (xi, 178 pages) : illustrations
Bibliography: Includes bibliographical references (pages 165-172) and index.
ISBN: 9789812830012
ISBN: 9812830014

No print copy is available.
