Parallel implementations of backpropagation neural networks on transputers: a study of training set parallelism
Bibliographic Details
Main Author: Saratchandran, P. (author)
Format: Electronic eBook
Language: English
Published: Singapore : World Scientific, c1996
Series: Progress in neural processing ; 3
Subjects:
Online Access: Full text
Notes: Includes bibliographical references (p. 189-199) and index
Contents: 1. Introduction. 1.1. Multilayer feedforward neural networks. 1.2. The basic BP algorithm. 1.3. Parallelism in the BP algorithm. 1.4. Some parallel implementations -- 2. Transputer topologies for parallel implementation. 2.1. The transputer. 2.2. Topologies. 2.3. Topology chosen in this study. 2.4. Software used. 2.5. Performance metrics and benchmark problems -- 3. Development of a theoretical model for training set parallelism in a homogeneous array of transputers. 3.1. Time components of parallel transputer implementation. 3.2. Timing aspects of parallelizing the backpropagation algorithm. 3.3. Time components for the parallelized backpropagation algorithm. 3.4. Validation of the Tepoch model --
4. Equal distribution of patterns amongst a homogeneous array of transputers. 4.1. Analytical model for time per epoch. 4.2. Validation of the model for equal distribution. 4.3. Optimal number of transputers needed for the case of equal distribution. 4.4. Cost-benefit analysis of adding additional processors -- 5. Optimization model for unequal distribution of patterns in a homogeneous array of transputers. 5.1. Constraints for optimization. 5.2. Optimal pattern distribution. 5.3. Validation of the pattern optimization model. 5.4. Experimental results for benchmark problems. 5.5. Locating surplus processors and finding the optimal number of processors needed to obtain minimum time per epoch -- 6. Optimization model for unequal distribution of patterns in a heterogeneous array of transputers. 6.1. Experimental results for benchmark problems. 6.2. Statistical verification of the optimal epoch time. 6.3. Discussion --
7. Pattern allocation schemes using genetic algorithm. 7.1. Optimization algorithm and computational complexity. 7.2. Solution time for optimal pattern. 7.3. Sub-optimal method: heuristic distribution. 7.4. Genetic algorithm for pattern allocation. 7.5. Comparison between genetic algorithm and MIP. 7.6. Inclusion of 'a priori' information. 7.7. GA with the proposed stopping criterion versus MIP -- A. Comparison between pipelined ring topology and ring topology. A.1. Theoretical optimal epoch time for pipelined ring topology. A.2. Theoretical optimal epoch time for ring topology. A.3. Comparison between pipelined ring topology and ring topology -- B. A sample parallel C program -- C. The branch and bound method for solving mixed integer programming problems
Summary: This book presents a systematic approach to the parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training set parallelism. Through systematic analysis, a theoretical model is developed for the parallel implementation. The model is used to find the optimal mapping that minimizes the training time for large backpropagation neural networks, and it has been validated experimentally on several well-known benchmark problems. The use of genetic algorithms for optimizing the performance of the parallel implementations is described, and guidelines for efficient parallel implementations are highlighted.
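To make the idea of training set parallelism concrete, the following is a minimal C sketch; it is not the book's transputer code nor its sample parallel C program from Appendix B. The patterns of a training batch are split across a small number of workers, each worker accumulates the backpropagation gradient for its share of the patterns, and the partial gradients are summed before a single weight update per epoch. The 2-2-1 network, the XOR data, the learning rate, and the use of sequential loops in place of transputer links are all illustrative assumptions.

    /* Minimal sketch of training set parallelism for batch backpropagation.
     * Illustrative assumptions: a 2-2-1 network on the XOR problem, two
     * "workers" simulated by sequential loops instead of transputers, and
     * a fixed learning rate. Compile with: cc bp_sketch.c -lm */
    #include <stdio.h>
    #include <math.h>

    #define NIN 2
    #define NHID 2
    #define NPAT 4
    #define NWORKERS 2   /* the training patterns are split across these */

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    /* Accumulate one pattern's backpropagation gradient into g1, g2. */
    static void grad_one(const double x[NIN], double t,
                         double w1[NHID][NIN + 1], double w2[NHID + 1],
                         double g1[NHID][NIN + 1], double g2[NHID + 1])
    {
        double h[NHID];
        for (int j = 0; j < NHID; j++) {
            double s = w1[j][NIN];                       /* bias weight */
            for (int i = 0; i < NIN; i++) s += w1[j][i] * x[i];
            h[j] = sigmoid(s);
        }
        double s = w2[NHID];
        for (int j = 0; j < NHID; j++) s += w2[j] * h[j];
        double y = sigmoid(s);
        double dy = (y - t) * y * (1.0 - y);             /* output delta */
        for (int j = 0; j < NHID; j++) {
            g2[j] += dy * h[j];
            double dh = dy * w2[j] * h[j] * (1.0 - h[j]); /* hidden delta */
            for (int i = 0; i < NIN; i++) g1[j][i] += dh * x[i];
            g1[j][NIN] += dh;
        }
        g2[NHID] += dy;
    }

    int main(void)
    {
        const double X[NPAT][NIN] = {{0,0},{0,1},{1,0},{1,1}};
        const double T[NPAT] = {0, 1, 1, 0};             /* XOR targets */
        double w1[NHID][NIN + 1] = {{0.5, -0.4, 0.1}, {-0.3, 0.6, -0.2}};
        double w2[NHID + 1] = {0.4, -0.5, 0.2};
        const double eta = 0.5;

        for (int epoch = 0; epoch < 20000; epoch++) {
            double G1[NHID][NIN + 1] = {{0}}, G2[NHID + 1] = {0};
            /* Training set parallelism: each worker takes a block of
             * patterns and computes a partial gradient. On a transputer
             * array these blocks would run concurrently and the partial
             * sums would be exchanged over the links before the update. */
            for (int w = 0; w < NWORKERS; w++) {
                double g1[NHID][NIN + 1] = {{0}}, g2[NHID + 1] = {0};
                int lo = w * NPAT / NWORKERS, hi = (w + 1) * NPAT / NWORKERS;
                for (int p = lo; p < hi; p++)
                    grad_one(X[p], T[p], w1, w2, g1, g2);
                for (int j = 0; j < NHID; j++)
                    for (int i = 0; i <= NIN; i++) G1[j][i] += g1[j][i];
                for (int j = 0; j <= NHID; j++) G2[j] += g2[j];
            }
            /* One batch weight update per epoch from the summed gradient. */
            for (int j = 0; j < NHID; j++)
                for (int i = 0; i <= NIN; i++) w1[j][i] -= eta * G1[j][i];
            for (int j = 0; j <= NHID; j++) w2[j] -= eta * G2[j];
        }

        for (int p = 0; p < NPAT; p++) {                 /* report outputs */
            double h[NHID], s;
            for (int j = 0; j < NHID; j++) {
                s = w1[j][NIN];
                for (int i = 0; i < NIN; i++) s += w1[j][i] * X[p][i];
                h[j] = sigmoid(s);
            }
            s = w2[NHID];
            for (int j = 0; j < NHID; j++) s += w2[j] * h[j];
            printf("(%g,%g) -> %.3f (target %g)\n",
                   X[p][0], X[p][1], sigmoid(s), T[p]);
        }
        return 0;
    }

Because the gradient contributions of individual patterns add linearly, the summed result is identical to sequential batch backpropagation; the book's analysis concerns how to distribute the patterns over the transputer array so that this per-epoch computation and the associated link communication take the least time.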
Physical Description: 1 online resource (xviii, 202 p.)
ISBN: 9789812814968
9812814965
9810226543
9789810226541
