Parallel implementations of backpropagation neural networks on transputers : a study of training set parallelism
This book presents a systematic approach to parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training set parallelism. Using systematic analysis, a theoretical model has been developed for the parallel implementation....
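The summary describes the book's core scheme, training set parallelism: the training patterns are partitioned across processors, each processor runs the forward and backward passes for its share, and the partial gradients are combined for one synchronized weight update per epoch. As a point of reference only, here is a minimal NumPy sketch of that scheme for a one-hidden-layer sigmoid network; the sequential worker loop stands in for the book's transputer array, and all names and numbers are illustrative, not taken from the book.

```python
# A minimal, illustrative sketch of training set parallelism (assumed names;
# not the book's transputer code): the training patterns are split across
# workers, each worker computes the gradient for its share, and the partial
# gradients are summed so the weights are updated once per epoch, exactly as
# in batch backpropagation.
import numpy as np

def chunk_gradient(W1, W2, X, T):
    """Squared-error gradient for one chunk of patterns (sigmoid units)."""
    H = 1.0 / (1.0 + np.exp(-X @ W1))   # hidden activations
    Y = 1.0 / (1.0 + np.exp(-H @ W2))   # network outputs
    dY = (Y - T) * Y * (1.0 - Y)        # output-layer deltas
    dH = (dY @ W2.T) * H * (1.0 - H)    # hidden-layer deltas
    return X.T @ dH, H.T @ dY           # gradients w.r.t. W1, W2

def train_epoch(W1, W2, X, T, n_workers=4, lr=0.1):
    # Equal distribution of patterns (the setting of chapter 4);
    # each loop iteration plays the role of one transputer.
    g1, g2 = np.zeros_like(W1), np.zeros_like(W2)
    for Xc, Tc in zip(np.array_split(X, n_workers),
                      np.array_split(T, n_workers)):
        d1, d2 = chunk_gradient(W1, W2, Xc, Tc)
        g1 += d1
        g2 += d2                        # combine partial gradients
    return W1 - lr * g1, W2 - lr * g2   # one synchronized update per epoch

rng = np.random.default_rng(0)
X, T = rng.standard_normal((64, 8)), rng.random((64, 2))
W1 = 0.1 * rng.standard_normal((8, 5))
W2 = 0.1 * rng.standard_normal((5, 2))
for _ in range(10):
    W1, W2 = train_epoch(W1, W2, X, T)
```

Because the combined gradient equals the full-batch gradient, the parallel scheme follows the same weight trajectory as sequential batch BP; what the book optimizes is how the patterns are shared out so that no processor idles.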
Saved in:
Main Author: | Saratchandran, P. |
---|---|
Other Authors: | Sundararajan, N.; Foo, Shou King |
Format: | Electronic eBook |
Language: | English |
Published: | Singapore ; River Edge, NJ : World Scientific, ©1996. |
Series: | Progress in neural processing ; 3. |
Subjects: | Parallel processing (Electronic computers); Neural networks (Computer science); Back propagation (Artificial intelligence); Transputers |
Online Access: | Full text |
Summary: | This book presents a systematic approach to parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training set parallelism. Using systematic analysis, a theoretical model has been developed for the parallel implementation. The model is used to find the optimal mapping that minimizes the training time for large backpropagation neural networks, and it has been validated experimentally on several well-known benchmark problems. The use of genetic algorithms for optimizing the performance of the parallel implementations is described, and guidelines for efficient parallel implementations are highlighted. |
Description: | 1 online resource (xviii, 202 pages) |
Bibliography: | Includes bibliographical references (pages 189-199) and index. |
ISBN: | 9789812814968; 9812814965 |
Internal format
MARC
LEADER | 00000cam a2200000 a 4500 | ||
---|---|---|---|
001 | ZDB-4-EBU-ocn828423959 | ||
003 | OCoLC | ||
005 | 20241004212047.0 | ||
006 | m o d | ||
007 | cr cnu---unuuu | ||
008 | 130225s1996 si ob 001 0 eng d | ||
040 | |a N$T |b eng |e pn |c N$T |d IDEBK |d E7B |d I9W |d OCLCF |d YDXCP |d OCLCQ |d AGLDB |d OCLCQ |d STF |d UKAHL |d LEAUB |d OCLCQ |d OCLCO |d OCLCQ |d OCLCO |d SXB |d OCLCQ | ||
019 | |a 1086410831 | ||
020 | |a 9789812814968 |q (electronic bk.) | ||
020 | |a 9812814965 |q (electronic bk.) | ||
020 | |z 9810226543 | ||
020 | |z 9789810226541 | ||
035 | |a (OCoLC)828423959 |z (OCoLC)1086410831 | ||
050 | 4 | |a QA76.58 |b .P3773 1996eb | |
072 | 7 | |a COM |x 005030 |2 bisacsh | |
072 | 7 | |a COM |x 004000 |2 bisacsh | |
082 | 7 | |a 006.3 |2 22 | |
084 | |a ST 300 |2 rvk | ||
084 | |a DAT 717d |2 stub | ||
084 | |a DAT 217d |2 stub | ||
049 | |a MAIN | ||
100 | 1 | |a Saratchandran, P. | |
245 | 1 | 0 | |a Parallel implementations of backpropagation neural networks on transputers : |b a study of training set parallelism / |c editors, P. Saratchandran, N. Sundararajan, Shou King Foo. |
260 | |a Singapore ; |a River Edge, NJ : |b World Scientific, |c ©1996. | ||
300 | |a 1 online resource (xviii, 202 pages) | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
490 | 1 | |a Progress in neural processing ; |v 3 | |
504 | |a Includes bibliographical references (pages 189-199) and index. | ||
588 | 0 | |a Print version record. | |
505 | 0 | |a 1. Introduction. 1.1. Multilayer feedforward neural networks. 1.2. The basic BP algorithm. 1.3. Parallelism in the BP algorithm. 1.4. Some parallel implementations -- 2. Transputer topologies for parallel implementation. 2.1. The transputer. 2.2. Topologies. 2.3. Topology chosen in this study. 2.4. Software used. 2.5. Performance metrics and benchmark problems -- 3. Development of a theoretical model for training set parallelism in a homogeneous array of transputers. 3.1. Time components of parallel transputer implementation. 3.2. Timing aspects of parallelizing the backpropagation algorithm. 3.3. Time components for the parallelized backpropagation algorithm. 3.4. Validation of the Tepoch model -- 4. Equal distribution of patterns amongst a homogeneous array of transputers. 4.1. Analytical model for time per epoch. 4.2. Validation of the model for equal distribution. 4.3. Optimal number of transputers needed for the case of equal distribution. 4.4. Cost-benefit analysis of adding additional processors -- 5. Optimization model for unequal distribution of patterns in a homogeneous array of transputers. 5.1. Constraints for optimization. 5.2. Optimal pattern distribution. 5.3. Validation of the pattern optimization model. 5.4. Experimental results for benchmark problems. 5.5. Locating surplus processors and finding the optimal number of processors needed to obtain minimum time per epoch -- 6. Optimization model for unequal distribution of patterns in a heterogeneous array of transputers. 6.1. Experimental results for benchmark problems. 6.2. Statistical verification of the optimal epoch time. 6.3. Discussion -- 7. Pattern allocation schemes using genetic algorithm. 7.1. Optimization algorithm and computational complexity. 7.2. Solution time for optimal pattern. 7.3. Sub-optimal method: Heuristic distribution. 7.4. Genetic algorithm for pattern allocation. 7.5. Comparison between genetic algorithm and MIP. 7.6. Inclusion of 'A Priori' information. 7.7. G.A. with the proposed stopping criterion versus MIP -- A. Comparison between pipelined ring topology and ring topology. A.1. Theoretical optimal epoch time for pipelined ring topology. A.2. Theoretical optimal epoch time for ring topology. A.3. Comparison between pipelined ring topology and ring topology -- B. A sample parallel C program -- C. The branch and bound method for solving mixed integer programming problems. | |
520 | |a This book presents a systematic approach to parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training set parallelism. Using systematic analysis, a theoretical model has been developed for the parallel implementation. The model is used to find the optimal mapping to minimize the training time for large backpropagation neural networks. The model has been validated experimentally on several well known benchmark problems. Use of genetic algorithms for optimizing the performance of the parallel implementations is described. Guidelines for efficient parallel implementations are highlighted. | ||
650 | 0 | |a Parallel processing (Electronic computers) |0 http://id.loc.gov/authorities/subjects/sh85097826 | |
650 | 0 | |a Neural networks (Computer science) |0 http://id.loc.gov/authorities/subjects/sh90001937 | |
650 | 0 | |a Back propagation (Artificial intelligence) |0 http://id.loc.gov/authorities/subjects/sh94008320 | |
650 | 0 | |a Transputers. |0 http://id.loc.gov/authorities/subjects/sh89002015 | |
650 | 2 | |a Neural Networks, Computer |0 https://id.nlm.nih.gov/mesh/D016571 | |
650 | 6 | |a Parallélisme (Informatique) | |
650 | 6 | |a Réseaux neuronaux (Informatique) | |
650 | 6 | |a Rétropropagation (Intelligence artificielle) | |
650 | 6 | |a Transputers. | |
650 | 7 | |a COMPUTERS |x Enterprise Applications |x Business Intelligence Tools. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Intelligence (AI) & Semantics. |2 bisacsh | |
650 | 7 | |a Back propagation (Artificial intelligence) |2 fast | |
650 | 7 | |a Neural networks (Computer science) |2 fast | |
650 | 7 | |a Parallel processing (Electronic computers) |2 fast | |
650 | 7 | |a Transputers |2 fast | |
700 | 1 | |a Sundararajan, N. | |
700 | 1 | |a Foo, Shou King. | |
776 | 0 | 8 | |i Print version: |a Saratchandran, P. |t Parallel implementations of backpropagation neural networks on transputers. |d Singapore ; River Edge, NJ : World Scientific, ©1996 |z 9810226543 |w (DLC) 96012119 |w (OCoLC)34477043 |
830 | 0 | |a Progress in neural processing ; |v 3. |0 http://id.loc.gov/authorities/names/no95053043 | |
856 | 4 | 0 | |l FWS01 |p ZDB-4-EBU |q FWS_PDA_EBU |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=532589 |3 Volltext |
938 | |a Askews and Holts Library Services |b ASKH |n AH24685798 | ||
938 | |a ebrary |b EBRY |n ebr10701203 | ||
938 | |a EBSCOhost |b EBSC |n 532589 | ||
938 | |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis25679527 | ||
938 | |a YBP Library Services |b YANK |n 10252925 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBU | ||
049 | |a DE-863 |
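The contents (field 505 above) revolve around one allocation question: how many training patterns each transputer should receive so that the time per epoch is minimized. Under the simplifying assumption that processor i takes a fixed time t_i per pattern, the epoch is governed by the most loaded processor, so the problem is roughly: minimize max_i n_i·t_i subject to Σ_i n_i = N. The book develops a detailed timing model and solves the real problem with mixed integer programming and a genetic algorithm; the following greedy Python sketch is only a hedged illustration of that simplified objective, with assumed per-pattern costs.

```python
# Illustrative sketch of unequal pattern distribution over a heterogeneous
# processor array (a simplification of the book's MIP and genetic-algorithm
# formulations): choose counts n_i with sum(n_i) == N minimizing the epoch
# time max_i n_i * t_i, where t_i is an assumed per-pattern cost.
import heapq

def allocate(N, t):
    """Give each successive pattern to the processor whose finishing time
    stays lowest; with identical per-pattern costs this reproduces the
    proportional split n_i ~ 1/t_i."""
    n = [0] * len(t)
    heap = [(ti, i) for i, ti in enumerate(t)]  # load if given one pattern
    heapq.heapify(heap)
    for _ in range(N):
        load, i = heapq.heappop(heap)
        n[i] += 1
        heapq.heappush(heap, (load + t[i], i))
    return n, max(ni * ti for ni, ti in zip(n, t))

# Assumed per-pattern costs: two fast and two slower processors.
n, epoch_time = allocate(1000, [1.0, 1.0, 1.5, 2.0])
print(n, epoch_time)  # faster processors receive proportionally more patterns
```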
Record in the search index
DE-BY-FWS_katkey | ZDB-4-EBU-ocn828423959 |
---|---|
_version_ | 1816796907745312768 |
adam_text | |
any_adam_object | |
author | Saratchandran, P. |
author2 | Sundararajan, N. Foo, Shou King |
author2_role | |
author2_variant | n s ns s k f sk skf |
author_facet | Saratchandran, P. Sundararajan, N. Foo, Shou King |
author_role | |
author_sort | Saratchandran, P. |
author_variant | p s ps |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.58 .P3773 1996eb |
callnumber-search | QA76.58 .P3773 1996eb |
callnumber-sort | QA 276.58 P3773 41996EB |
callnumber-subject | QA - Mathematics |
classification_rvk | ST 300 |
classification_tum | DAT 717d DAT 217d |
collection | ZDB-4-EBU |
contents | 1. Introduction. 1.1. Multilayer feedforward neural networks. 1.2. The basic BP algorithm. 1.3. Parallelism in the BP algorithm. 1.4. Some parallel implementations -- 2. Transputer topologies for parallel implementation. 2.1. The transputer. 2.2. Topologies. 2.3. Topology chosen in this study. 2.4. Software used. 2.5. Performance metrics and benchmark problems -- 3. Development of a theoretical model for training set parallelism in a homogeneous array of transputers. 3.1. Time components of parallel transputer implementation. 3.2. Timing aspects of parallelizing the backpropagation algorithm. 3.3. Time components for the parallelized backpropagation algorithm. 3.4. Validation of the Tepoch model -- 4. Equal distribution of patterns amongst a homogeneous array of transputers. 4.1. Analytical model for time per epoch. 4.2. Validation of the model for equal distribution. 4.3. Optimal number of transputers needed for the case of equal distribution. 4.4. Cost-benefit analysis of adding additional processors -- 5. Optimization model for unequal distribution of patterns in a homogeneous array of transputers. 5.1. Constraints for optimization. 5.2. Optimal pattern distribution. 5.3. Validation of the pattern optimization model. 5.4. Experimental results for benchmark problems. 5.5. Locating surplus processors and finding the optimal number of processors needed to obtain minimum time per epoch -- 6. Optimization model for unequal distribution of patterns in a heterogeneous array of transputers. 6.1. Experimental results for benchmark problems. 6.2. Statistical verification of the optimal epoch time. 6.3. Discussion -- 7. Pattern allocation schemes using genetic algorithm. 7.1. Optimization algorithm and computational complexity. 7.2. Solution time for optimal pattern. 7.3. Sub-optimal method: Heuristic distribution. 7.4. Genetic algorithm for pattern allocation. 7.5. Comparison between genetic algorithm and MIP. 7.6. Inclusion of 'A Priori' information. 7.7. G.A. with the proposed stopping criterion versus MIP -- A. Comparison between pipelined ring topology and ring topology. A.1. Theoretical optimal epoch time for pipelined ring topology. A.2. Theoretical optimal epoch time for ring topology. A.3. Comparison between pipelined ring topology and ring topology -- B. A sample parallel C program -- C. The branch and bound method for solving mixed integer programming problems. |
ctrlnum | (OCoLC)828423959 |
dewey-full | 006.3 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3 |
dewey-search | 006.3 |
dewey-sort | 16.3 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
id | ZDB-4-EBU-ocn828423959 |
illustrated | Not Illustrated |
indexdate | 2024-11-26T14:49:08Z |
institution | BVB |
isbn | 9789812814968 9812814965 |
language | English |
oclc_num | 828423959 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource (xviii, 202 pages) |
psigel | ZDB-4-EBU |
publishDate | 1996 |
publishDateSearch | 1996 |
publishDateSort | 1996 |
publisher | World Scientific, |
record_format | marc |
series | Progress in neural processing ; |
series2 | Progress in neural processing ; |
subject_GND | http://id.loc.gov/authorities/subjects/sh85097826 http://id.loc.gov/authorities/subjects/sh90001937 http://id.loc.gov/authorities/subjects/sh94008320 http://id.loc.gov/authorities/subjects/sh89002015 https://id.nlm.nih.gov/mesh/D016571 |
title | Parallel implementations of backpropagation neural networks on transputers : a study of training set parallelism / |
title_auth | Parallel implementations of backpropagation neural networks on transputers : a study of training set parallelism / |
title_exact_search | Parallel implementations of backpropagation neural networks on transputers : a study of training set parallelism / |
title_full | Parallel implementations of backpropagation neural networks on transputers : a study of training set parallelism / editors, P. Saratchandran, N. Sundararajan, Shou King Foo. |
title_fullStr | Parallel implementations of backpropagation neural networks on transputers : a study of training set parallelism / editors, P. Saratchandran, N. Sundararajan, Shou King Foo. |
title_full_unstemmed | Parallel implementations of backpropagation neural networks on transputers : a study of training set parallelism / editors, P. Saratchandran, N. Sundararajan, Shou King Foo. |
title_short | Parallel implementations of backpropagation neural networks on transputers : |
title_sort | parallel implementations of backpropagation neural networks on transputers a study of training set parallelism |
title_sub | a study of training set parallelism / |
topic | Parallel processing (Electronic computers) http://id.loc.gov/authorities/subjects/sh85097826 Neural networks (Computer science) http://id.loc.gov/authorities/subjects/sh90001937 Back propagation (Artificial intelligence) http://id.loc.gov/authorities/subjects/sh94008320 Transputers. http://id.loc.gov/authorities/subjects/sh89002015 Neural Networks, Computer https://id.nlm.nih.gov/mesh/D016571 Parallélisme (Informatique) Réseaux neuronaux (Informatique) Rétropropagation (Intelligence artificielle) Transputers. COMPUTERS Enterprise Applications Business Intelligence Tools. bisacsh COMPUTERS Intelligence (AI) & Semantics. bisacsh Back propagation (Artificial intelligence) fast Neural networks (Computer science) fast Parallel processing (Electronic computers) fast Transputers fast |
topic_facet | Parallel processing (Electronic computers) Neural networks (Computer science) Back propagation (Artificial intelligence) Transputers. Neural Networks, Computer Parallélisme (Informatique) Réseaux neuronaux (Informatique) Rétropropagation (Intelligence artificielle) COMPUTERS Enterprise Applications Business Intelligence Tools. COMPUTERS Intelligence (AI) & Semantics. Transputers |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=532589 |
work_keys_str_mv | AT saratchandranp parallelimplementationsofbackpropagationneuralnetworksontransputersastudyoftrainingsetparallelism AT sundararajann parallelimplementationsofbackpropagationneuralnetworksontransputersastudyoftrainingsetparallelism AT fooshouking parallelimplementationsofbackpropagationneuralnetworksontransputersastudyoftrainingsetparallelism |