Parallel implementations of backpropagation neural networks on transputers: a study of training set parallelism
This book presents a systematic approach to parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training set parallelism. Using systematic analysis, a theoretical model has been developed for the parallel implementation....
Saved in:

| Main Author: | Saratchandran, P. |
| Format: | Electronic eBook |
| Language: | English |
| Published: | Singapore : World Scientific Pub. Co., c1996 |
| Series: | Progress in neural processing ; 3 |
| Subjects: | Parallel processing (Electronic computers); Neural networks (Computer science); Back propagation (Artificial intelligence); Transputers |
| Online Access: | FHN01 Full text |
| Summary: | This book presents a systematic approach to the parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training set parallelism. Through systematic analysis, a theoretical model of the parallel implementation has been developed. The model is used to find the optimal mapping that minimizes training time for large backpropagation neural networks, and it has been validated experimentally on several well-known benchmark problems. The use of genetic algorithms for optimizing the performance of the parallel implementations is described, and guidelines for efficient parallel implementations are highlighted. (A brief illustrative sketch of training set parallelism follows this record description.) |
| Physical Description: | xviii, 202 p. : ill. |
| ISBN: | 9789812814968 |
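The book's central technique, training set parallelism, splits the training patterns across processors so that each processor runs backpropagation on its own subset and the partial gradients are combined into a single batch-mode weight update. The sketch below is a minimal illustration of that idea; the single-layer sigmoid network, the NumPy code, and the sequential simulation of the processor array are assumptions made for illustration here and do not reproduce the book's actual transputer (Occam) implementation.

```python
# Minimal sketch of training set parallelism for batch-mode backpropagation:
# partition the patterns across "processors", accumulate each processor's
# partial gradient, then apply one combined weight update.
# Network shape and names are illustrative, not taken from the book.
import numpy as np

def forward(W, x):
    """Single-layer sigmoid network: y = sigmoid(W @ x)."""
    return 1.0 / (1.0 + np.exp(-W @ x))

def local_gradient(W, X_part, T_part):
    """Backprop gradient of squared error over one processor's data subset."""
    G = np.zeros_like(W)
    for x, t in zip(X_part, T_part):
        y = forward(W, x)
        delta = (y - t) * y * (1.0 - y)   # output-layer error term
        G += np.outer(delta, x)           # dE/dW contribution of this pattern
    return G

def train_epoch(W, X, T, num_procs, lr=0.1):
    """One batch-mode epoch with the training set split across num_procs parts."""
    X_parts = np.array_split(X, num_procs)
    T_parts = np.array_split(T, num_procs)
    # On a transputer array each part would run on its own processor;
    # here the "processors" are simulated one after another.
    G = sum(local_gradient(W, Xp, Tp) for Xp, Tp in zip(X_parts, T_parts))
    return W - lr * G

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 4))                       # 64 patterns, 4 inputs
    T = rng.integers(0, 2, size=(64, 1)).astype(float) # binary targets
    W = rng.normal(scale=0.1, size=(1, 4))
    for _ in range(100):
        W = train_epoch(W, X, T, num_procs=4)
```

In batch mode the summed gradient is identical to the one computed on a single processor, so training set parallelism changes where the work is done rather than the learning trajectory; the book's analysis concerns how to map pattern subsets onto a transputer array so as to minimize training time.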
Internal format (MARC)
LEADER 00000nmm a2200000zcb4500
001 BV044636440
003 DE-604
005 00000000000000.0
007 cr|uuu---uuuuu
008 171120s1996 |||| o||u| ||||||eng d
020 __ |a 9789812814968 |9 978-981-281-496-8
024 7_ |a 10.1142/3094 |2 doi
035 __ |a (ZDB-124-WOP)00004863
035 __ |a (OCoLC)1012651196
035 __ |a (DE-599)BVBBV044636440
040 __ |a DE-604 |b ger |e aacr
041 0_ |a eng
049 __ |a DE-92
082 0_ |a 004.35 |2 22
084 __ |a ST 300 |0 (DE-625)143650: |2 rvk
100 1_ |a Saratchandran, P. |e Verfasser |4 aut
245 10 |a Parallel implementations of backpropagation neural networks on transputers |b a study of training set parallelism |c P. Saratchandran, N. Sundararajan, Shou King Foo
264 _1 |a Singapore |b World Scientific Pub. Co. |c c1996
300 __ |a xviii, 202 p. |b ill
336 __ |b txt |2 rdacontent
337 __ |b c |2 rdamedia
338 __ |b cr |2 rdacarrier
490 0_ |a Progress in neural processing |v 3
520 __ |a This book presents a systematic approach to parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training set parallelism. Using systematic analysis, a theoretical model has been developed for the parallel implementation. The model is used to find the optimal mapping to minimize the training time for large backpropagation neural networks. The model has been validated experimentally on several well known benchmark problems. Use of genetic algorithms for optimizing the performance of the parallel implementations is described. Guidelines for efficient parallel implementations are highlighted
650 _4 |a Parallel processing (Electronic computers)
650 _4 |a Neural networks (Computer science)
650 _4 |a Back propagation (Artificial intelligence)
650 _4 |a Transputers
700 1_ |a Sundararajan, Narasimhan |e Sonstige |4 oth
700 1_ |a Foo, Shou King |e Sonstige |4 oth
776 08 |i Erscheint auch als |n Druck-Ausgabe |z 9789810226541
776 08 |i Erscheint auch als |n Druck-Ausgabe |z 9810226543
856 40 |u http://www.worldscientific.com/worldscibooks/10.1142/3094#t=toc |x Verlag |z URL des Erstveroeffentlichers |3 Volltext
912 __ |a ZDB-124-WOP
999 __ |a oai:aleph.bib-bvb.de:BVB01-030034412
966 e_ |u http://www.worldscientific.com/worldscibooks/10.1142/3094#t=toc |l FHN01 |p ZDB-124-WOP |q FHN_PDA_WOP |x Verlag |3 Volltext