Analogue imprecision in MLP training
Saved in:

Main Author: Edwards, Peter J. (Peter John)
Other Authors: Murray, Alan F.
Format: Electronic eBook
Language: English
Published: Singapore ; River Edge, NJ : World Scientific, ©1996.
Series: Progress in neural processing ; 4.
Subjects: Neural networks (Computer science); Perceptrons
Online Access: Full text
Summary: Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance. The aim of the book is to present a study of how including an imprecision model in a learning scheme as a "fault tolerance hint" can aid understanding of accuracy and precision requirements for a particular implementation. In addition, the study shows how such a scheme can give rise to significant performance enhancement.
Description: 1 online resource (xi, 178 pages) : illustrations
Bibliography: Includes bibliographical references (pages 165-172) and index.
ISBN: 9789812830012; 9812830014
Internal format
MARC
LEADER 00000cam a2200000 a 4500
001    ZDB-4-EBU-ocn846496133
003    OCoLC
005    20241004212047.0
006    m o d
007    cr cnu---unuuu
008    130603s1996 si a ob 001 0 eng d
040    |a N$T |b eng |e pn |c N$T |d E7B |d IDEBK |d OCLCF |d OCLCQ |d YDXCP |d OCLCQ |d AGLDB |d OCLCQ |d STF |d AU@ |d LEAUB |d OCLCQ |d OCLCO |d OCLCQ |d OCLCO |d OCLCL |d SXB |d OCLCQ
019    |a 1086445377
020    |a 9789812830012 |q (electronic bk.)
020    |a 9812830014 |q (electronic bk.)
020    |z 9810227396
020    |z 9789810227395
035    |a (OCoLC)846496133 |z (OCoLC)1086445377
050 _4 |a QA76.87 |b .E28 1996eb
072 _7 |a COM |x 005030 |2 bisacsh
072 _7 |a COM |x 004000 |2 bisacsh
082 7_ |a 006.3 |2 22
049    |a MAIN
100 1_ |a Edwards, Peter J. |q (Peter John) |1 https://id.oclc.org/worldcat/entity/E39PCjBvTF8wpjGHP7jpvqqcMX |0 http://id.loc.gov/authorities/names/n96044908
245 10 |a Analogue imprecision in MLP training / |c Peter J. Edwards, Alan F. Murray.
260    |a Singapore ; |a River Edge, NJ : |b World Scientific, |c ©1996.
300    |a 1 online resource (xi, 178 pages) : |b illustrations
336    |a text |b txt |2 rdacontent
337    |a computer |b c |2 rdamedia
338    |a online resource |b cr |2 rdacarrier
490 1_ |a Progress in neural processing ; |v 4
504    |a Includes bibliographical references (pages 165-172) and index.
588 0_ |a Print version record.
505 0_ |a 1. Introduction. 1.1. Multi-layer perceptrons. 1.2. Stochastic systems. 1.3. Chapter summary -- 2. Neural network performance metrics. 2.1. Introduction. 2.2. Fault tolerance. 2.3. Generalisation. 2.4. Learning trajectory and speed. 2.5. Chapter summary -- 3. Noise in neural implementations. 3.1. Introduction. 3.2. Implementation errors. 3.3. An implementation error model. 3.4. The mathematical model. 3.5. Chapter summary -- 4. Simulation requirements and environment. 4.1. Introduction. 4.2. Simulation requirements. 4.3. Simulation environment. 4.4. Chapter summary -- 5. Fault tolerance. 5.1. Introduction. 5.2. Test environment. 5.3. Results. 5.4. Chapter summary -- 6. Generalisation ability. 6.1. Introduction. 6.2. Test environment. 6.3. Results. 6.4. Chapter summary -- 7. Learning trajectory and speed. 7.1. Introduction. 7.2. Test aims and method. 7.3. Results. 7.4. Chapter summary -- 8. Penalty terms for fault tolerance. 8.1. Introduction. 8.2. Definition of the penalty terms. 8.3. Learning algorithms. 8.4. Practical issues. 8.5. Chapter summary -- 9. Conclusions. 9.1. Introduction. 9.2. Implications for analogue hardware. 9.3. Synaptic noise as an enhancement scheme. 9.4. Learning hints. 9.5. General conclusions.
520    |a Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance. The aim of the book is to present a study of how including an imprecision model into a learning scheme as a "fault tolerance hint" can aid understanding of accuracy and precision requirements for a particular implementation. In addition the study shows how such a scheme can give rise to significant performance enhancement.
650 _0 |a Neural networks (Computer science) |0 http://id.loc.gov/authorities/subjects/sh90001937
650 _0 |a Perceptrons. |0 http://id.loc.gov/authorities/subjects/sh85099714
650 _6 |a Réseaux neuronaux (Informatique)
650 _6 |a Perceptrons.
650 _7 |a COMPUTERS |x Enterprise Applications |x Business Intelligence Tools. |2 bisacsh
650 _7 |a COMPUTERS |x Intelligence (AI) & Semantics. |2 bisacsh
650 _7 |a Neural networks (Computer science) |2 fast
650 _7 |a Perceptrons |2 fast
700 1_ |a Murray, Alan F.
776 08 |i Print version: |a Edwards, Peter J. (Peter John). |t Analogue imprecision in MLP training. |d Singapore ; River Edge, NJ : World Scientific, ©1996 |z 9810227396 |w (DLC) 96008889 |w (OCoLC)34710796
830 _0 |a Progress in neural processing ; |v 4. |0 http://id.loc.gov/authorities/names/no95053043
856 40 |l FWS01 |p ZDB-4-EBU |q FWS_PDA_EBU |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=564397 |3 Volltext
938    |a ebrary |b EBRY |n ebr10692007
938    |a EBSCOhost |b EBSC |n 564397
938    |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis25679697
938    |a YBP Library Services |b YANK |n 10693694
994    |a 92 |b GEBAY
912    |a ZDB-4-EBU
049    |a DE-863
Record in the search index
DE-BY-FWS_katkey | ZDB-4-EBU-ocn846496133 |
---|---|
_version_ | 1816796909185007616 |
adam_text | |
any_adam_object | |
author | Edwards, Peter J. (Peter John) |
author2 | Murray, Alan F. |
author2_role | |
author2_variant | a f m af afm |
author_GND | http://id.loc.gov/authorities/names/n96044908 |
author_facet | Edwards, Peter J. (Peter John) Murray, Alan F. |
author_role | |
author_sort | Edwards, Peter J. |
author_variant | p j e pj pje |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.87 .E28 1996eb |
callnumber-search | QA76.87 .E28 1996eb |
callnumber-sort | QA 276.87 E28 41996EB |
callnumber-subject | QA - Mathematics |
collection | ZDB-4-EBU |
contents | 1. Introduction. 1.1. Multi-layer perceptrons. 1.2. Stochastic systems. 1.3. Chapter summary -- 2. Neural network performance metrics. 2.1. Introduction. 2.2. Fault tolerance. 2.3. Generalisation. 2.4. Learning trajectory and speed. 2.5. Chapter summary -- 3. Noise in neural implementations. 3.1. Introduction. 3.2. Implementation errors. 3.3. An implementation error model. 3.4. The mathematical model. 3.5. Chapter summary -- 4. Simulation requirements and environment. 4.1. Introduction. 4.2. Simulation requirements. 4.3. Simulation environment. 4.4. Chapter summary -- 5. Fault tolerance. 5.1. Introduction. 5.2. Test environment. 5.3. Results. 5.4. Chapter summary -- 6. Generalisation ability. 6.1. Introduction. 6.2. Test environment. 6.3. Results. 6.4. Chapter summary -- 7. Learning trajectory and speed. 7.1. Introduction. 7.2. Test aims and method. 7.3. Results. 7.4. Chapter summary -- 8. Penalty terms for fault tolerance. 8.1. Introduction. 8.2. Definition of the penalty terms. 8.3. Learning algorithms. 8.4. Practical issues. 8.5. Chapter summary -- 9. Conclusions. 9.1. Introduction. 9.2. Implications for analogue hardware. 9.3. Synaptic noise as an enhancement scheme. 9.4. Learning hints. 9.5. General conclusions. |
ctrlnum | (OCoLC)846496133 |
dewey-full | 006.3 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3 |
dewey-search | 006.3 |
dewey-sort | 16.3 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
id | ZDB-4-EBU-ocn846496133 |
illustrated | Illustrated |
indexdate | 2024-11-26T14:49:10Z |
institution | BVB |
isbn | 9789812830012 9812830014 |
language | English |
oclc_num | 846496133 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource (xi, 178 pages) : illustrations |
psigel | ZDB-4-EBU |
publishDate | 1996 |
publishDateSearch | 1996 |
publishDateSort | 1996 |
publisher | World Scientific, |
record_format | marc |
series | Progress in neural processing ; |
series2 | Progress in neural processing ; |
subject_GND | http://id.loc.gov/authorities/subjects/sh90001937 http://id.loc.gov/authorities/subjects/sh85099714 |
title | Analogue imprecision in MLP training / |
title_auth | Analogue imprecision in MLP training / |
title_exact_search | Analogue imprecision in MLP training / |
title_full | Analogue imprecision in MLP training / Peter J. Edwards, Alan F. Murray. |
title_fullStr | Analogue imprecision in MLP training / Peter J. Edwards, Alan F. Murray. |
title_full_unstemmed | Analogue imprecision in MLP training / Peter J. Edwards, Alan F. Murray. |
title_short | Analogue imprecision in MLP training / |
title_sort | analogue imprecision in mlp training |
topic | Neural networks (Computer science) http://id.loc.gov/authorities/subjects/sh90001937 Perceptrons. http://id.loc.gov/authorities/subjects/sh85099714 Réseaux neuronaux (Informatique) Perceptrons. COMPUTERS Enterprise Applications Business Intelligence Tools. bisacsh COMPUTERS Intelligence (AI) & Semantics. bisacsh Neural networks (Computer science) fast Perceptrons fast |
topic_facet | Neural networks (Computer science) Perceptrons. Réseaux neuronaux (Informatique) COMPUTERS Enterprise Applications Business Intelligence Tools. COMPUTERS Intelligence (AI) & Semantics. Perceptrons |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=564397 |
work_keys_str_mv | AT edwardspeterj analogueimprecisioninmlptraining AT murrayalanf analogueimprecisioninmlptraining |