Analytical modelling in parallel and distributed computing /
Examines complex performance evaluation of various typical parallel algorithms (shared memory, distributed memory) and their practical implementations. Real application examples demonstrate the various influences during the process of modelling and performance evaluation, and the consequences of their distributed parallel implementations.
Saved in:
Main Authors: | Hanuliak, Peter | Hanuliak, Michal |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: |
Oxford [England] :
Chartridge Books Oxford,
2014.
|
Subjects: | |
Online Access: | Full text |
Summary: | Examines complex performance evaluation of various typical parallel algorithms (shared memory, distributed memory) and their practical implementations. Real application examples demonstrate the various influences during the process of modelling and performance evaluation, and the consequences of their distributed parallel implementations. The current trends in High Performance Computing (HPC) are to use networks of workstations (NOW, SMP) or a network of NOW networks (Grid) as a cheaper alternative to the traditionally used massively parallel multiprocessors or supercomputers. Individual workstations could be single PCs (personal computers) used as parallel computers, based on modern symmetric multicore or multiprocessor systems (SMPs) implemented inside the workstation. With the availability of powerful personal computers, workstations and networking devices, the latest trend in parallel computing is to connect a number of individual workstations (PCs, PC SMPs) to solve computation-intensive tasks in parallel, forming typical clusters such as NOW, SMP and Grid. In this sense it is no longer correct to consider traditionally evolved parallel computing and distributed computing as two separate research disciplines. To exploit the parallel processing capability of this kind of cluster, the application program must be made parallel. Choosing an effective way of doing this (the parallelisation strategy) is among the most important steps in developing an effective parallel algorithm (optimisation). For behaviour analysis we have to take into account all the overheads that influence the performance of parallel algorithms (architecture, computation, communication, etc.). |
Description: | 1 online resource (308 pages) |
Bibliography: | Includes bibliographical references. |
ISBN: | 9781909287914 1909287911 |
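The summary above stresses that the real performance of a parallel algorithm is governed by its overheads (architecture, computation, communication), and the book's contents (Chapter 7) list speed-up, efficiency and isoefficiency as the core measures. As a hedged illustration only, not taken from the book itself, the following minimal Amdahl-style sketch models these measures, assuming an illustrative serial fraction and a per-processor communication cost:

```python
# Hedged sketch of the classic parallel performance measures.
# serial_fraction and t_comm are illustrative assumptions, not values
# from the book; the model simply adds a linear communication
# overhead term to Amdahl's law.

def parallel_time(t1: float, p: int, serial_fraction: float, t_comm: float) -> float:
    """Modelled run time on p processors: serial part + divided parallel
    part + a communication overhead that grows with p."""
    return serial_fraction * t1 + (1 - serial_fraction) * t1 / p + t_comm * p

def speedup(t_serial: float, t_parallel: float) -> float:
    """Speed-up S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(s: float, p: int) -> float:
    """Efficiency E(p) = S(p) / p."""
    return s / p

if __name__ == "__main__":
    t1 = 100.0  # hypothetical single-processor run time
    for p in (1, 2, 4, 8, 16):
        tp = parallel_time(t1, p, serial_fraction=0.05, t_comm=0.5)
        s = speedup(t1, tp)
        print(f"p={p:2d}  T(p)={tp:7.2f}  S(p)={s:5.2f}  E(p)={efficiency(s, p):.2f}")
```

Under this toy model, efficiency falls as p grows because the communication term t_comm * p eventually dominates, which is exactly the kind of overhead-driven behaviour the summary says must be taken into account.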
Internal format
MARC
LEADER | 00000cam a2200000 i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-ocn893674535 | ||
003 | OCoLC | ||
005 | 20241004212047.0 | ||
006 | m o d | ||
007 | cr cn||||||||| | ||
008 | 141013t20142014enk ob 000 0 eng d | ||
040 | |a E7B |b eng |e rda |e pn |c E7B |d OCLCO |d N$T |d YDXCP |d OCLCF |d MFS |d OCLCQ |d EBLCP |d DEBSZ |d OCLCQ |d LTP |d OCLCQ |d ZCU |d MERUC |d OCLCQ |d VTS |d ICG |d OCLCQ |d STF |d DKC |d OCLCQ |d AU@ |d OCLCQ |d M8D |d UKAHL |d OCLCQ |d K6U |d OCLCO |d OCLCQ |d OCLCO |d OCLCL | ||
019 | |a 900008037 |a 923627205 | ||
020 | |a 9781909287914 |q (electronic bk.) | ||
020 | |a 1909287911 |q (electronic bk.) | ||
020 | |z 9781909287907 | ||
035 | |a (OCoLC)893674535 |z (OCoLC)900008037 |z (OCoLC)923627205 | ||
050 | 4 | |a QA76.58 |b .H36 2014eb | |
072 | 7 | |a COM |x 013000 |2 bisacsh | |
072 | 7 | |a COM |x 014000 |2 bisacsh | |
072 | 7 | |a COM |x 018000 |2 bisacsh | |
072 | 7 | |a COM |x 067000 |2 bisacsh | |
072 | 7 | |a COM |x 032000 |2 bisacsh | |
072 | 7 | |a COM |x 037000 |2 bisacsh | |
072 | 7 | |a COM |x 052000 |2 bisacsh | |
082 | 7 | |a 004.35 |2 23 | |
049 | |a MAIN | ||
100 | 1 | |a Hanuliak, Peter, |e author. | |
245 | 1 | 0 | |a Analytical modelling in parallel and distributed computing / |c Peter Hanuliak and Michal Hanuliak. |
264 | 1 | |a Oxford [England] : |b Chartridge Books Oxford, |c 2014. | |
264 | 4 | |c ©2014 | |
300 | |a 1 online resource (308 pages) | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
504 | |a Includes bibliographical references. | ||
588 | 0 | |a Online resource; title from PDF title page (ebrary, viewed October 02, 2014). | |
505 | 0 | |a Cover -- Halftitle -- Title -- Copyright -- Contents -- Preview -- Acknowledgements -- Part I. Parallel Computing -- Introduction -- Developing periods in parallel computing -- 1 Modelling of Parallel Computers and Algorithms -- Model construction -- 2 Parallel Computers -- Classification -- Architectures of parallel computers -- Symmetrical multiprocessor system -- Network of workstations -- Grid systems -- Conventional HPC environment versus Grid environments -- Integration of parallel computers -- Metacomputing | |
505 | 8 | |a Modelling of parallel computers including communication networks3 Parallel Algorithms -- Introduction -- Parallel processes -- Classification of PAs -- Parallel algorithms with shared memory -- Parallel algorithms with distributed memory -- Developing parallel algorithms -- Decomposition strategies -- Natural parallel decomposition -- Domain decomposition -- Functional decomposition -- Mapping -- Inter process communication -- Inter process communication in shared memory -- Inter process communication in distributed memory -- Performance tuning | |
505 | 8 | |a 4 Parallel Program Developing StandardsParallel programming languages -- Open MP standard -- Open MP threads -- Problem decomposition -- MPI API standard -- MPI parallel algorithms -- Task Groups -- Communicators -- The order of tasks -- Collective MPI commands -- Synchronisation mechanisms -- Conditional synchronisation -- Rendezvous -- Synchronisation command barriers -- MPI collective communication mechanisms -- Data scattering collective communication commands -- Java -- 5 Parallel Computing Models -- SPMD model of parallel computation | |
505 | 8 | |a Fixed (atomic) networkPRAM model -- Fixed communication model GRAM -- Flexible models -- Flexible GRAM model -- The BSP model -- Computational model MPMD functionality of all system resources -- Load of communication network -- 6 The Role of Performance -- Performance evaluation methods -- Analytic techniques -- Asymptotic (order) analysis -- Application of queuing theory systems -- Kendall classification -- The simulation method -- Experimental measurement -- Part II. Theoretical aspects of PA -- 7 Performance Modelling of Parallel Algorithms | |
505 | 8 | |a Speed upEfficiency -- Isoefficiency -- Complex performance evaluation -- Conclusion and perspectives -- 8 Modelling in Parallel Algorithms -- Latencies of PA -- Part III. Applied Parallel Algorithms -- 9 Numerical Integration -- Decomposition model -- Mapping of parallel processes -- Performance optimisation -- Chosen illustration results -- 10 Synchronous Matrix Multiplication -- The systolic matrix multiplier -- Instruction systolic array matrix multiplier -- ISA matrix multiplier -- Dataflow matrix multiplication -- Wave front matrix multiplier | |
520 | |a Examines complex performance evaluation of various typical parallel algorithms (shared memory, distributed memory) and their practical implementations. Real application examples demonstrate the various influences during the process of modelling and performance evaluation, and the consequences of their distributed parallel implementations. The current trends in High Performance Computing (HPC) are to use networks of workstations (NOW, SMP) or a network of NOW networks (Grid) as a cheaper alternative to the traditionally used massively parallel multiprocessors or supercomputers. Individual workstations could be single PCs (personal computers) used as parallel computers, based on modern symmetric multicore or multiprocessor systems (SMPs) implemented inside the workstation. With the availability of powerful personal computers, workstations and networking devices, the latest trend in parallel computing is to connect a number of individual workstations (PCs, PC SMPs) to solve computation-intensive tasks in parallel, forming typical clusters such as NOW, SMP and Grid. In this sense it is no longer correct to consider traditionally evolved parallel computing and distributed computing as two separate research disciplines. To exploit the parallel processing capability of this kind of cluster, the application program must be made parallel. Choosing an effective way of doing this (the parallelisation strategy) is among the most important steps in developing an effective parallel algorithm (optimisation). For behaviour analysis we have to take into account all the overheads that influence the performance of parallel algorithms (architecture, computation, communication, etc.). | ||
650 | 0 | |a Parallel processing (Electronic computers) |0 http://id.loc.gov/authorities/subjects/sh85097826 | |
650 | 0 | |a Electronic data processing |x Distributed processing. |0 http://id.loc.gov/authorities/subjects/sh85042293 | |
650 | 6 | |a Parallélisme (Informatique) | |
650 | 6 | |a Traitement réparti. | |
650 | 7 | |a COMPUTERS |x Computer Literacy. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Computer Science. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Data Processing. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Hardware |x General. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Information Technology. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Machine Theory. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Reference. |2 bisacsh | |
650 | 7 | |a Electronic data processing |x Distributed processing |2 fast | |
650 | 7 | |a Parallel processing (Electronic computers) |2 fast | |
700 | 1 | |a Hanuliak, Michal, |e author. | |
758 | |i has work: |a Analytical Modelling in Parallel and Distributed Computing (Text) |1 https://id.oclc.org/worldcat/entity/E39PCYfWHGkQHKcj9P87rYyrYK |4 https://id.oclc.org/worldcat/ontology/hasWork | ||
776 | 0 | 8 | |i Print version: |a Hanuliak, Peter. |t Analytical modelling in parallel and distributed computing. |d Oxford, [England] : Chandos Books Oxford, ©2014 |h xiv, 290 pages |z 9781909287907 |
856 | 4 | 0 | |l FWS01 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=843642 |3 Volltext |
936 | |a BATCHLOAD | ||
938 | |a Askews and Holts Library Services |b ASKH |n AH34295212 | ||
938 | |a ebrary |b EBRY |n ebr10935634 | ||
938 | |a EBSCOhost |b EBSC |n 843642 | ||
938 | |a YBP Library Services |b YANK |n 12070936 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-863 |
Datensatz im Suchindex
DE-BY-FWS_katkey | ZDB-4-EBA-ocn893674535 |
---|---|
_version_ | 1816882291047137280 |
adam_text | |
any_adam_object | |
author | Hanuliak, Peter Hanuliak, Michal |
author_facet | Hanuliak, Peter Hanuliak, Michal |
author_role | aut aut |
author_sort | Hanuliak, Peter |
author_variant | p h ph m h mh |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.58 .H36 2014eb |
callnumber-search | QA76.58 .H36 2014eb |
callnumber-sort | QA 276.58 H36 42014EB |
callnumber-subject | QA - Mathematics |
collection | ZDB-4-EBA |
contents | Cover -- Halftitle -- Title -- Copyright -- Contents -- Preview -- Acknowledgements -- Part I. Parallel Computing -- Introduction -- Developing periods in parallel computing -- 1 Modelling of Parallel Computers and Algorithms -- Model construction -- 2 Parallel Computers -- Classification -- Architectures of parallel computers -- Symmetrical multiprocessor system -- Network of workstations -- Grid systems -- Conventional HPC environment versus Grid environments -- Integration of parallel computers -- Metacomputing Modelling of parallel computers including communication networks3 Parallel Algorithms -- Introduction -- Parallel processes -- Classification of PAs -- Parallel algorithms with shared memory -- Parallel algorithms with distributed memory -- Developing parallel algorithms -- Decomposition strategies -- Natural parallel decomposition -- Domain decomposition -- Functional decomposition -- Mapping -- Inter process communication -- Inter process communication in shared memory -- Inter process communication in distributed memory -- Performance tuning 4 Parallel Program Developing StandardsParallel programming languages -- Open MP standard -- Open MP threads -- Problem decomposition -- MPI API standard -- MPI parallel algorithms -- Task Groups -- Communicators -- The order of tasks -- Collective MPI commands -- Synchronisation mechanisms -- Conditional synchronisation -- Rendezvous -- Synchronisation command barriers -- MPI collective communication mechanisms -- Data scattering collective communication commands -- Java -- 5 Parallel Computing Models -- SPMD model of parallel computation Fixed (atomic) networkPRAM model -- Fixed communication model GRAM -- Flexible models -- Flexible GRAM model -- The BSP model -- Computational model MPMD functionality of all system resources -- Load of communication network -- 6 The Role of Performance -- Performance evaluation methods -- Analytic techniques -- Asymptotic (order) analysis -- Application of queuing 
theory systems -- Kendall classification -- The simulation method -- Experimental measurement -- Part II. Theoretical aspects of PA -- 7 Performance Modelling of Parallel Algorithms Speed upEfficiency -- Isoefficiency -- Complex performance evaluation -- Conclusion and perspectives -- 8 Modelling in Parallel Algorithms -- Latencies of PA -- Part III. Applied Parallel Algorithms -- 9 Numerical Integration -- Decomposition model -- Mapping of parallel processes -- Performance optimisation -- Chosen illustration results -- 10 Synchronous Matrix Multiplication -- The systolic matrix multiplier -- Instruction systolic array matrix multiplier -- ISA matrix multiplier -- Dataflow matrix multiplication -- Wave front matrix multiplier |
ctrlnum | (OCoLC)893674535 |
dewey-full | 004.35 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 004 - Computer science |
dewey-raw | 004.35 |
dewey-search | 004.35 |
dewey-sort | 14.35 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>07705cam a2200757 i 4500</leader><controlfield tag="001">ZDB-4-EBA-ocn893674535</controlfield><controlfield tag="003">OCoLC</controlfield><controlfield tag="005">20241004212047.0</controlfield><controlfield tag="006">m o d </controlfield><controlfield tag="007">cr cn|||||||||</controlfield><controlfield tag="008">141013t20142014enk ob 000 0 eng d</controlfield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">E7B</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield><subfield code="c">E7B</subfield><subfield code="d">OCLCO</subfield><subfield code="d">N$T</subfield><subfield code="d">YDXCP</subfield><subfield code="d">OCLCF</subfield><subfield code="d">MFS</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">EBLCP</subfield><subfield code="d">DEBSZ</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">LTP</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">ZCU</subfield><subfield code="d">MERUC</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">VTS</subfield><subfield code="d">ICG</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">STF</subfield><subfield code="d">DKC</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">AU@</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">M8D</subfield><subfield code="d">UKAHL</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">K6U</subfield><subfield code="d">OCLCO</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">OCLCO</subfield><subfield code="d">OCLCL</subfield></datafield><datafield tag="019" ind1=" " ind2=" "><subfield code="a">900008037</subfield><subfield code="a">923627205</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9781909287914</subfield><subfield code="q">(electronic 
bk.)</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">1909287911</subfield><subfield code="q">(electronic bk.)</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="z">9781909287907</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)893674535</subfield><subfield code="z">(OCoLC)900008037</subfield><subfield code="z">(OCoLC)923627205</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">QA76.58</subfield><subfield code="b">.H36 2014eb</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">013000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">014000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">018000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">067000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">032000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">037000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">052000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="082" ind1="7" ind2=" "><subfield code="a">004.35</subfield><subfield code="2">23</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">MAIN</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hanuliak, Peter,</subfield><subfield 
code="e">author.</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Analytical modelling in parallel and distributed computing /</subfield><subfield code="c">Peter Hanuliak and Michal Hanuliak.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Oxford [England] :</subfield><subfield code="b">Chartridge Books Oxford,</subfield><subfield code="c">2014.</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">©2014</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (308 pages)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="504" ind1=" " ind2=" "><subfield code="a">Includes bibliographical references.</subfield></datafield><datafield tag="588" ind1="0" ind2=" "><subfield code="a">Online resource; title from PDF title page (ebrary, viewed October 02, 2014).</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Cover -- Halftitle -- Title -- Copyright -- Contents -- Preview -- Acknowledgements -- Part I. 
Parallel Computing -- Introduction -- Developing periods in parallel computing -- 1 Modelling of Parallel Computers and Algorithms -- Model construction -- 2 Parallel Computers -- Classification -- Architectures of parallel computers -- Symmetrical multiprocessor system -- Network of workstations -- Grid systems -- Conventional HPC environment versus Grid environments -- Integration of parallel computers -- Metacomputing</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Modelling of parallel computers including communication networks3 Parallel Algorithms -- Introduction -- Parallel processes -- Classification of PAs -- Parallel algorithms with shared memory -- Parallel algorithms with distributed memory -- Developing parallel algorithms -- Decomposition strategies -- Natural parallel decomposition -- Domain decomposition -- Functional decomposition -- Mapping -- Inter process communication -- Inter process communication in shared memory -- Inter process communication in distributed memory -- Performance tuning</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">4 Parallel Program Developing StandardsParallel programming languages -- Open MP standard -- Open MP threads -- Problem decomposition -- MPI API standard -- MPI parallel algorithms -- Task Groups -- Communicators -- The order of tasks -- Collective MPI commands -- Synchronisation mechanisms -- Conditional synchronisation -- Rendezvous -- Synchronisation command barriers -- MPI collective communication mechanisms -- Data scattering collective communication commands -- Java -- 5 Parallel Computing Models -- SPMD model of parallel computation</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Fixed (atomic) networkPRAM model -- Fixed communication model GRAM -- Flexible models -- Flexible GRAM model -- The BSP model -- Computational model MPMD functionality of all system resources -- Load of communication network -- 6 The 
Role of Performance -- Performance evaluation methods -- Analytic techniques -- Asymptotic (order) analysis -- Application of queuing theory systems -- Kendall classification -- The simulation method -- Experimental measurement -- Part II. Theoretical aspects of PA -- 7 Performance Modelling of Parallel Algorithms</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Speed upEfficiency -- Isoefficiency -- Complex performance evaluation -- Conclusion and perspectives -- 8 Modelling in Parallel Algorithms -- Latencies of PA -- Part III. Applied Parallel Algorithms -- 9 Numerical Integration -- Decomposition model -- Mapping of parallel processes -- Performance optimisation -- Chosen illustration results -- 10 Synchronous Matrix Multiplication -- The systolic matrix multiplier -- Instruction systolic array matrix multiplier -- ISA matrix multiplier -- Dataflow matrix multiplication -- Wave front matrix multiplier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Examines complex performance evaluation of various typical parallel algorithms (shared memory, distributed memory) and their practical implementations. Includes real application examples we demonstrate the various influences during the process of modelling and performanceevaluation and the consequences of their distributed parallel implementations. The current trends in High Performance Computing (HPC) are to use networks of workstations (NOW, SMP) or a network of NOW networks (Grid) as a cheaper alternative to the traditionally-used, massive parallel multiprocessors or supercomputers. Individual workstations could be single PCs (personal computers) used as parallelcomputers based on modern symmetric multicore or multiprocessor systems (SMPs) implemented inside the workstation. 
With the availability of powerful personal computers, workstations and networking devices, the latest trend in parallel computing is toconnect a number of individual workstations (PCs, PC SMPs) to solve computation-intensive tasks in a parallel way to typical clusters such as NOW, SMP and Grid. In this sense it is not yet correct to consider traditionally evolved parallel computing and distributed computing as two separate research disciplines. To exploit the parallel processing capability of this kind of cluster, the application program must be made parallel. An effective way ofdoing this for (parallelisation strategy) belongs to the most important step in developing an effective parallel algorithm (optimisation). Forbehaviour analysis we have to take into account all the overheads that have an influence on the performance of parallel algorithms (architecture, computation, communication etc.).</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Parallel processing (Electronic computers)</subfield><subfield code="0">http://id.loc.gov/authorities/subjects/sh85097826</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Electronic data processing</subfield><subfield code="x">Distributed processing.</subfield><subfield code="0">http://id.loc.gov/authorities/subjects/sh85042293</subfield></datafield><datafield tag="650" ind1=" " ind2="6"><subfield code="a">Parallélisme (Informatique)</subfield></datafield><datafield tag="650" ind1=" " ind2="6"><subfield code="a">Traitement réparti.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">COMPUTERS</subfield><subfield code="x">Computer Literacy.</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">COMPUTERS</subfield><subfield code="x">Computer Science.</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield 
code="a">COMPUTERS</subfield><subfield code="x">Data Processing.</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">COMPUTERS</subfield><subfield code="x">Hardware</subfield><subfield code="x">General.</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">COMPUTERS</subfield><subfield code="x">Information Technology.</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">COMPUTERS</subfield><subfield code="x">Machine Theory.</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">COMPUTERS</subfield><subfield code="x">Reference.</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Electronic data processing</subfield><subfield code="x">Distributed processing</subfield><subfield code="2">fast</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Parallel processing (Electronic computers)</subfield><subfield code="2">fast</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hanuliak, Michal,</subfield><subfield code="e">author.</subfield></datafield><datafield tag="758" ind1=" " ind2=" "><subfield code="i">has work:</subfield><subfield code="a">Analytical Modelling in Parallel and Distributed Computing (Text)</subfield><subfield code="1">https://id.oclc.org/worldcat/entity/E39PCYfWHGkQHKcj9P87rYyrYK</subfield><subfield code="4">https://id.oclc.org/worldcat/ontology/hasWork</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Print version:</subfield><subfield code="a">Hanuliak, Peter.</subfield><subfield code="t">Analytical modelling in parallel and distributed computing.</subfield><subfield code="d">Oxford, [England] : Chandos Books Oxford, ©2014</subfield><subfield 
code="h">xiv, 290 pages</subfield><subfield code="z">9781909287907</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="l">FWS01</subfield><subfield code="p">ZDB-4-EBA</subfield><subfield code="q">FWS_PDA_EBA</subfield><subfield code="u">https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=843642</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="936" ind1=" " ind2=" "><subfield code="a">BATCHLOAD</subfield></datafield><datafield tag="938" ind1=" " ind2=" "><subfield code="a">Askews and Holts Library Services</subfield><subfield code="b">ASKH</subfield><subfield code="n">AH34295212</subfield></datafield><datafield tag="938" ind1=" " ind2=" "><subfield code="a">ebrary</subfield><subfield code="b">EBRY</subfield><subfield code="n">ebr10935634</subfield></datafield><datafield tag="938" ind1=" " ind2=" "><subfield code="a">EBSCOhost</subfield><subfield code="b">EBSC</subfield><subfield code="n">843642</subfield></datafield><datafield tag="938" ind1=" " ind2=" "><subfield code="a">YBP Library Services</subfield><subfield code="b">YANK</subfield><subfield code="n">12070936</subfield></datafield><datafield tag="994" ind1=" " ind2=" "><subfield code="a">92</subfield><subfield code="b">GEBAY</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-4-EBA</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-863</subfield></datafield></record></collection> |
id | ZDB-4-EBA-ocn893674535 |
illustrated | Not Illustrated |
indexdate | 2024-11-27T13:26:16Z |
institution | BVB |
isbn | 9781909287914 1909287911 |
language | English |
oclc_num | 893674535 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource (308 pages) |
psigel | ZDB-4-EBA |
publishDate | 2014 |
publishDateSearch | 2014 |
publishDateSort | 2014 |
publisher | Chartridge Books Oxford, |
record_format | marc |
spelling | Hanuliak, Peter, author. Analytical modelling in parallel and distributed computing / Peter Hanuliak and Michal Hanuliak. Oxford [England] : Chartridge Books Oxford, 2014. ©2014 1 online resource (308 pages) text txt rdacontent computer c rdamedia online resource cr rdacarrier Includes bibliographical references. Online resource; title from PDF title page (ebrary, viewed October 02, 2014). Cover -- Halftitle -- Title -- Copyright -- Contents -- Preview -- Acknowledgements -- Part I. Parallel Computing -- Introduction -- Developing periods in parallel computing -- 1 Modelling of Parallel Computers and Algorithms -- Model construction -- 2 Parallel Computers -- Classification -- Architectures of parallel computers -- Symmetrical multiprocessor system -- Network of workstations -- Grid systems -- Conventional HPC environment versus Grid environments -- Integration of parallel computers -- Metacomputing Modelling of parallel computers including communication networks3 Parallel Algorithms -- Introduction -- Parallel processes -- Classification of PAs -- Parallel algorithms with shared memory -- Parallel algorithms with distributed memory -- Developing parallel algorithms -- Decomposition strategies -- Natural parallel decomposition -- Domain decomposition -- Functional decomposition -- Mapping -- Inter process communication -- Inter process communication in shared memory -- Inter process communication in distributed memory -- Performance tuning 4 Parallel Program Developing StandardsParallel programming languages -- Open MP standard -- Open MP threads -- Problem decomposition -- MPI API standard -- MPI parallel algorithms -- Task Groups -- Communicators -- The order of tasks -- Collective MPI commands -- Synchronisation mechanisms -- Conditional synchronisation -- Rendezvous -- Synchronisation command barriers -- MPI collective communication mechanisms -- Data scattering collective communication commands -- Java -- 5 Parallel Computing Models -- SPMD model of 
parallel computation Fixed (atomic) networkPRAM model -- Fixed communication model GRAM -- Flexible models -- Flexible GRAM model -- The BSP model -- Computational model MPMD functionality of all system resources -- Load of communication network -- 6 The Role of Performance -- Performance evaluation methods -- Analytic techniques -- Asymptotic (order) analysis -- Application of queuing theory systems -- Kendall classification -- The simulation method -- Experimental measurement -- Part II. Theoretical aspects of PA -- 7 Performance Modelling of Parallel Algorithms Speed upEfficiency -- Isoefficiency -- Complex performance evaluation -- Conclusion and perspectives -- 8 Modelling in Parallel Algorithms -- Latencies of PA -- Part III. Applied Parallel Algorithms -- 9 Numerical Integration -- Decomposition model -- Mapping of parallel processes -- Performance optimisation -- Chosen illustration results -- 10 Synchronous Matrix Multiplication -- The systolic matrix multiplier -- Instruction systolic array matrix multiplier -- ISA matrix multiplier -- Dataflow matrix multiplication -- Wave front matrix multiplier Examines complex performance evaluation of various typical parallel algorithms (shared memory, distributed memory) and their practical implementations. Includes real application examples we demonstrate the various influences during the process of modelling and performanceevaluation and the consequences of their distributed parallel implementations. The current trends in High Performance Computing (HPC) are to use networks of workstations (NOW, SMP) or a network of NOW networks (Grid) as a cheaper alternative to the traditionally-used, massive parallel multiprocessors or supercomputers. Individual workstations could be single PCs (personal computers) used as parallelcomputers based on modern symmetric multicore or multiprocessor systems (SMPs) implemented inside the workstation. 
With the availability of powerful personal computers, workstations and networking devices, the latest trend in parallel computing is to connect a number of individual workstations (PCs, PC SMPs) to solve computation-intensive tasks in a parallel way, forming typical clusters such as NOW, SMP and Grid. In this sense it is no longer correct to consider traditionally evolved parallel computing and distributed computing as two separate research disciplines. To exploit the parallel processing capability of this kind of cluster, the application program must be made parallel. An effective way of doing this (the parallelisation strategy) is one of the most important steps in developing an effective parallel algorithm (optimisation). For behaviour analysis we have to take into account all the overheads that influence the performance of parallel algorithms (architecture, computation, communication, etc.). Parallel processing (Electronic computers) http://id.loc.gov/authorities/subjects/sh85097826 Electronic data processing Distributed processing. http://id.loc.gov/authorities/subjects/sh85042293 Parallélisme (Informatique) Traitement réparti. COMPUTERS Computer Literacy. bisacsh COMPUTERS Computer Science. bisacsh COMPUTERS Data Processing. bisacsh COMPUTERS Hardware General. bisacsh COMPUTERS Information Technology. bisacsh COMPUTERS Machine Theory. bisacsh COMPUTERS Reference. bisacsh Electronic data processing Distributed processing fast Parallel processing (Electronic computers) fast Hanuliak, Michal, author. has work: Analytical Modelling in Parallel and Distributed Computing (Text) https://id.oclc.org/worldcat/entity/E39PCYfWHGkQHKcj9P87rYyrYK https://id.oclc.org/worldcat/ontology/hasWork Print version: Hanuliak, Peter. Analytical modelling in parallel and distributed computing. 
Oxford, [England] : Chartridge Books Oxford, ©2014 xiv, 290 pages 9781909287907 FWS01 ZDB-4-EBA FWS_PDA_EBA https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=843642 Volltext |
spellingShingle | Hanuliak, Peter Hanuliak, Michal Analytical modelling in parallel and distributed computing / Cover -- Halftitle -- Title -- Copyright -- Contents -- Preview -- Acknowledgements -- Part I. Parallel Computing -- Introduction -- Developing periods in parallel computing -- 1 Modelling of Parallel Computers and Algorithms -- Model construction -- 2 Parallel Computers -- Classification -- Architectures of parallel computers -- Symmetrical multiprocessor system -- Network of workstations -- Grid systems -- Conventional HPC environment versus Grid environments -- Integration of parallel computers -- Metacomputing -- Modelling of parallel computers including communication networks -- 3 Parallel Algorithms -- Introduction -- Parallel processes -- Classification of PAs -- Parallel algorithms with shared memory -- Parallel algorithms with distributed memory -- Developing parallel algorithms -- Decomposition strategies -- Natural parallel decomposition -- Domain decomposition -- Functional decomposition -- Mapping -- Inter process communication -- Inter process communication in shared memory -- Inter process communication in distributed memory -- Performance tuning -- 4 Parallel Program Developing Standards -- Parallel programming languages -- Open MP standard -- Open MP threads -- Problem decomposition -- MPI API standard -- MPI parallel algorithms -- Task Groups -- Communicators -- The order of tasks -- Collective MPI commands -- Synchronisation mechanisms -- Conditional synchronisation -- Rendezvous -- Synchronisation command barriers -- MPI collective communication mechanisms -- Data scattering collective communication commands -- Java -- 5 Parallel Computing Models -- SPMD model of parallel computation -- Fixed (atomic) network -- PRAM model -- Fixed communication model GRAM -- Flexible models -- Flexible GRAM model -- The BSP model -- Computational model MPMD -- Functionality of all system resources -- Load of communication network -- 6 The Role of Performance -- Performance 
evaluation methods -- Analytic techniques -- Asymptotic (order) analysis -- Application of queuing theory systems -- Kendall classification -- The simulation method -- Experimental measurement -- Part II. Theoretical aspects of PA -- 7 Performance Modelling of Parallel Algorithms -- Speed up -- Efficiency -- Isoefficiency -- Complex performance evaluation -- Conclusion and perspectives -- 8 Modelling in Parallel Algorithms -- Latencies of PA -- Part III. Applied Parallel Algorithms -- 9 Numerical Integration -- Decomposition model -- Mapping of parallel processes -- Performance optimisation -- Chosen illustration results -- 10 Synchronous Matrix Multiplication -- The systolic matrix multiplier -- Instruction systolic array matrix multiplier -- ISA matrix multiplier -- Dataflow matrix multiplication -- Wave front matrix multiplier Parallel processing (Electronic computers) http://id.loc.gov/authorities/subjects/sh85097826 Electronic data processing Distributed processing. http://id.loc.gov/authorities/subjects/sh85042293 Parallélisme (Informatique) Traitement réparti. COMPUTERS Computer Literacy. bisacsh COMPUTERS Computer Science. bisacsh COMPUTERS Data Processing. bisacsh COMPUTERS Hardware General. bisacsh COMPUTERS Information Technology. bisacsh COMPUTERS Machine Theory. bisacsh COMPUTERS Reference. bisacsh Electronic data processing Distributed processing fast Parallel processing (Electronic computers) fast |
subject_GND | http://id.loc.gov/authorities/subjects/sh85097826 http://id.loc.gov/authorities/subjects/sh85042293 |
title | Analytical modelling in parallel and distributed computing / |
title_auth | Analytical modelling in parallel and distributed computing / |
title_exact_search | Analytical modelling in parallel and distributed computing / |
title_full | Analytical modelling in parallel and distributed computing / Peter Hanuliak and Michal Hanuliak. |
title_fullStr | Analytical modelling in parallel and distributed computing / Peter Hanuliak and Michal Hanuliak. |
title_full_unstemmed | Analytical modelling in parallel and distributed computing / Peter Hanuliak and Michal Hanuliak. |
title_short | Analytical modelling in parallel and distributed computing / |
title_sort | analytical modelling in parallel and distributed computing |
topic | Parallel processing (Electronic computers) http://id.loc.gov/authorities/subjects/sh85097826 Electronic data processing Distributed processing. http://id.loc.gov/authorities/subjects/sh85042293 Parallélisme (Informatique) Traitement réparti. COMPUTERS Computer Literacy. bisacsh COMPUTERS Computer Science. bisacsh COMPUTERS Data Processing. bisacsh COMPUTERS Hardware General. bisacsh COMPUTERS Information Technology. bisacsh COMPUTERS Machine Theory. bisacsh COMPUTERS Reference. bisacsh Electronic data processing Distributed processing fast Parallel processing (Electronic computers) fast |
topic_facet | Parallel processing (Electronic computers) Electronic data processing Distributed processing. Parallélisme (Informatique) Traitement réparti. COMPUTERS Computer Literacy. COMPUTERS Computer Science. COMPUTERS Data Processing. COMPUTERS Hardware General. COMPUTERS Information Technology. COMPUTERS Machine Theory. COMPUTERS Reference. Electronic data processing Distributed processing |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=843642 |
work_keys_str_mv | AT hanuliakpeter analyticalmodellinginparallelanddistributedcomputing AT hanuliakmichal analyticalmodellinginparallelanddistributedcomputing |