Programming models for parallel computing
Format: | Book |
---|---|
Language: | English |
Published: | Cambridge, Massachusetts : The MIT Press, [2015] |
Subjects: | Parallel processing (Electronic computers); Parallel programs (Computer programs); Programmierung; Parallelverarbeitung; Parallelrechner |
Online access: | Table of contents, Publisher's description |
Description: | Includes bibliographical references (pages 429-458) |
Physical description: | xxv, 458 pages, illustrations, 23 cm |
ISBN: | 9780262528818, 0262528819 |
Internal format
MARC
LEADER | | | 00000nam a2200000 c 4500
---|---|---|---
001 | | | BV043452443
003 | | | DE-604
005 | | | 20190102
007 | | | t
008 | | | 160311s2015 xxua||| |||| 00||| eng d
010 | | | |a 015039693
020 | | | |a 9780262528818 |9 978-0-262-52881-8
020 | | | |a 0262528819 |9 0-262-52881-9
035 | | | |a (OCoLC)933836376
035 | | | |a (DE-599)BVBBV043452443
040 | | | |a DE-604 |b ger |e rda
041 | 0 | | |a eng
044 | | | |a xxu |c US
049 | | | |a DE-739 |a DE-523 |a DE-634 |a DE-91
050 | | 0 | |a QA76.58
082 | 0 | | |a 004/.35 |2 23
084 | | | |a ST 151 |0 (DE-625)143595: |2 rvk
084 | | | |a DAT 516f |2 stub
245 | 1 | 0 | |a Programming models for parallel computing |c edited by Pavan Balaji
264 | | 1 | |a Cambridge, Massachusetts |b The MIT Press |c [2015]
264 | | 4 | |c © 2015
300 | | | |a xxv, 458 Seiten |b Illustrationen |c 23 cm
336 | | | |b txt |2 rdacontent
337 | | | |b n |2 rdamedia
338 | | | |b nc |2 rdacarrier
500 | | | |a Includes bibliographical references (pages 429-458)
650 | | 4 | |a Parallel processing (Electronic computers)
650 | | 4 | |a Parallel programs (Computer programs)
650 | 0 | 7 | |a Programmierung |0 (DE-588)4076370-5 |2 gnd |9 rswk-swf
650 | 0 | 7 | |a Parallelverarbeitung |0 (DE-588)4075860-6 |2 gnd |9 rswk-swf
650 | 0 | 7 | |a Parallelrechner |0 (DE-588)4173280-7 |2 gnd |9 rswk-swf
689 | 0 | 0 | |a Parallelverarbeitung |0 (DE-588)4075860-6 |D s
689 | 0 | 1 | |a Programmierung |0 (DE-588)4076370-5 |D s
689 | 0 | | |5 DE-604
689 | 1 | 0 | |a Parallelrechner |0 (DE-588)4173280-7 |D s
689 | 1 | 1 | |a Programmierung |0 (DE-588)4076370-5 |D s
689 | 1 | | |5 DE-604
700 | 1 | | |a Balaji, Pavan |e Sonstige |0 (DE-588)1048422852 |4 oth
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |z 978-0-262-33224-8 |w (DE-604)BV044032171
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=028869822&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=028869822&sequence=000002&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |3 Klappentext
999 | | | |a oai:aleph.bib-bvb.de:BVB01-028869822
Publisher's description (Klappentext)

With the coming of the parallel computing era, computer scientists have turned
their attention to designing programming models that are suited for high-
performance parallel computing and supercomputing systems. Programming
parallel systems is complicated by the fact that multiple processing units are
simultaneously computing and moving data. This book offers an overview
of some of the most prominent parallel programming models used in high-
performance computing and supercomputing systems today.
The chapters describe the programming models in a unique tutorial style
rather than using the formal approach taken in the research literature. The aim
is to cover a wide range of parallel programming models, enabling the reader
to understand what each has to offer. The book begins with a description of
the Message Passing Interface (MPI), the most common parallel programming
model for distributed memory computing. It goes on to cover one-sided
communication models, ranging from low-level runtime libraries (GASNet,
OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-
oriented programming models (Charm++, ADLB, Scioto, Swift, CnC) that allow
users to describe their computation and data units as tasks so that the runtime
system can manage computation and data movement as necessary; and
parallel programming models intended for on-node parallelism in the context
of multicore architecture or attached accelerators (OpenMP, Cilk Plus, TBB,
CUDA, OpenCL). The book will be a valuable resource for graduate students,
researchers, and any scientist who works with data sets and large computations.
Contents
Series Foreword xix
Preface xxi
1 Message Passing Interface---W. Gropp and R. Thakur 1
1.1 Introduction 1
1.2 MPI Basics 2
1.3 Point-to-Point Communication 3
1.4 Datatypes 4
1.5 Nonblocking Communication 6
1.6 Collective Communication 8
1.7 One-Sided Communication 10
1.8 Parallel I/O 14
1.9 Other Features 17
1.10 Best Practices 18
1.11 Summary 20
2 Global Address Space Networking---P. Hargrove 23
2.1 Background and Motivation 23
2.2 Overview of GASNet 23
2.2.1 Terminology 24
2.2.2 Threading 25
2.2.3 API Organization 26
2.3 Core API 26
2.3.1 Beginnings and Endings 27
2.3.2 Segment Info 29
2.3.3 Barriers 30
2.3.4 Locks and Interrupts 31
2.3.5 Active Messages 32
2.3.6 Active Message Progress 36
2.3.7 Active Message Rules and Restrictions 36
2.3.8 Error Codes 37
2.4 Extended API 38
2.4.1 The GASNet Segment 38
2.4.2 Ordering and Memory Model 39
2.4.3 Blocking and Nonblocking 39
2.4.4 Bulk and Nonbulk 40
2.4.5 Register-Memory and Remote memset Operations 40
2.4.6 Extended API Summary 40
2.5 Extras 42
2.5.1 GASNet Tools 43
2.5.2 Portable Platform Header 43
2.6 Examples 44
2.6.1 Building and Running Examples 44
2.6.2 Hello World Example 45
2.6.3 AM Ping-Pong Example 46
2.6.4 AM Ring Example 49
2.6.5 MCS Locks Example 51
2.7 Future Directions 56
3 OpenSHMEM—J. Kuehn and S. Poole 59
3.1 Introduction 59
3.2 Design Philosophy and Considerations 60
3.3 The OpenSHMEM Memory Model 61
3.3.1 Terminology 62
3.4 Managing the Symmetric Heap and the Basics 63
3.4.1 Initialization and Query 63
3.4.2 Allocation and Deallocation 64
3.4.3 A Final Note on Allocation and the Symmetric Heap 66
3.5 Remote Memory Access: Put and Get 66
3.5.1 RMA Function Semantics 66
3.5.2 RMA Function Usage 67
3.6 Ordering and Synchronization 70
3.6.1 Global Barrier Synchronization 70
3.6.2 Fence and Quiet: Ordering of RMA Operations 71
3.6.3 Locks 73
3.6.4 Wait and Wait_Until 74
3.7 Collective Operations 74
3.7.1 Selecting the Participants for a Collective 75
3.7.2 Sync Arrays and Work Arrays 75
3.7.3 Nonglobal Barrier 76
3.7.4 Broadcast 77
3.7.5 Collect 78
3.7.6 Reduce 79
3.8 Atomic Memory Operations 81
3.8.1 Atomic Add and Increment 82
3.8.2 Atomic Fetch-Add and Fetch-Increment 82
3.8.3 Atomic Swap and Conditional Swap 83
3.9 Future Directions 85
4 Unified Parallel C---K. Yelick and Y. Zheng 87
4.1 Brief History of UPC 87
4.2 UPC Programming Model 88
4.2.1 Terminology 88
4.2.2 Global Address Space 89
4.2.3 Execution Model 89
4.3 A Quick Tour of UPC 90
4.3.1 Self Introspection 90
4.3.2 Data Layout 90
4.3.3 Communication 94
4.3.4 UPC Memory Consistency Model 95
4.3.5 Synchronization 97
4.3.6 Collective Operations 98
4.4 Examples of UPC Programs 101
4.4.1 Random Access Benchmark 101
4.4.2 Jacobi 5-Point Stencil 102
4.4.3 Sample Sort 104
4.4.4 1-D FFT 108
4.5 Looking Forward 112
5 Global Arrays---S. Krishnamoorthy, J. Daily, A. Vishnu and B. Palmer 113
5.1 Introduction 113
5.2 Programming Model and Design Principles 114
5.3 Core Functionality 116
5.4 Process Groups 121
5.5 Extended Array Structures 123
5.6 Support for Sparse Array Operations 124
5.7 Array Collectives 126
5.8 Dynamic Load Balancing Idiom 126
5.9 Use in Applications 127
6 Chapel---B. Chamberlain 129
6.1 A Brief History of Chapel 129
6.1.1 Inception 129
6.1.2 Initial Directions 130
6.1.3 Phases of Development under HPCS 131
6.1.4 Life After HPCS 132
6.2 Chapel’s Motivating Themes 132
6.2.1 Express General Parallelism 132
6.2.2 Support a Multithreaded Execution Model 133
6.2.3 Enable Global-View Programming 134
6.2.4 Build on a Multiresolution Design 134
6.2.5 Enable Control over Locality 135
6.2.6 Support Data-Centric Synchronization 135
6.2.7 Establish Roles for Users vs. the Compiler 136
6.2.8 Close the Gap Between Mainstream and HPC Languages 136
6.2.9 Start From Scratch (but Strive for Familiarity) 137
6.2.10 Shoot for the Moon 138
6.2.11 Develop Chapel as Portable, Open-Source Software 138
6.3 Chapel Feature Overview 139
6.3.1 Base Language Features 139
6.3.2 Task Parallelism 145
6.3.3 Data Parallelism 151
6.3.4 Locality Features 154
6.4 Project Status 157
7 Charm++—L. Kale, N. Jain, and J. Lifflander 161
7.1 Introduction 161
7.2 The Charm Programming Paradigm and the Execution Model 162
7.2.1 Overdecomposition as a Central Idea 162
7.2.2 Message Driven Execution 163
7.2.3 Empowering Adaptive Runtime System 164
7.3 Basic Language 164
7.3.1 Chares: The Basic Unit of Decomposition 164
7.3.2 Entry Methods: The Basic Unit of Scheduling 165
7.3.3 Asynchronous Method Invocation 166
7.3.4 Indexed Collections of Chares: Chare Arrays 166
7.3.5 Readonly Variables 168
7.3.6 Charm++ Objects: User and System view 168
7.3.7 The Structured Dagger Notation 170
7.3.8 Example: 1-D Decomposed 5-point Stencil Code 170
7.4 Benefits of Overdecomposition and Message Driven Execution 173
7.4.1 Independence from Number of Processors 173
7.4.2 Asynchronous Reductions 173
7.4.3 Adaptive Overlap of Communication and Computation 174
7.4.4 Compositionality 175
7.4.5 Software Engineering Benefits: Separation of Logical Entities 175
7.5 A Design Example: Molecular Dynamics 175
7.6 Adaptive Runtime Features 176
7.6.1 Load balancing Capabilities in Charm++ 177
7.6.2 Fault Tolerance 178
7.6.3 Shrinking or Expanding the Set of Processors? 180
7.6.4 Support for Heterogeneity and Accelerators 180
7.6.5 Additional Features 181
7.6.6 Experimental Feature: Thermal and Energy Management 181
7.7 A Quick Look Under the Hood 181
7.8 Charm++ Family of Languages 182
7.9 Applications Developed using Charm++ 184
7.10 Charm++ as a Research Vehicle 185
7.11 Charm++: History and Current State 186
7.12 Summary 186
8 Asynchronous Dynamic Load Balancing—E. Lusk, R. Butler, and S. Pieper 187
8.1 Introduction 187
8.2 The Manager-Worker Model and Load Balancing 187
8.3 The ADLB Library Definition 190
8.3.1 API Introduction 191
8.3.2 The Basic ADLB API 191
8.3.3 Optimizing Memory Utilization with Batches 194
8.3.4 Obtaining and Using ADLB 195
8.4 Implementing ADLB 195
8.4.1 ADLBM Implementation 195
8.4.2 Alternative Implementations 197
8.5 Examples 198
8.5.1 A Simple Batch Scheduler 198
8.5.2 Dynamic Task Creation: A Sudoku Solver 198
8.5.3 Work Unit Types: The Traveling Salesman Problem 199
8.5.4 GFMC 201
8.5.5 Swift 201
8.6 DMEM: A Helper Library for Dealing with Large Data 201
8.7 Summary and Future Directions 202
9 Scalable Collections of Task Objects—J. Dinan 205
9.1 The Scioto Task Parallel Execution Model 206
9.1.1 Task Objects 207
9.1.2 Task Input/Output Model 208
9.1.3 Task Execution Model 209
9.2 Multilevel Parallel Task Collections 210
9.2.1 Scioto Execution Teams 211
9.3 Scioto+GA Programming Interface 211
9.3.1 Core Programming Constructs 212
9.3.2 Implementing a Scioto Task 212
9.3.3 Example: Matrix-Matrix Multiplication 213
9.4 The Scioto Runtime System 213
9.4.1 Shared Task Queue Approach 215
9.4.2 Dynamic Load Balancing Approach 215
9.4.3 Termination Detection 216
9.5 Conclusion 217
10 Swift: Extreme-scale, Implicitly Parallel Scripting—T. Armstrong, J. M. Wozniak, M. Wilde and I. T. Foster 219
10.1 A First Example: Parallel Factorizations 220
10.2 A Real-World Example: Crystal Coordinate Transformation 221
10.3 History of Swift 222
10.4 Swift Language and Programming Model 224
10.4.1 Hello World 225
10.4.2 Variables and Scalar Data Types 226
10.4.3 Dataflow Execution 227
10.4.4 Conditional Statements 228
10.4.5 Data-dependent Control Flow 228
10.4.6 Foreach Loops and Arrays 229
10.4.7 Swift Functions 230
10.4.8 External Functions 231
10.4.9 Files and App Functions 233
10.5 The Swift Execution Model 234
10.6 A Massively Parallel Runtime System 236
10.7 Runtime Architecture 237
10.8 Performance 240
10.9 Compiling Swift for Massive Parallelism 241
10.10 Related Work 242
10.11 Conclusion 244
11 Concurrent Collections---K. Knobe, M. Burke, and F. Schlimbach 247
11.1 Introduction 247
11.2 Motivation 248
11.2.1 Foundational Hypotheses 248
11.3 CnC Domain Language 249
11.3.1 Description 249
11.3.2 Characteristics 252
11.3.3 Example 253
11.3.4 Execution Semantics 256
11.3.5 Programming in CnC 257
11.3.6 Futures 263
11.4 CnC Tuning Language 264
11.4.1 Description 265
11.4.2 Characteristics 270
11.4.3 Examples 270
11.4.4 Execution Model 273
11.4.5 Futures 276
11.5 Current Status 276
11.6 Related Work 276
11.7 Conclusions 279
12 OpenMP---B. Chapman, D. Eachempati and S. Chandrasekaran 281
12.1 Introduction 281
12.2 Overview 283
12.2.1 Terminology 283
12.2.2 Managing the Data Environment 284
12.2.3 Brief Tour of OpenMP Concepts 285
12.3 OpenMP Features 288
12.3.1 Parallel Regions 288
12.3.2 Synchronization 294
12.3.3 Worksharing 297
12.3.4 Task Parallelism 305
12.3.5 Vectorization 311
12.3.6 Support for Accelerators 313
12.3.7 Region Cancellation 318
12.4 Performance Considerations 320
12.5 Correctness Considerations 321
12.6 Summary and Future Directions 322
13 Cilk Plus---A. Robison and C. Leiserson 323
13.1 Introduction 323
13.2 Vector Parallelism 325
13.2.1 Array Notation 325
13.2.2 Pragma SIMD 329
13.2.3 SIMD-Enabled Functions 330
13.3 Thread Parallelism 332
13.3.1 Reducers 334
13.4 Reasoning about Parallel Performance 338
13.5 Reasoning about Races 345
13.6 Practical Tips 347
13.7 History 351
13.8 Summary 352
14 Intel Threading Building Blocks---A. Kukanov 353
14.1 Introduction 353
14.1.1 Overview 353
14.1.2 General Information 354
14.2 Generic Parallel Algorithms 355
14.2.1 Parallelizing Simple Loops 355
14.2.2 Processing Data in STL Containers 356
14.2.3 Complex Iteration Spaces 358
14.2.4 Other Algorithms 361
14.3 Flow Graph 362
14.3.1 Overview 362
14.3.2 Node Communication Protocol 363
14.3.3 Control Dependency Graphs 364
14.3.4 Data Flow Graphs 367
14.3.5 Choosing Between a Flow Graph, Algorithms, or an Acyclic Graph of Tasks 371
14.4 Summary 371
15 Compute Unified Device Architecture—W. Hwu and D. Kirk 373
15.1 A Brief History Leading to CUDA 373
15.2 CUDA Program Structure 375
15.3 A Vector Addition Example 376
15.4 Device Memories and Data Transfer 379
15.5 Kernel Functions and Threading 381
15.6 More on CUDA Thread Organization 385
15.7 Mapping Threads to Multidimensional Data 387
15.8 Synchronization and Transparent Scalability 389
15.9 Assigning Resources to Blocks 391
15.10 CUDA Streams and Task Parallelism 391
15.11 Summary 397
16 OpenCL: the Open Computing Language—T. Mattson 399
16.1 The Language of Computing and OpenCL 399
16.2 Base Definitions 400
16.3 Computers, Programming, and Heterogeneity 400
16.4 The Birth of OpenCL 402
16.5 OpenCL’s Core Models 403
16.5.1 Platform Model 404
16.5.2 Execution Model 405
16.5.3 Memory Model 409
16.5.4 Programming Models 413
16.6 OpenCL Host Programs: Vector Addition Example 414
16.7 Closing Thoughts 425
References 429