The complexity of parallel computations
Saved in:
Main Author: Wyllie, James C.
Format: Book
Language: English
Published: Ithaca, New York, 1979
Series: Cornell University <Ithaca, NY> / Department of Computer Science: Technical report; 387
Subjects: Multiprocessors; Parallel processing (Electronic computers)
Summary: Recent advances in microelectronics have brought closer to feasibility the construction of computers containing thousands (or more) of processing elements. This thesis addresses the question of effective utilization of such processing power. We study the computational complexity of synchronous parallel computations using a model of computation based on random access machines operating in parallel and sharing a common memory, the P-RAM. Two main areas within the field of parallel computational complexity are investigated. First, we explore the power of the P-RAM model viewed as an abstract computing device. Later, we study techniques for developing efficient algorithms for parallel computers.

We are able to give concise characterizations of the power of deterministic and nondeterministic P-RAMs in terms of the more widely known space and time complexity classes for multi-tape Turing machines. Roughly speaking, time-bounded deterministic P-RAMs are equivalent in power to (can accept the same sets as) space-bounded Turing machines, where the time and space bounds differ by at most a polynomial. In the context of comparing models of computation, we consider such polynomial differences in resources to be insignificant. Adding the feature of nondeterminism to the time-bounded P-RAM changes its power to that of a nondeterministic Turing machine with an exponentially higher running time.

The later sections of the thesis examine algorithm design techniques for parallel computers. We first develop efficient procedures for some common operations on linked lists and arrays. Given this background, we introduce three techniques that permit the design of parallel algorithms that are efficient in terms of both their time and processor requirements. We illustrate the use of these techniques by presenting time- and processor-efficient algorithms for three problems, in each case improving upon the best previously known parallel resource bounds. We show how to compute minimum string edit distances, using the technique of pairwise function composition. We describe an algorithm for the off-line MIN that organizes its computation in the form of a complete binary tree. Finally, we present an algorithm for undirected graph connectivity that relies on redundancy in its representation of the input graph.
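The "efficient procedures for some common operations on linked lists" mentioned in the summary are the setting of the pointer-jumping (list-ranking) idea commonly associated with this thesis. As an illustration only — not code from the thesis — the sketch below simulates the synchronous P-RAM rounds sequentially in Python; the function name and list representation are assumptions made for the example.

```python
# Illustrative sketch: pointer jumping for list ranking on a simulated P-RAM.
# Every element repeatedly "jumps" its successor pointer ahead while
# accumulating the distance covered, halving the remaining chain each round.

def list_rank(next_node):
    """next_node[i] is the successor of element i, or i itself at the tail.
    Returns dist[i] = number of links from i to the tail of the list."""
    n = len(next_node)
    nxt = list(next_node)
    dist = [0 if nxt[i] == i else 1 for i in range(n)]
    changed = True
    while changed:
        changed = False
        # A real P-RAM performs all jumps in one synchronous step; we take a
        # snapshot so that every read precedes every write, as in that model.
        snap_nxt, snap_dist = list(nxt), list(dist)
        for i in range(n):
            j = snap_nxt[i]
            if snap_nxt[j] != j:            # successor is not the tail: jump over it
                dist[i] = snap_dist[i] + snap_dist[j]
                nxt[i] = snap_nxt[j]
                changed = True
    return dist
```

Each iteration of the while-loop corresponds to one synchronous P-RAM step, so a list of n elements is ranked in O(log n) rounds — the kind of time/processor trade-off the summary alludes to.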
Description: Also published as: Ithaca, NY, Cornell Univ., dissertation
Description: 117 columns
Internal format
MARC
LEADER 00000nam a2200000 cb4500
001 BV009892447
003 DE-604
005 00000000000000.0
007 t
008 941109s1979 m||| 00||| engod
035    |a (OCoLC)6062672
035    |a (DE-599)BVBBV009892447
040    |a DE-604 |b ger |e rakddb
041 0  |a eng
049    |a DE-91G
082 0  |a 621.3819544 |b W98c
100 1  |a Wyllie, James C. |e Verfasser |4 aut
245 10 |a The complexity of parallel computations
264  1 |a Ithaca, New York |c 1979
300    |a 117 Sp.
336    |b txt |2 rdacontent
337    |b n |2 rdamedia
338    |b nc |2 rdacarrier
490 1  |a Cornell University <Ithaca, NY> / Department of Computer Science: Technical report |v 387
500    |a Zugl.: Ithaca, NY, Cornell Univ., Diss
520 3  |a Recent advances in microelectronics have brought closer to feasibility the construction of computers containing thousands (or more) of processing elements. This thesis addresses the question of effective utilization of such processing power. We study the computational complexity of synchronous parallel computations using a model of computation based on random access machines operating in parallel and sharing a common memory, the P-RAM. Two main areas within the field of parallel computational complexity are investigated. First, we explore the power of the P-RAM model viewed as an abstract computing device. Later, we study techniques for developing efficient algorithms for parallel computers
520 3  |a We are able to give concise characterizations of the power of deterministic and nondeterministic P-RAMS in terms of the more widely known space and time complexity classes for multi-tape Turing machines. Roughly speaking, time-bounded deterministic P-RAMS are equivalent in power to (can accept the same sets as) space-bounded Turing machines, where the time and space bounds differ by at most a polynomial. In the context of comparing models of computation, we consider such polynomial differences in resources to be insignificant. Adding the feature of nondeterminism to the time-bounded P-RAM changes its power to that of a nondeterministic Turing machine with an exponentially higher running time
520 3  |a The later sections of the thesis examine algorithm design techniques for parallel computers. We first develop efficient procedures for some common operations on linked lists and arrays. Given this background, we introduce three techniques that permit the design of parallel algorithms that are efficient in terms of both their time and processor requirements. We illustrate the use of these techniques by presenting time and processor efficient algorithms for three problems, in each case improving upon the best previously known parallel resource bounds. We show how to compute minimum string edit distances, using the technique of pairwise function composition. We describe an algorithm for the off-line MIN that organizes its computation in the form of a complete binary tree. Finally, we present an algorithm for undirected graph connectivity that relies on redundancy in its representation of the input graph
650  7 |a Multiprocesseurs |2 Rameau
650  7 |a Parallélisme (informatique) |2 Rameau
650  4 |a Multiprocessors
650  4 |a Parallel processing (Electronic computers)
655  7 |0 (DE-588)4113937-9 |a Hochschulschrift |2 gnd-content
810 2  |a Department of Computer Science: Technical report |t Cornell University <Ithaca, NY> |v 387 |w (DE-604)BV006185504 |9 387
999    |a oai:aleph.bib-bvb.de:BVB01-006550103