Parallel incremental compilation
Main author: Gafter, Neal M.
Format: Book
Language: English
Published: Rochester, NY, 1990
Series: University of Rochester <Rochester, NY> / Department of Computer Science: Technical report; 349
Subjects: Compiling (Electronic computers); Parallel processing (Electronic computers)
Summary: Abstract: "The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multiprocessor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms. Historically, the symbol table has been a bottleneck to parallel compilation; no previously described algorithm executes in time less than linear in the number of declarations. We describe new algorithms for parsing using a balanced list representation and type checking based upon attribute grammars modified with a combination of aggregate values and upward remote references. Under some mild assumptions about the language and target program, these phases run in polylogarithmic time using a sublinear number of processors. The design of computer languages has been influenced by the compiler technology available; we show how some language design decisions can simplify the design of a parallel incremental compiler, allowing more efficient algorithms to be used."
Note: Also a doctoral dissertation, University of Rochester, Rochester, NY.
Physical description: IX, 113 p.
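The abstract names parallel prefix as one of the techniques that becomes usable once each compiler phase presents its output as a complete structure rather than a stream. As a minimal illustration of that general pattern (this sketch is not taken from the thesis; the function name and data are invented for illustration), a divide-and-conquer prefix sum has O(log n) recursion depth, and its two recursive calls and per-element combine steps are mutually independent, so on a shared-memory multiprocessor they could run concurrently:

```python
# Illustrative divide-and-conquer prefix sum (the "parallel prefix" /
# scan pattern). Hedged sketch only: it shows the dependence structure
# the technique exploits, not the thesis's actual compiler algorithms.

def prefix_sums(xs):
    """Return the inclusive prefix sums of xs, e.g. [3, 1] -> [3, 4]."""
    n = len(xs)
    if n == 0:
        return []
    if n == 1:
        return [xs[0]]
    mid = n // 2
    # The two halves are independent; in a parallel setting these two
    # recursive calls could execute concurrently.
    left = prefix_sums(xs[:mid])
    right = prefix_sums(xs[mid:])
    offset = left[-1]
    # Each right-half element is adjusted independently of the others,
    # so this combine step is also parallelizable.
    return left + [offset + r for r in right]

print(prefix_sums([3, 1, 4, 1, 5]))  # [3, 4, 8, 9, 14]
```

In a compiler setting the associative operator would be phase-specific (e.g. composing scanner or symbol-table state transitions) rather than integer addition; addition is used here only to keep the sketch self-contained.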
Internal format (MARC)

LEADER  00000nam a2200000 cb4500
001  BV008950527
003  DE-604
005  00000000000000.0
007  t
008  940206s1990 m||| 00||| eng d
035  |a (OCoLC)24875379
035  |a (DE-599)BVBBV008950527
040  |a DE-604 |b ger |e rakddb
041 0  |a eng
049  |a DE-29T
100 1  |a Gafter, Neal M. |e Verfasser |4 aut
245 1 0  |a Parallel incremental compilation
264  1  |a Rochester, NY |c 1990
300  |a IX, 113 S.
336  |b txt |2 rdacontent
337  |b n |2 rdamedia
338  |b nc |2 rdacarrier
490 1  |a University of Rochester <Rochester, NY> / Department of Computer Science: Technical report |v 349
500  |a Zugl.: Rochester, NY, Univ., Diss.
520 3  |a Abstract: "The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multiprocessor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result
520 3  |a Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms
520 3  |a Historically, the symbol table has been a bottleneck to parallel compilation; no previously described algorithm executes in time less than linear in the number of declarations. We describe new algorithms for parsing using a balanced list representation and type checking based upon attribute grammars modified with a combination of aggregate values and upward remote references. Under some mild assumptions about the language and target program, these phases run in polylogarithmic time using a sublinear number of processors. The design of computer languages has been influenced by the compiler technology available; we show how some language design decisions can simplify the design of a parallel incremental compiler, allowing more efficient algorithms to be used.
650  4  |a Compiling (Electronic computers)
650  4  |a Parallel processing (Electronic computers)
655  7  |0 (DE-588)4113937-9 |a Hochschulschrift |2 gnd-content
810 2  |a Department of Computer Science: Technical report |t University of Rochester <Rochester, NY> |v 349 |w (DE-604)BV008902697 |9 349
999  |a oai:aleph.bib-bvb.de:BVB01-005906090