Parallel incremental compilation

Abstract: "The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow fo...

Detailed Description

Bibliographic Details
Main Author: Gafter, Neal M. (author)
Format: Book
Language: English
Published: Rochester, NY 1990
Series: University of Rochester <Rochester, NY> / Department of Computer Science: Technical report 349
Summary: Abstract: "The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multiprocessor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result.
Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than that of sequential algorithms.
Historically, the symbol table has been a bottleneck to parallel compilation; no previously described algorithm executes in time less than linear in the number of declarations. We describe new algorithms for parsing using a balanced list representation and for type checking based upon attribute grammars modified with a combination of aggregate values and upward remote references. Under some mild assumptions about the language and target program, these phases run in polylogarithmic time using a sublinear number of processors. The design of computer languages has been influenced by the compiler technology available; we show how some language design decisions can simplify the design of a parallel incremental compiler, allowing more efficient algorithms to be used."
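Two of the techniques named in the abstract lend themselves to a small illustration. The Python sketch below is not taken from the report; the names scan, Node, and insert are invented here. It shows, in miniature, an exclusive prefix scan structured as divide-and-conquer, whose two recursive halves are independent and could run concurrently on a shared-memory machine, and a path-copying search tree of the applicative kind a symbol table can be built from, so that an incremental update yields a new version while sharing almost all nodes with the old one.

    # Illustrative sketch only -- not code from Gafter's report; all names
    # (scan, Node, insert) are invented for this example.

    from typing import Callable, List, Optional


    def scan(xs: List[int], op: Callable[[int, int], int], unit: int) -> List[int]:
        """Exclusive prefix scan by divide-and-conquer.

        The two recursive calls are independent, so on a shared-memory
        multiprocessor they could run concurrently; the combine step folds
        the left half's total into every entry of the right half, giving
        O(log n) depth with enough processors.
        """
        n = len(xs)
        if n == 0:
            return []
        if n == 1:
            return [unit]
        mid = n // 2
        left = scan(xs[:mid], op, unit)          # independent subproblem
        right = scan(xs[mid:], op, unit)         # independent subproblem
        left_total = op(left[-1], xs[mid - 1])   # reduction of the whole left half
        return left + [op(left_total, r) for r in right]


    class Node:
        """Node of an applicative (persistent) binary search tree."""
        __slots__ = ("key", "value", "left", "right")

        def __init__(self, key, value, left=None, right=None):
            self.key, self.value, self.left, self.right = key, value, left, right


    def insert(root: Optional[Node], key, value) -> Node:
        """Return a new tree containing (key, value); the old tree is untouched.

        Only the nodes on the search path are copied; every subtree off that
        path is shared between the old and new versions, so an incremental
        change builds a new symbol table without invalidating the previous
        one. (Rebalancing is omitted here; a real implementation would keep
        the copied path, and hence the update cost, O(log n).)
        """
        if root is None:
            return Node(key, value)
        if key < root.key:
            return Node(root.key, root.value, insert(root.left, key, value), root.right)
        if key > root.key:
            return Node(root.key, root.value, root.left, insert(root.right, key, value))
        return Node(key, value, root.left, root.right)   # replace an existing binding


    if __name__ == "__main__":
        # Token lengths -> starting offset of each token.
        print(scan([3, 1, 4, 1, 5], lambda a, b: a + b, 0))   # [0, 3, 4, 8, 9]

        v1 = None
        for name in ("main", "argc", "argv"):
            v1 = insert(v1, name, "decl")
        v2 = insert(v1, "envp", "decl")   # v1 remains valid and mostly shared

The sketch only gestures at why phases that emit complete, applicative structures admit fine-grained parallelism and incremental reuse; the report itself develops the balanced-list parsing and attribute-grammar type checking in full.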
Description: Also a doctoral dissertation, University of Rochester, Rochester, NY
Physical Description: IX, 113 pages
