Scaling shared-bus multiprocessors with multiple busses and shared caches: a performance study

Abstract: "The main limitation of shared-bus multiprocessors is that the common bus tends to be the primary source for contention, and thus imposes a limit on the number of processors in the system. Alternate architectural features are necessary to reduce the memory bandwidth demands and to inc...

Bibliographic Details
Main Authors: Bertoni, Jonathan (Author), Baer, Jean-Loup (Author), Wang, Wen-Hann (Author)
Format: Book
Language: English
Published: Seattle, Wash. 1991
Series: University of Washington <Seattle, Wash.> / Department of Computer Science: Technical report 91,9,4
Subjects:
Summary: "The main limitation of shared-bus multiprocessors is that the common bus tends to be the primary source of contention, and thus imposes a limit on the number of processors in the system. Alternate architectural features are necessary to reduce the memory bandwidth demands and to increase the bus bandwidth. In this paper, we investigate the cost-performance effects of two enhancements: higher bus transaction rates, e.g., through the use of multiple busses, and shared two-level caches. The performance figures are obtained via simulation with loads derived from traces of real applications, some of which show a significant skew in the distribution of memory bank accesses. A new multiple bus scheme, called multiple interleaved busses, is described and analyzed. This scheme is a generalization of previous approaches, and attempts to balance performance and cost tradeoffs in a snoopy-cache multiprocessor environment. The results from simulation show that multiple interleaved busses perform almost as well as multiple independent busses, but with a simpler and less costly implementation. Furthermore, multiple interleaved busses are shown to deliver much better performance than interleaved busses when the skew of accesses across the interleaves is large. Shared second-level caches have been shown to be very effective in the design space under consideration. Such systems might offer considerable implementation economies at relatively small design cost. We show that, depending on the design point in question, bus operation buffers might be useful in shared second-level caches by reducing the effects of high skew and a greater degree of multiprocessing. With these buffers present, the use of shared caches results in only a small throughput degradation."
Description: 22 p.
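
Note: the report itself is not reproduced in this record, so the following is only a rough illustrative sketch of the general idea of address-interleaved bus selection in a snoopy-cache system, not the authors' actual scheme; LINE_SIZE, NUM_BUSES, and bus_for_address are assumed example names and parameters.

    # Illustrative sketch only: generic address-interleaved bus selection,
    # not the design described in the report.
    LINE_SIZE = 32   # assumed cache-line size in bytes
    NUM_BUSES = 4    # assumed number of interleaved busses

    def bus_for_address(addr: int) -> int:
        """Statically map a cache line to one bus via low-order line-address bits.

        With interleaved busses, each memory line belongs to exactly one bus,
        so snoops for that line occur on that bus only; with independent
        busses, a request may use any free bus, but every cache must snoop
        all of them.
        """
        line_number = addr // LINE_SIZE
        return line_number % NUM_BUSES

    # Consecutive cache lines spread round-robin across the busses.
    for addr in range(0, 8 * LINE_SIZE, LINE_SIZE):
        print(hex(addr), "-> bus", bus_for_address(addr))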
