Shared memory consistency conditions for non-sequential execution: definitions and programming strategies
Conference Paper
abstract
© 1993 ACM. To enhance performance on shared memory multiprocessors, various techniques have been proposed to reduce the latency of memory accesses, including pipelining of accesses, out-of-order execution of accesses, and branch prediction with speculative execution. These optimizations, however, can complicate the user's model of memory. This paper attacks the problem of simplifying programming on two fronts. First, a general framework is presented for defining shared memory consistency conditions that allows non-sequential execution of memory accesses. The interface at which the conditions are defined lies between the program and the system and is architecture-independent. The framework is used to generalize four known consistency conditions (sequential consistency, hybrid consistency, weak consistency, and release consistency) for non-sequential execution. Second, several techniques are described for structuring programs so that a shared memory providing the weaker (and more efficient) condition of hybrid consistency appears to guarantee the stronger (and more costly) condition of sequential consistency. The benefit is that sequentially consistent executions are easier to reason about. The first and second techniques statically classify accesses based on their type; this approach is extremely simple to use and leads to a general methodology for writing efficient synchronization code. The third technique is to avoid data races in the program; this technique also works on a simple variant of release consistent hardware, with an appropriate change to the definition of data race.
name of conference
Proceedings of the fifth annual ACM symposium on Parallel algorithms and architectures - SPAA '93