Run-time parallelization: Its time has come

abstract

  • Current parallelizing compilers cannot identify a significant fraction of parallelizable loops because these loops have complex or statically insufficiently defined access patterns. Such loops occur mostly in irregular, dynamic applications, which represent more than 50% of all applications [K. Kennedy, Compiler technology for machine-independent programming, Int. J. Paral. Prog. 22 (1) (1994) 79-98]. The success of parallel computing has therefore become conditioned on the ability of compilers to analyze and extract the parallelism of irregular applications. In this paper we present a survey of techniques that can complement current compiler capabilities by performing some form of data dependence analysis during program execution, when all information is available. After describing the problem of loop parallelization and its difficulties, we give a general overview of the need for run-time parallelization techniques. We then survey the various approaches to parallelizing partially parallel loops and fully parallel loops. Special emphasis is placed on two parallelism-enabling transformations, privatization and reduction parallelization, because of their proven efficiency. The technique of speculatively parallelizing doall loops is presented in more detail. The survey limits itself to the domain of Fortran applications parallelized mostly under the shared-memory paradigm. Related work from the fields of parallel debugging and parallel simulation is also described. © 1998 Elsevier Science B.V. All rights reserved.
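
  The loop patterns named in the abstract can be illustrated concretely. The sketch below is not taken from the paper: the loop bodies, the array names a, b, and idx, and the use of C with OpenMP directives are assumptions made purely for exposition. It shows (1) an indirectly indexed loop whose dependences cannot be resolved at compile time and therefore call for run-time (possibly speculative) dependence testing, (2) a loop made parallel by privatizing a scratch variable, and (3) a reduction that can be parallelized with per-thread partial results.

    /* Illustrative sketch only -- not code from the paper. Loop bodies,
     * array names, and the use of C/OpenMP are assumptions for exposition. */
    #include <stdio.h>

    #define N 1000

    double a[N], b[N];
    int idx[N];

    int main(void)
    {
        for (int i = 0; i < N; i++) {     /* hypothetical initialization */
            a[i] = i;
            b[i] = 0.5 * i;
            idx[i] = (i * 7) % N;
        }

        /* 1. Irregular access: whether iterations are independent depends on
         *    the run-time contents of idx[], so a static compiler must assume
         *    a cross-iteration dependence. A run-time dependence test (or
         *    speculative parallel execution followed by a validity check) can
         *    prove the loop is a doall when idx[] turns out to be a
         *    permutation. */
        for (int i = 0; i < N; i++)
            a[idx[i]] += b[i];

        /* 2. Privatization: t is written before it is read in every
         *    iteration, so each thread may keep a private copy, which removes
         *    the apparent cross-iteration dependence. */
        double t;
        #pragma omp parallel for private(t)
        for (int i = 0; i < N; i++) {
            t = b[i] * b[i];
            a[i] += t;
        }

        /* 3. Reduction parallelization: the only cross-iteration dependence
         *    is the associative accumulation into sum, so threads can build
         *    partial sums that are combined at the end. */
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }

  In this sketch the first loop is left serial because its legality as a doall can only be decided once idx[] is known; the second and third loops show the two enabling transformations the survey emphasizes, expressed here through standard OpenMP private and reduction clauses.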

published proceedings

  • Parallel Computing

author list (cited authors)

  • Rauchwerger, L.

citation count

  • 24

complete list of authors

  • Rauchwerger, L.

publication date

  • May 1998