A Multilevel Subtree Method for Single and Batched Sparse Cholesky Factorization
Conference Paper
Abstract
Scientific computing relies heavily on matrix factorization. Cholesky factorization is typically used to solve the linear system Ax = b where A is symmetric and positive definite, and a large number of applications require operating on sparse matrices. A major overhead in factorizing sparse matrices on GPUs is the cost of transferring data from the CPU to the GPU; the computational efficiency of factorizing small dense matrices must also be addressed. In this paper, we develop a multilevel subtree method for Cholesky factorization of large sparse matrices on single and multiple GPUs. This approach addresses two important limitations of previous methods. First, by applying the subtree method to both the lower and the higher levels of the elimination tree, we increase concurrency and improve computational efficiency; previous approaches used the subtree method only at the lower levels. Second, we overlap the computation of one subtree with that of another, thereby reducing the overhead of data transfer from the CPU to the GPU. Additionally, we propose the use of batched parallelism for applications that require simultaneous factorization of multiple matrices: the tree structure of a collection of matrices can be derived by merging their individual trees. Our experimental results show that each of the three techniques results in a significant performance improvement, and their combination yields a speedup of up to 2.43 on a variety of sparse matrices.
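As background for the subtree approach summarized above, the sketch below illustrates, in Python/SciPy, the standard elimination-tree construction (the classic ancestor/path-compression algorithm) on which subtree scheduling is based, and how stacking a batch of matrices block-diagonally merges their trees into a forest. This is a minimal illustration of the underlying concepts, not the paper's GPU implementation; the function name elimination_tree and the toy matrices are assumptions made for the example.

```python
# Minimal sketch (not the paper's GPU code): the elimination tree that
# underlies subtree scheduling, plus a toy "merged tree" for a batch.
import numpy as np
from scipy.sparse import csc_matrix, diags, block_diag

def elimination_tree(A):
    """Return parent[] of the elimination tree of a sparse symmetric matrix.

    Only the sparsity pattern of A is used; parent[j] == -1 marks a root.
    """
    A = csc_matrix(A)
    n = A.shape[0]
    parent = np.full(n, -1, dtype=np.int64)
    ancestor = np.full(n, -1, dtype=np.int64)
    for j in range(n):
        for i in A.indices[A.indptr[j]:A.indptr[j + 1]]:
            i = int(i)
            while i < j:                 # follow ancestor links toward column j
                nxt = ancestor[i]
                ancestor[i] = j          # path compression
                if nxt == -1:            # i had no ancestor yet: j is its parent
                    parent[i] = j
                    break
                i = nxt
    return parent

# Toy example: a tridiagonal SPD pattern gives a chain-shaped elimination tree.
A = diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(6, 6), format="csc")
print(elimination_tree(A))               # [ 1  2  3  4  5 -1]

# Batched case: a block-diagonal stack of matrices merges their elimination
# trees into a forest, so independent subtrees can be scheduled together.
B = diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(4, 4), format="csc")
print(elimination_tree(block_diag([A, B], format="csc")))
# [ 1  2  3  4  5 -1  7  8  9 -1]  -> two roots, one per matrix in the batch
```

Each subtree rooted below a chosen level of this tree can be factorized independently, which is the source of the concurrency that the multilevel subtree method exploits on the GPU.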
Name of conference
Proceedings of the 47th International Conference on Parallel Processing