A survey of direct methods for sparse linear systems (Academic Article)

abstract

  • Wilkinson defined a sparse matrix as one with enough zeros that it pays to take advantage of them. This informal yet practical definition captures the essence of the goal of direct methods for solving sparse matrix problems. They exploit the sparsity of a matrix to solve problems economically: much faster and using far less memory than if all the entries of a matrix were stored and took part in explicit computations. These methods form the backbone of a wide range of problems in computational science. A glimpse of the breadth of applications relying on sparse solvers can be seen in the origins of matrices in published matrix benchmark collections (Duff and Reid 1979a; Duff, Grimes and Lewis 1989a; Davis and Hu 2011). The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems, so that the reader can both understand the methods and know how best to use them.
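
  As a concrete illustration of what "taking advantage of the zeros" means in practice (a minimal sketch, not drawn from the survey itself), the snippet below stores a small matrix in compressed sparse column (CSC) form and solves Ax = b with a sparse direct solver; the example matrix and the use of SciPy's scipy.sparse.linalg.spsolve are illustrative assumptions, not part of this record.

    # Minimal sketch (assumption): keep only the nonzeros in CSC storage and
    # call a sparse direct solver instead of forming and factorizing a dense matrix.
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import spsolve

    # A 4x4 matrix with 7 nonzeros; the zero entries are neither stored
    # nor touched during the factorization.
    A = csc_matrix(np.array([
        [4.0, 0.0, 0.0, 1.0],
        [0.0, 3.0, 0.0, 0.0],
        [0.0, 0.0, 2.0, 0.0],
        [1.0, 0.0, 0.0, 5.0],
    ]))
    b = np.array([1.0, 2.0, 3.0, 4.0])

    x = spsolve(A, b)             # direct sparse LU factorize-and-solve
    print(np.allclose(A @ x, b))  # True: x solves the system

  Sparse direct factorizations of this kind, scaled up with fill-reducing orderings and specialized data structures as covered in the survey, are implemented in libraries such as SuperLU and the first author's SuiteSparse (UMFPACK, CHOLMOD, SuiteSparseQR), which high-level solvers like spsolve can call into.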

published proceedings

  • ACTA NUMERICA

altmetric score

  • 8.008

author list (cited authors)

  • Davis, T. A., Rajamanickam, S., & Sid-Lakhdar, W. M.

citation count

  • 122

complete list of authors

  • Davis, Timothy A.; Rajamanickam, Sivasankaran; Sid-Lakhdar, Wissam M.

publication date

  • May 2016