Instance-optimality in probability with an $\ell_1$-minimization decoder
Academic Article

Overview


Abstract

Let $\Phi(\omega)$, $\omega \in \Omega$, be a family of $n \times N$ random matrices whose entries $\phi_{i,j}$ are independent realizations of a symmetric, real random variable $\eta$ with expectation $E\eta = 0$ and variance $E\eta^2 = 1/n$. Such matrices are used in compressed sensing to encode a vector $x \in \mathbb{R}^N$ by $y = \Phi x$. The information $y$ holds about $x$ is extracted by using a decoder $\Delta : \mathbb{R}^n \to \mathbb{R}^N$. The most prominent decoder is the $\ell_1$-minimization decoder, which gives for a given $y \in \mathbb{R}^n$ the element $\Delta(y) \in \mathbb{R}^N$ which has minimal $\ell_1$-norm among all $z \in \mathbb{R}^N$ with $\Phi z = y$. This paper is interested in properties of the random family $\Phi(\omega)$ which guarantee that the vector $\bar{x} := \Delta(\Phi x)$ will with high probability approximate $x$ in $\ell_2^N$ to an accuracy comparable with the best $k$-term error of approximation in $\ell_2^N$ for the range $k \le a n / \log_2(N/n)$. This means that for the above range of $k$, for each signal $x \in \mathbb{R}^N$, the vector $\bar{x} := \Delta(\Phi x)$ satisfies $\|x - \bar{x}\|_{\ell_2^N} \le C \inf_{z \in \Sigma_k} \|x - z\|_{\ell_2^N}$ with high probability on the draw of $\Phi$. Here, $\Sigma_k$ consists of all vectors with at most $k$ nonzero coordinates. The first result of this type was proved by Wojtaszczyk [P. Wojtaszczyk, Stability and instance optimality for Gaussian measurements in compressed sensing, Found. Comput. Math., in press], who showed this property when $\eta$ is a normalized Gaussian random variable. We extend this property to more general random variables, including the particular case where $\eta$ is the Bernoulli random variable which takes the values $\pm 1/\sqrt{n}$ with equal probability. The proofs of our results use geometric mapping properties of such random matrices, some of which were recently obtained in [A. Litvak, A. Pajor, M. Rudelson, N. Tomczak-Jaegermann, Smallest singular value of random matrices and geometry of random polytopes, Adv. Math. 195 (2005) 491-523]. © 2009 Elsevier Inc. All rights reserved.
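The setup in the abstract can be sketched numerically: draw a Bernoulli matrix $\Phi$ with entries $\pm 1/\sqrt{n}$, encode a $k$-sparse signal as $y = \Phi x$, and decode by $\ell_1$-minimization cast as a linear program ($\min \mathbf{1}^\top u$ subject to $-u \le z \le u$, $\Phi z = y$). This is a minimal illustrative sketch, not code from the paper; the dimensions, seed, and use of `scipy.optimize.linprog` are assumptions for the demo.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N, k = 40, 128, 4  # k well inside the range k <= a*n/log2(N/n)

# Bernoulli matrix: entries +-1/sqrt(n) with equal probability,
# so each entry has mean 0 and variance 1/n, as in the abstract.
Phi = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)

# A k-sparse signal x and its encoding y = Phi x.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = Phi @ x

# l1-minimization as an LP over variables (z, u) in R^{2N}:
# minimize sum(u) subject to z - u <= 0, -z - u <= 0, Phi z = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((n, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]  # the decoded vector Delta(Phi x)

# For k this small relative to n, exact recovery holds with high
# probability on the draw of Phi.
print(np.max(np.abs(x_hat - x)))
```

For a $k$-sparse input the best $k$-term error is zero, so instance-optimality predicts exact recovery here; for a general (non-sparse) $x$ the same decoder returns an approximation whose $\ell_2$ error is comparable to the best $k$-term error, as stated in the abstract.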