Weighted l1 minimization techniques for compressed sensing and their applications

Date created: 
Compressed sensing
Weighted l1 minimization
High-dimensional function approximation
Parametric differential equations
Multiple measurement vector problem

Compressed sensing (CS) provides effective techniques for recovering a sparse vector from a small number of measurements by finding a solution to an underdetermined linear system. In recent years, CS has attracted substantial attention in applied mathematics, computer science and electrical engineering, and it has the potential to improve many applications, such as medical imaging and function approximation. One standard technique for solving the CS problem is l1 minimization; however, its performance may be limited in many practical applications. Hence, in the past few years, there have been many investigations into how to modify the l1 minimization approach to achieve better performance. One such approach is weighted l1 minimization. In this thesis, we extend the weighted l1 minimization technique, traditionally used to solve the standard CS problem, to other applications. First, we develop a variance-based joint sparse (VBJS) algorithm based on weighted l1 minimization to solve the multiple measurement vector (MMV) problem. Unlike the standard l2,1 minimization method for this problem, the VBJS method is easily parallelizable. Moreover, we observe through various numerical experiments that the VBJS method often requires fewer measurements to reach the same accuracy as the l2,1 minimization method. Second, we apply weighted l1 minimization to the high-dimensional function approximation problem, focusing on the case of gradient-augmented measurements. High-dimensional function approximation has many applications, including uncertainty quantification (UQ), where it arises in the task of approximating a quantity of interest (QoI) of a parametric differential equation (DE). For a fixed computational cost, we observe in various examples that additional gradient information often yields better approximation results than non-gradient-augmented sampling.
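To make the central technique concrete, the following is a minimal sketch of weighted l1 minimization for sparse recovery, solved here by iterative soft-thresholding (ISTA) on the penalized form min_x 0.5||Ax - b||^2 + lam * sum_i w_i|x_i|. This is an illustrative toy, not the thesis's VBJS implementation; the matrix, weights, and parameter values are assumptions. Smaller weights are placed on entries believed to be in the support, encoding the kind of prior knowledge that weighted l1 minimization exploits.

```python
import numpy as np

def weighted_l1_ista(A, b, weights, lam=1e-3, iters=3000):
    """Proximal gradient descent with a weighted soft-thresholding prox."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                # gradient of the smooth data-fit term
        z = x - step * g
        thresh = step * lam * weights        # per-entry threshold: weight w_i scales lam
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

rng = np.random.default_rng(0)
m, n = 15, 30                                # underdetermined: fewer rows than columns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[3], x_true[17] = 1.0, -1.0            # 2-sparse ground truth
b = A @ x_true

# Prior knowledge: smaller weights (weaker penalty) on the suspected support
weights = np.ones(n)
weights[[3, 17]] = 0.1
x_hat = weighted_l1_ista(A, b, weights)
```

With uniform weights this reduces to standard l1 minimization; down-weighting the suspected support is what allows recovery from fewer measurements when the prior is accurate.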
Theoretically, we prove that, with the same sample complexity as the function-samples-only case, the gradient-augmented problem admits an error bound in a stronger Sobolev norm, as opposed to the L2 norm. Finally, we use the adjoint sensitivity analysis method to compute the gradient information. As we show, this method computes the gradient samples of the QoI of a parametric DE at approximately the same computational cost as computing the samples of the QoI itself. We apply this approach to several parametric DE problems, and numerically demonstrate the benefits of incorporating gradient information into the approximation procedure.
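The benefit of gradient-augmented measurements can be illustrated with a hedged one-dimensional toy example (an assumed setup, not taken from the thesis): fitting a degree-5 polynomial to exp(x) from only 3 sample points. Each derivative sample contributes an extra row to the least-squares system, so the same sample points yield twice as many equations.

```python
import numpy as np

f  = np.exp            # target function on [-1, 1]
df = np.exp            # its derivative (exp is its own derivative)
pts = np.array([-1.0, 0.0, 1.0])
deg = 5                # monomial basis 1, x, ..., x^5: 6 unknown coefficients

V  = np.vander(pts, deg + 1, increasing=True)        # rows for value samples
dV = np.hstack([np.zeros((3, 1)),
                V[:, :-1] * np.arange(1, deg + 1)])  # rows for derivative samples

# Values only: 3 equations, 6 unknowns (minimum-norm least-squares solution)
c_plain, *_ = np.linalg.lstsq(V, f(pts), rcond=None)

# Gradient-augmented: 6 equations, 6 unknowns from the same 3 points
c_grad, *_ = np.linalg.lstsq(np.vstack([V, dV]),
                             np.concatenate([f(pts), df(pts)]), rcond=None)

xs = np.linspace(-1, 1, 201)
Vx = np.vander(xs, deg + 1, increasing=True)
err_plain = np.max(np.abs(Vx @ c_plain - f(xs)))
err_grad  = np.max(np.abs(Vx @ c_grad - f(xs)))
```

Here `err_grad` is markedly smaller than `err_plain`: the derivative rows pin down the higher-order coefficients that the value samples alone leave unconstrained. This mirrors, in miniature, the thesis's point that gradient information improves accuracy at a fixed sampling budget.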

Document type: 
This thesis may be printed or downloaded for non-commercial research and scholarly purposes. Copyright remains with the author.
Senior supervisor: 
Ben Adcock
Science: Department of Mathematics
Thesis type: 
(Thesis) Ph.D.