Sparse inverse covariance selection is a powerful tool for estimating sparse graphs in statistical learning, formulated as maximizing a regularized log-likelihood function. In this thesis, we introduce non-convex regularizers into this problem. In particular, we first propose several non-convex regularized maximum likelihood estimation models. Exploiting the specific structure of these regularizers, we then develop a difference-of-convex (DC) programming approach for solving the models. We show that each subproblem of this approach is a weighted graphical lasso problem, and we propose a warm-started alternating direction method of multipliers (ADMM) to solve each subproblem. In addition, we propose a decomposition scheme that extends the exact covariance thresholding technique for graphical lasso to our general non-convex model, which enables us to solve large-scale problems efficiently. Finally, we compare the performance of our approach with some existing approaches on both randomly generated and real-life instances, and report promising computational results.
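The two-level structure described above can be sketched in code. The following is a minimal illustration, not the thesis's actual implementation: the outer DC loop linearizes a non-convex penalty (here a log penalty `p(t) = lam*log(1 + t/eps)` is assumed purely for illustration) at the current iterate, producing elementwise weights, and each resulting weighted graphical lasso subproblem is solved by a standard ADMM with closed-form updates. The penalty choice, parameter values, and stopping rules are all assumptions.

```python
import numpy as np

def weighted_glasso_admm(S, W, rho=1.0, n_iter=200, tol=1e-6):
    """ADMM sketch for min_X -logdet(X) + tr(S X) + sum_ij W_ij |X_ij|."""
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    X = np.eye(p)
    for _ in range(n_iter):
        # X-update: closed form via eigendecomposition of rho*(Z - U) - S
        lam, Q = np.linalg.eigh(rho * (Z - U) - S)
        x_eig = (lam + np.sqrt(lam**2 + 4.0 * rho)) / (2.0 * rho)
        X = (Q * x_eig) @ Q.T          # always symmetric positive definite
        # Z-update: elementwise soft-thresholding with thresholds W / rho
        Z_old = Z
        A = X + U
        Z = np.sign(A) * np.maximum(np.abs(A) - W / rho, 0.0)
        # U-update: dual ascent on the consensus constraint X = Z
        U = U + X - Z
        if np.linalg.norm(Z - Z_old) <= tol * max(np.linalg.norm(Z), 1.0):
            break
    return X

def dc_nonconvex_glasso(S, lam=0.5, eps=0.1, outer=5):
    """DC outer loop: majorize the (assumed) log penalty lam*log(1 + |x|/eps)
    by its linearization, so each subproblem is a weighted graphical lasso."""
    p = S.shape[0]
    X = np.eye(p)
    for _ in range(outer):
        W = lam / (np.abs(X) + eps)    # derivative of the log penalty at X
        np.fill_diagonal(W, 0.0)       # assumption: diagonal is unpenalized
        X = weighted_glasso_admm(S, W)
    return X
```

A warm start, as in the thesis, would pass the previous subproblem's `Z` and `U` into the next ADMM call instead of reinitializing them; that detail is omitted here for brevity.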
Copyright is held by the author.
The author granted permission for the file to be printed, but not for the text to be copied and pasted.
Thesis advisor: Lu, Zhaosong