Synopsis for Introduction to Compressed Sensing and Matrix Completion
Number of lectures: 12 TT
Syllabus
Lectures 1-4: Basics of sparse approximation and compressed sensing
- Representations for compression, introduction to wavelets and their higher-dimensional extensions.
- Introduction to compressed sensing, and contrast with basis pursuit. Discuss recovery using one-step thresholding, with both the worst-case analysis via coherence and the average-case analysis of Schnass; connect with applications. Compressed sensing recovery algorithms: use coherence to show that OMP and l1 minimization also give recovery (sketches of one-step thresholding and OMP follow the syllabus). Connect coherence with uniqueness, the spark, the sqrt(n) bottleneck, and the Dirac comb.
- Introduce the restricted isometry property in the l2 and l1 norms (RIP2 and RIP1). Use coherence with the Gershgorin disk theorem to obtain a simple but weak bound on the restricted isometry constants (RICs). Derive the first RIC bounds of Candès and Tao, noting that much sharper bounds are now known. Use l2 RICs to show that IHT converges (an IHT sketch follows the syllabus).
- Use RICs to show that l1 minimization recovers. Discuss that there are many algorithms, list a few, and note that all of them can be shown to have optimal-order guarantees when the RICs are bounded (a basis pursuit sketch via linear programming follows the syllabus). Introduce the polytope perspective: explain the model, and use the hypercube as a simple first example, deriving results with the face projection algorithm and again with the Winder/Cover theorem. Derive l1 phase transitions from the polytope perspective, going through the calculations. Introduce the non-negativity constraint, state the results, derive uniqueness from Wendel's theorem, and connect with the mean-zero case. Introduce model-based CS and the associated "non-delta decay to zero".
- Introduce matrix completion, contrast it with compressed sensing, and discuss applications.
- Show that if the RICs are bounded then nuclear norm minimization and NIHT are guaranteed to recover the sensed low-rank matrix (a simplified rank-projection sketch follows the syllabus). Discuss entrywise sensing in matrix completion, the natural extension of coherence, and the resulting recovery guarantees. Extensions to recent advances such as joint sparsity models, dictionary learning, and the approximate message passing perspective.
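The following Python/NumPy sketches illustrate some of the algorithms named above. They are minimal illustrations under assumed settings (a Gaussian sensing matrix, noiseless measurements), not the implementations analysed in lectures, and all function names are illustrative. First, mutual coherence together with one-step thresholding (here with a least-squares debias on the selected support, a common variant) and OMP:

```python
import numpy as np

def coherence(A):
    """Mutual coherence: the largest |inner product| between
    distinct unit-normalised columns of A."""
    G = A / np.linalg.norm(A, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

def one_step_thresholding(A, y, k):
    """Select the k columns most correlated with y, then
    least-squares fit on that support."""
    support = np.argsort(np.abs(A.T @ y))[-k:]
    x = np.zeros(A.shape[1])
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, refitting at each step."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Demo: recover a 5-sparse vector from 64 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0
print(coherence(A), np.linalg.norm(omp(A, y, k) - x0))
```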
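A minimal IHT sketch for the third bullet: a gradient step on the residual followed by hard thresholding to the k largest entries. The unit step size is an assumption appropriate when A is RIP-scaled, which is what the bounded-RIC convergence argument requires.

```python
import numpy as np

def iht(A, y, k, iters=300, mu=1.0):
    """Iterative hard thresholding: x <- H_k(x + mu * A^T (y - A x)),
    where H_k keeps the k largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + mu * (A.T @ (y - A @ x))
        x[np.argsort(np.abs(x))[:-k]] = 0.0  # zero all but the k largest
    return x
```

With bounded l2 RICs this iteration contracts linearly to the k-sparse solution, which is the convergence result covered in lectures.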
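For the l1 bullet, basis pursuit is exactly a linear program; a sketch using scipy.optimize.linprog, splitting x into its positive and negative parts:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to Ax = y, as a linear program:
    write x = u - v with u, v >= 0 and minimise sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)             # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])      # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:n] - res.x[n:]
```

Dedicated first-order solvers scale far better; the LP form is shown because it makes the polytope/linear-programming geometry behind the l1 phase transitions concrete.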
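For the matrix recovery bullet, a simplified fixed-step variant of hard-thresholded matrix completion (NIHT additionally adapts its step size): alternate a gradient step on the observed entries with projection onto rank-r matrices by a truncated SVD. The uniform random sampling mask is an assumption for the demo.

```python
import numpy as np

def lowrank_complete(M_obs, mask, r, iters=500):
    """Projected gradient for matrix completion: take a gradient step
    on the observed entries, then project onto rank-r matrices with a
    truncated SVD. (Fixed unit step; NIHT chooses the step adaptively.)"""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X = X + mask * (M_obs - X)           # gradient step on observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]      # keep the top-r singular values
    return X

# Demo on a random rank-2 matrix with about half its entries observed.
rng = np.random.default_rng(1)
m, n, r = 40, 40, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = (rng.random((m, n)) < 0.5).astype(float)
X = lowrank_complete(mask * M, mask, r)
print(np.linalg.norm(X - M) / np.linalg.norm(M))
```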
Course Description
Much of the information we are interested in has an underlying notion of simplicity that allows the data to be easily compressed; for example, images are typically composed of slowly varying regions separated by a few abrupt edges between objects in the scene. This underlying simplicity allows such information to be efficiently compressed in well-known representations such as MP3 and JPEG. Despite knowing that this simplicity is almost surely present, nearly all applications carefully measure the full data set, and compression is applied only after the full data set is at hand, typically to save storage space or to allow further analysis of the data. Compressed sensing established that the data can often be measured at a rate proportional to its compressed size, rather than at its full resolution; moreover, this can be done using the non-adaptive (non-learned) measurements necessary in most applications. The notion extends further to cases where the representation in which the data is simple is not known, and/or where it is impossible to measure the complete data set. For example, low-rank matrices can be recovered from only a small fraction of their entries; applications of matrix completion include online recommender systems, where only a small fraction of customer preferences for products are known, and completing the associated matrix allows prediction of which products a customer may like or dislike. This course will cover the development of compressed sensing and matrix completion, focusing on the design and analysis of algorithms that recover data from compressed and/or incomplete measurements.
Reading List
Reading will be assigned from the following, and, where appropriate, notes provided by the lecturer.
- Michael Elad (2010) Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer.
- Simon Foucart and Holger Rauhut (2013) A Mathematical Introduction to Compressive Sensing, Birkhäuser.
- Alfred M. Bruckstein, David L. Donoho, and Michael Elad (2009) From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images, SIAM Review, Vol. 51, No. 1, Pages 34-81.
- Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo (2010) Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization, SIAM Review, Vol. 52, No. 3, Pages 471-501.
