17 January 2013
The essential information contained in most large data sets is small when compared to the size of the data set. That is, the data can be well approximated using relatively few terms in a suitable transformation. Compressed sensing and matrix completion show that this simplicity in the data can be exploited to reduce the number of measurements. For instance, if a vector of length $N$ can be represented exactly using $k$ terms of a known basis, then $2k\log(N/k)$ measurements are typically sufficient to recover the vector exactly. This can result in dramatic time savings when $k \ll N$, which is typical in many applications such as medical imaging. As another example, consider an $m \times n$ matrix of rank $r$. This class of matrices has $r(m+n-r)$ degrees of freedom. Computationally simple and efficient algorithms are able to recover random rank $r$ matrices from only about 10% more measurements than the number of degrees of freedom.
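The two counts quoted above can be sketched numerically. The snippet below (a minimal illustration, not an implementation of any recovery algorithm) evaluates the compressed-sensing rule of thumb $2k\log(N/k)$ and the degrees-of-freedom count $r(m+n-r)$, which follows from factoring a rank-$r$ matrix as $A = LR$ with $L$ of size $m \times r$ and $R$ of size $r \times n$, minus the $r^2$-dimensional gauge freedom $LR = (LG)(G^{-1}R)$. The specific values of $N$, $k$, $m$, $n$, and $r$ are arbitrary examples.

```python
import math

def cs_measurements(N, k):
    # Rule-of-thumb measurement count for recovering a k-sparse
    # length-N vector: about 2*k*log(N/k).
    return math.ceil(2 * k * math.log(N / k))

def matrix_dof(m, n, r):
    # Degrees of freedom of a rank-r m x n matrix:
    # r*(m+n) parameters in the factorization A = L @ R,
    # minus the r*r gauge freedom, giving r*(m+n-r).
    return r * (m + n - r)

# Example values (arbitrary): a sparse vector and a low-rank matrix.
N, k = 10**6, 100
m, n, r = 1000, 1000, 10

print(cs_measurements(N, k))          # far fewer than N measurements
print(matrix_dof(m, n, r))            # 19900, far fewer than m*n
print(math.ceil(1.1 * matrix_dof(m, n, r)))  # ~10% above the DOF count
```

For the matrix example, $r(m+n-r) = 19{,}900$ is under 2% of the $10^6$ entries, which is what makes recovery from so few measurements striking.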
- Industrial and Applied Mathematics Seminar