Seminar series

Date: Fri, 10 May 2019
Time: 12:00 - 13:00
Location: L4
Speaker: John Wright
Organisation: Columbia University

The problem of decomposing a given dataset as a superposition of basic motifs arises in a wide range of application areas, including neural spike sorting and the analysis of astrophysical and microscopy data. Motivated by these problems, we study a "short-and-sparse" deconvolution problem, in which the goal is to recover a short motif $a$ from its convolution with a random spike train $x$. We formulate this problem as optimization over the sphere. We analyze the geometry of this (nonconvex) optimization problem and argue that when the target spike train is sufficiently sparse, over a certain region of the sphere every local minimum is equivalent to the ground truth, up to symmetry (here, a signed shift). This characterization obtains, e.g., for generic kernels of length $k$ when the sparsity rate of the spike train is proportional to $k^{-2/3}$ (i.e., roughly $k^{1/3}$ spikes in each length-$k$ window). This geometric characterization implies that efficient methods obtain the ground truth under the same conditions.
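
As a purely illustrative rendering of this setup, the sketch below generates a synthetic short-and-sparse instance at the sparsity rate quoted above and defines a smooth sphere-constrained surrogate objective. The abstract does not spell out the paper's formulation, so the $\ell^4$-type surrogate and all names (k, n, theta, psi, q) are our assumptions, not the authors'.

```python
import numpy as np

# Illustrative synthetic instance of short-and-sparse deconvolution.
rng = np.random.default_rng(0)
k, n = 50, 10_000          # motif length, signal length

# Ground-truth short motif a, normalized to the unit sphere (a and x are
# only identifiable up to scale, so the sphere is the natural domain).
a = rng.standard_normal(k)
a /= np.linalg.norm(a)

# Bernoulli-Gaussian spike train at sparsity rate theta ~ k^(-2/3),
# i.e. roughly k^(1/3) spikes in each length-k window, as in the abstract.
theta = k ** (-2.0 / 3.0)
x = rng.standard_normal(n) * (rng.random(n) < theta)

# Observation: convolution of motif and spike train (circular, via FFT,
# purely for convenience).
y = np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(x, n)))

def psi(q):
    """Smooth sparsity surrogate on the sphere: 1/4 * sum_i c_i^4, where
    c is the correlation of the data y with a candidate kernel q. This
    l4-type surrogate is a common stand-in in the sphere-constrained
    deconvolution literature, used here only for illustration."""
    c = np.correlate(y, q, mode="valid")
    return 0.25 * np.sum(c ** 4)
```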
Our analysis highlights the key roles of symmetry and negative curvature in the behavior of efficient methods -- in particular, the role of a "dispersive" structure in promoting efficient convergence to global optimizers without the need to explicitly leverage second-order information. We sketch connections to broader families of benign nonconvex problems in machine learning and signal processing, in which efficient methods obtain global optima independent of initialization. These problems include variants of sparse dictionary learning, tensor decomposition, and phase retrieval.
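
To make concrete what "efficient first-order methods suffice" can look like, the following self-contained sketch runs a normalized-gradient fixed-point iteration on the illustrative surrogate above and checks recovery up to the signed-shift symmetry. It is a toy rendering of the phenomenon the abstract describes, not the authors' algorithm.

```python
import numpy as np

# Regenerate the synthetic instance from the sketch above so this
# snippet is self-contained; the same caveats apply.
rng = np.random.default_rng(0)
k, n = 50, 10_000
a = rng.standard_normal(k); a /= np.linalg.norm(a)
x = rng.standard_normal(n) * (rng.random(n) < k ** (-2.0 / 3.0))
y = np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(x, n)))

def grad_psi(q):
    """Euclidean gradient of psi(q) = 1/4 * sum_i c_i^4 with
    c_i = sum_j y[i + j] * q[j]: grad_j = sum_i c_i^3 * y[i + j]."""
    c = np.correlate(y, q, mode="valid")
    return np.correlate(y, c ** 3, mode="valid")

# A first-order-only iteration: q <- grad(q) / ||grad(q)||. Since psi is
# convex as a function on R^k, each step maximizes the linearization of
# psi over the sphere, so psi is nondecreasing along the iterates; no
# Hessian or other second-order information is ever computed.
q = rng.standard_normal(k)
q /= np.linalg.norm(q)
for _ in range(100):
    g = grad_psi(q)
    q = g / np.linalg.norm(g)

# Success is only defined up to the signed-shift symmetry: compare q
# against every circular shift of the true motif a, in both signs.
shifts = np.correlate(np.concatenate([a, a]), q, mode="valid")
print("best |<q, shift(a)>|:", np.abs(shifts).max())  # near 1 => recovery
```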
Joint work with Yuqian Zhang, Yenson Lau, Han-Wen Kuo, Dar Gilboa, Sky Cheung, and Abhay Pasupathy.
