Author
Murray, M
Tanner, J
Journal title
2018 IEEE Data Science Workshop (DSW)
DOI
10.1109/DSW.2018.8439894
Abstract
Deep convolutional sparse coding (D-CSC) is a framework reminiscent of deep convolutional neural networks (DCNNs), but by omitting the learning of the dictionaries one can more transparently analyse the role of the activation function and its ability to recover activation paths through the layers. Papyan, Romano, and Elad analysed such an architecture [1], showed its relationship to DCNNs, and proved conditions under which a D-CSC is guaranteed to recover the activation paths. A technical innovation of their work is that the efficacy of the ReLU activation function in a DCNN can be viewed through a new variant of sparsity, referred to as stripe-sparsity, with which they prove that the density of activations can be proportional to the ambient dimension of the data. We extend their uniform guarantees to a slightly modified model and prove that, with high probability, the desired activations can typically be recovered at a greater density of activations per layer. Our extension follows from incorporating the prior work on one-step thresholding by Schnass and Vandergheynst into the appropriately modified architecture of [1].
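The layered recovery the abstract describes can be sketched in a few lines: each layer applies the transpose of its (fixed) dictionary followed by a thresholding nonlinearity, which for nonnegative codes coincides with a shifted ReLU. The Python sketch below is a minimal illustration of one-step thresholding applied layer by layer; the function names (soft_threshold, dcsc_recover), the random dense dictionaries, and the threshold values are all assumptions for demonstration, not the paper's model, which uses convolutional dictionaries and coherence-dependent threshold conditions to guarantee support recovery.

import numpy as np

def soft_threshold(x, tau):
    # One-sided soft-thresholding; for nonnegative codes this is a
    # shifted ReLU: max(x - tau, 0).
    return np.maximum(x - tau, 0.0)

def dcsc_recover(y, dictionaries, thresholds):
    # Recover activations layer by layer via one-step thresholding.
    # Layer i estimates its code as gamma_i = S_tau_i(D_i^T gamma_{i-1}),
    # with gamma_0 = y; the thresholding operator S_tau plays the role
    # of the ReLU in a DCNN forward pass.
    gamma = y
    path = []
    for D, tau in zip(dictionaries, thresholds):
        gamma = soft_threshold(D.T @ gamma, tau)
        path.append(gamma)
    return path

# Toy usage with random unit-norm dictionaries (illustrative only;
# dimensions and thresholds are arbitrary choices).
rng = np.random.default_rng(0)
dims = [64, 128, 256]            # ambient dimension, then code dimensions
y = rng.standard_normal(dims[0])
Ds = []
for n, m in zip(dims[:-1], dims[1:]):
    D = rng.standard_normal((n, m))
    Ds.append(D / np.linalg.norm(D, axis=0))  # normalise the atoms
activation_path = dcsc_recover(y, Ds, thresholds=[0.5, 0.5])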
Publication type
Conference Paper
ISBN-13
9781538644119
Publication date
20 Aug 2018