In recent years, a new class of deep neural networks has emerged that is rooted in model-based iterative algorithms for solving inverse problems. We call these model-based neural networks deep unfolding networks (DUNs). The term reflects their formulation: the iterations of an optimization algorithm are “unfolded” into the layers of a neural network, which then solves the inverse problem at hand. Since their advent, DUNs have been employed to tackle assorted problems, e.g., compressed sensing (CS), denoising, super-resolution, and pansharpening.
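To make the unfolding idea concrete, below is a minimal sketch, not the specific architecture discussed in the talk, of how the iterative soft-thresholding algorithm (ISTA) for sparse recovery can be unfolded into a trainable network in the spirit of LISTA; the class name, layer count, and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """Hypothetical deep unfolding network: each layer mirrors one ISTA
    iteration for the sparse recovery problem y = A x + noise, with the
    step matrix and threshold of every iteration made learnable."""

    def __init__(self, A: torch.Tensor, num_layers: int = 10):
        super().__init__()
        step = 1.0 / torch.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
        self.num_layers = num_layers
        self.register_buffer("A", A)
        # Per-layer learnable parameters, initialized from the ISTA update.
        self.W = nn.ParameterList(
            [nn.Parameter(step * A.t().clone()) for _ in range(num_layers)]
        )
        self.theta = nn.ParameterList(
            [nn.Parameter(torch.tensor(0.1)) for _ in range(num_layers)]
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for k in range(self.num_layers):
            # Gradient step on the data-fidelity term ...
            r = x - (x @ self.A.t() - y) @ self.W[k].t()
            # ... followed by soft-thresholding (the proximal step).
            x = torch.sign(r) * torch.clamp(r.abs() - self.theta[k], min=0.0)
        return x

# Illustrative usage with a random 32x128 sensing matrix and sparse signals.
torch.manual_seed(0)
A = torch.randn(32, 128) / 32 ** 0.5
x_true = torch.zeros(4, 128)
x_true[:, :5] = torch.randn(4, 5)
y = x_true @ A.t()
x_hat = UnfoldedISTA(A, num_layers=10)(y)  # reconstruction after 10 "iterations"
```

In contrast to running ISTA with fixed parameters, the unfolded network is trained end-to-end on example pairs, so a small, fixed number of layers can reach an accuracy that the classical iteration would need far more iterations to match.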
In this talk, we will revisit the application of DUNs to the CS problem, which pertains to reconstructing data from incomplete observations. We will present recent trends in the broader family of DUNs for CS and dive into their theory, which mainly revolves around their generalization performance; the latter is important because it informs us about the behaviour of a neural network on examples it has never encountered during training.
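For reference, a standard formulation of the CS problem, with notation chosen here for illustration: given a sensing matrix $\Phi \in \mathbb{R}^{m \times n}$ with $m \ll n$ and noisy measurements $y = \Phi x + e$, the signal is commonly recovered by solving a regularized least-squares problem,
\[
\hat{x} \;=\; \operatorname*{arg\,min}_{x \in \mathbb{R}^{n}} \; \tfrac{1}{2}\,\|y - \Phi x\|_2^2 \;+\; \lambda \,\|x\|_1 ,
\]
and it is precisely the proximal-gradient iterations of such problems that a DUN unfolds into its layers.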
In particular, we will focus on overparameterized DUNs, which exhibit remarkable performance in terms of reconstruction and generalization error. As supported by our theoretical and empirical findings, the generalization performance of overparameterized DUNs depends on their structural properties. Our analysis lays a solid mathematical foundation for developing more stable, robust, and efficient DUNs, boosting their real-world performance.
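For completeness, the generalization error referred to above is, in standard learning-theoretic notation (the symbols here are assumed, not taken from the talk), the gap between the expected and empirical risks:
\[
\mathrm{GE}(h_{\mathcal{S}}) \;=\; \Bigl|\, \mathbb{E}_{(y,x)\sim \mathcal{D}}\,\ell\bigl(h_{\mathcal{S}}(y), x\bigr) \;-\; \frac{1}{s}\sum_{i=1}^{s} \ell\bigl(h_{\mathcal{S}}(y_i), x_i\bigr) \,\Bigr|,
\]
where $h_{\mathcal{S}}$ is the DUN trained on a set $\mathcal{S}=\{(y_i,x_i)\}_{i=1}^{s}$, $\ell$ is the reconstruction loss, and $\mathcal{D}$ is the data distribution; a small gap means the network reconstructs unseen measurements about as well as those it was trained on.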