Date
Mon, 07 Dec 2020
Time
16:00 - 17:00
Speaker
Patrick Cheridito
Organisation
ETH Zurich

We develop a framework for showing that neural networks can overcome the curse of dimensionality in different high-dimensional approximation problems. Our approach is based on the notion of a catalog network, which is a generalization of a standard neural network in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined catalog of functions. As such, catalog networks constitute a rich family of continuous functions. We show that, under appropriate conditions on the catalog, catalog networks can be efficiently approximated with ReLU-type networks, and we provide precise estimates of the number of parameters needed for a given approximation accuracy. As special cases of the general results, we obtain different classes of functions that can be approximated with ReLU networks without the curse of dimensionality.

A preprint is available at https://arxiv.org/abs/1912.04310
