We introduce so-called functional input neural networks defined on infinite-dimensional weighted spaces, where an additive family serves as the hidden layer maps and a non-linear activation function is applied to each hidden layer. Relying on approximation theory based on Stone–Weierstrass and Nachbin-type theorems on weighted spaces, we prove global universal approximation results for (differentiable and) continuous functions, going beyond approximation on compact sets. This applies in particular to the approximation of (non-anticipative) path-space functionals, both via functional input neural networks and via linear maps of the signature of the respective paths. We apply these results in the context of stochastic portfolio theory to generate path-dependent portfolios that are trained to outperform the market portfolio. The talk is based on joint work with Philipp Schmocker and Josef Teichmann.
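To give a concrete sense of the signature mentioned above: the (truncated) signature of a path collects its iterated integrals, and linear maps of these features can approximate path functionals. A minimal sketch for a piecewise-linear path, truncated at level 2 and assembled via Chen's identity (an illustrative helper, not the construction from the talk):

```python
import numpy as np

def truncated_signature(path):
    """Levels 1 and 2 of the signature of a piecewise-linear path,
    given as an (n_points, d) array, assembled via Chen's identity."""
    d = path.shape[1]
    s1 = np.zeros(d)          # level 1: the total increment
    s2 = np.zeros((d, d))     # level 2: iterated integrals
    for a, b in zip(path[:-1], path[1:]):
        inc = b - a
        # Chen's identity for appending one linear segment:
        # new level 2 = old level 2 + (old level 1) x (segment)
        #               + segment's own level 2, which is inc x inc / 2
        s2 += np.outer(s1, inc) + np.outer(inc, inc) / 2.0
        s1 += inc
    return s1, s2

# Example: an L-shaped path in the plane; the antisymmetric part
# of s2 encodes the Levy area of the path.
path = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
s1, s2 = truncated_signature(path)
```

A linear functional of the flattened features `(s1, s2)` then plays the role of the linear maps of the signature referred to in the abstract; the shuffle identity, e.g. `s2 + s2.T == outer(s1, s1)`, reflects the algebraic structure these features carry.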
Mon, 28 Nov 2022
15:30 - 16:30