In recent years, deep learning algorithms have been applied to numerous classical problems in mathematical finance. In particular, deep learning has been employed to numerically solve high-dimensional derivative pricing and hedging tasks. The theoretical foundations of deep learning for these tasks, however, are far less developed. In this talk, we start by revisiting deep hedging and introduce a recently developed adversarial training approach that makes it more robust. We then present our recent results on theoretical foundations for approximating option prices, solutions to jump-diffusion PDEs, and optimal stopping problems using (random) neural networks, which yield more explicit convergence guarantees. We address neural network expressivity, highlight challenges in analysing optimization errors, and show the potential of random neural networks for mitigating these difficulties.
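To fix ideas, a "random neural network" in the random-features sense keeps the hidden-layer weights frozen at their random initialization and trains only the linear output layer, which reduces optimization to a convex least-squares problem. The sketch below is a minimal illustration of this idea, not the speakers' implementation; the target function (a smoothed call payoff) is a hypothetical stand-in for an option price curve.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden = 200
A = rng.normal(size=n_hidden)  # fixed random hidden weights (never trained)
b = rng.normal(size=n_hidden)  # fixed random biases (never trained)

def features(x):
    # ReLU hidden layer with frozen random parameters; shape (n_samples, n_hidden).
    return np.maximum(np.outer(x, A) + b, 0.0)

# Illustrative target: a softplus-smoothed call payoff as a stand-in price curve.
s = np.linspace(0.5, 1.5, 400)
f = np.log1p(np.exp(10.0 * (s - 1.0))) / 10.0

Phi = features(s)
# Only the output layer is trained: a linear least-squares fit.
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)

max_err = np.abs(Phi @ w - f).max()
```

Because the only trainable parameters enter linearly, the training error of such networks can be analysed with classical tools, which is one reason they help mitigate the optimization-error difficulties mentioned above.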