Deep learning has revolutionized image, text, and speech recognition. Motivated by this success, there is growing interest in developing deep learning methods for financial applications. We will present some of our recent results in this area, including deep learning models of high-frequency data. In the second part of the talk, we prove a law of large numbers for single-layer neural networks trained with stochastic gradient descent. We show that, depending on the normalization of the parameters, the limit satisfies either a deterministic partial differential equation or a random ordinary differential equation. Using a similar analysis, we also establish a law of large numbers for reinforcement learning (e.g., Q-learning) with neural networks. We discuss the limit equations in each of these cases (e.g., whether existence of a unique stationary point and global convergence can be proven).
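
To make the PDE statement concrete, the following is a minimal sketch of a standard mean-field setup for this type of result; the $1/N$ normalization, the time scaling, and the form of the drift are illustrative assumptions rather than the talk's exact formulation. A single-layer network with $N$ hidden units is written as
\[
g^N_\theta(x) = \frac{1}{N}\sum_{i=1}^{N} c^i\,\sigma(w^i \cdot x),
\qquad
\mu^N_t = \frac{1}{N}\sum_{i=1}^{N} \delta_{\left(c^i_{\lfloor Nt \rfloor},\, w^i_{\lfloor Nt \rfloor}\right)},
\]
where the parameters $(c^i_k, w^i_k)$ evolve under stochastic gradient descent and $\mu^N_t$ is their empirical measure. Under the $1/N$ normalization, the law of large numbers asserts $\mu^N_t \to \bar{\mu}_t$, where the deterministic limit satisfies, in the weak sense, a nonlinear transport equation of McKean--Vlasov type,
\[
\frac{d}{dt}\,\langle f, \bar{\mu}_t \rangle
= \big\langle \nabla_{c,w} f \cdot b(\,\cdot\,;\bar{\mu}_t),\, \bar{\mu}_t \big\rangle
\quad \text{for smooth test functions } f,
\]
with a drift $b$ determined by the loss, the activation $\sigma$, and the data distribution. Under a different parameter normalization, the limit is instead characterized by a random ordinary differential equation.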