Seminar series
Date: Mon, 21 Feb 2022
Time: 14:00 - 15:00
Location: Virtual
Speaker: Anders Hansen
Organisation: University of Cambridge

The alchemists wanted to create gold, Hilbert wanted an algorithm to solve Diophantine equations, researchers want to make deep learning robust in AI, and MATLAB wants (but fails) to detect when it provides wrong solutions to linear programs. Why do so many of these fundamental endeavours not succeed? The reason is typically methodological barriers. The history of science is full of methodological barriers: reasons why certain goals can never be reached. In many cases, this is due to the foundations of mathematics. We will present a new program on methodological barriers and the foundations of mathematics, and in this talk we will focus on two basic problems. (1) The instability problem in deep learning: why do researchers fail to produce stable neural networks for basic classification and computer vision problems that humans handle easily, when one can prove that stable and accurate neural networks exist? Moreover, AI algorithms typically cannot detect when they are wrong, which becomes a serious issue when striving to create trustworthy AI. The problem is more general: MATLAB's linprog routine, for example, is incapable of certifying correct solutions of basic linear programs. This leads to the second question. (2) Why are algorithms (in AI and in computation in general) incapable of determining when they are wrong? These questions are deeply connected to the extended Smale's 9th and 18th problems on the list of mathematical problems for the 21st century.
