Date: Tue, 21 May 2019
Time: 14:30 - 15:00
Location: L5
Speaker: Giuseppe Ughi
Organisation: Oxford

Neural Network algorithms have achieved unprecedented performance in image recognition over the past decade. However, their application in real-world use-cases, such as self-driving cars, raises the question of whether it is safe to rely on them.

We generally associate the robustness of these algorithms with how easy it is to generate an adversarial example: a tiny perturbation of a correctly classified image that causes the Neural Net to misclassify it. Neural Nets are strongly susceptible to such adversarial examples, but when the architecture of the target network is unknown to the attacker, generating these examples efficiently becomes much harder.
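As a minimal sketch of this setting, the snippet below sets up a toy "black-box" classifier that the attacker can only query for class probabilities, and defines the quantity an attack tries to drive down: the model's confidence in the true class of the perturbed input. The linear classifier, its weights, and the function names are illustrative assumptions, not the speaker's setup.

```python
import numpy as np

# Toy stand-in for a black-box classifier: the attacker can query class
# probabilities but has no access to weights or gradients. The random
# weights below are purely illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))  # 3 classes, 4 "pixel" features

def classify(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()  # class probabilities

x_orig = rng.standard_normal(4)
true_class = int(np.argmax(classify(x_orig)))

def adversarial_loss(delta):
    # Probability assigned to the true class under the perturbed input.
    # An attack lowers this, using only queries to classify(), until
    # some other class overtakes it and the image is misclassified.
    return classify(x_orig + delta)[true_class]
```

Note that `adversarial_loss` is evaluated purely through model queries, which is what makes the black-box setting a derivative-free problem.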

In this Black-Box setting, we frame the generation of an adversarial example as an optimisation problem that can be solved via derivative-free optimisation methods. We introduce an algorithm based on the model-based BOBYQA method and compare it to the current state-of-the-art algorithms.
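To illustrate the derivative-free loop, here is a deliberately simple random-search sketch: it proposes bounded perturbations, queries the model for the loss value, and keeps improvements. This is not BOBYQA itself, which instead builds a local quadratic model of the loss inside a trust region, but the interface is the same: only function values, no gradients. All names and the toy classifier are assumptions for illustration.

```python
import numpy as np

# Same toy black-box classifier as above (self-contained copy).
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))

def classify(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

x_orig = rng.standard_normal(4)
true_class = int(np.argmax(classify(x_orig)))

def loss(delta):
    return classify(x_orig + delta)[true_class]

# Derivative-free loop: propose small perturbations, keep improvements.
# BOBYQA would instead interpolate queried values with a quadratic model
# and minimise it within a trust region, but it consumes the same
# information: loss values from black-box queries.
delta = np.zeros(4)
best = loss(delta)
for _ in range(500):
    cand = np.clip(delta + 0.1 * rng.standard_normal(4), -0.5, 0.5)
    val = loss(cand)
    if val < best:
        delta, best = cand, val
```

The bound enforced by `np.clip` plays the role of the constraint that the perturbation stay visually tiny; BOBYQA handles such box constraints natively.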

Last updated on 04 Apr 2022 14:57.