Can We Rely on AI?

Desmond John Higham1

1 University of Edinburgh, Edinburgh, UK

d.j.higham [at] ed.ac.uk

Abstract

Over the last decade, adversarial attack algorithms have revealed instabilities in deep learning tools. These algorithms raise issues regarding safety, reliability and interpretability in artificial intelligence (AI), especially in high-risk settings.

At the heart of this landscape are ideas from optimization, numerical analysis and high-dimensional stochastic analysis. From a practical perspective, there has been a war of escalation between those developing attack and defence strategies. At a more theoretical level, researchers have also studied bigger-picture questions concerning the existence and computability of successful attacks. We will present examples of attack algorithms in image classification and optical character recognition. We will also outline recent results on the overarching question of whether, under reasonable assumptions, it is inevitable that AI tools will be vulnerable to attack.

Keywords: stability, adversarial attack, regulation of AI