The AMLoE project investigates vulnerabilities of machine learning (ML) components when deployed on the edge and aims to produce new means of testing and improving their robustness without significantly compromising accuracy or performance.
Whilst the vulnerability of machine learning has been amply demonstrated, few studies address the challenges of adversarial attacks on ML components deployed at the edge. This is particularly concerning as the deployment of ML on the edge is rapidly increasing and there is much to gain from it. ML models deployed on the edge are compressed through pruning, quantisation or other techniques to cope with the limitations of edge devices. However, the effect of such techniques, which typically optimise for performance metrics, on the robustness of the algorithms to adversarial attacks has yet to be investigated.
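To make the compression techniques mentioned above concrete, the sketch below illustrates two common model-compression operations in plain NumPy: magnitude pruning (zeroing the smallest weights) and symmetric int8 quantisation. The function names, the 50% sparsity target, and the int8 scheme are illustrative assumptions, not methods or parameters taken from the AMLoE project itself.

```python
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude
    `sparsity` fraction of entries set to zero.
    (Illustrative sketch, not the project's method.)"""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned


def quantise_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantisation: map the weight range
    onto [-127, 127] with a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale


rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))

pruned = magnitude_prune(w, 0.5)      # half the weights are now exactly zero
q, scale = quantise_int8(w)           # 8-bit weights plus one float scale
reconstructed = q.astype(np.float32) * scale
```

Both operations shrink the model's memory and compute footprint, but each also perturbs the decision boundary, which is precisely why their interaction with adversarial robustness warrants study.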
This project brings together an academic team with experience in adversarial machine learning, and a set of user partners (DSTL, Thales, DataSpartan) with complementary expertise, to investigate the adversarial robustness of machine learning algorithms deployed on edge components, evaluate the effect of attacks that target edge components and design new methods to improve robustness.