RETCON explores creative approaches to understanding how AI at the edge can help bolster security, resilience, and privacy within IoT-rich environments by extending and supporting emerging kinds of red teams.
Both artificial intelligence and IoT introduce new challenges to the privacy, security, and resilience of connected environments. First, AI methods can be used to increase the precision and scale of attacks by automating aspects such as intelligence gathering, target selection, and attack execution. Second, IoT devices greatly increase the amount of data captured about people, which can, in turn, result in data leaks and significant privacy risks.
At the same time, far less has been explored about how AI techniques and IoT devices could be used to bolster and improve the privacy and security of individual users. RETCON explores this angle by taking up the metaphor of the “red team”: a team of experts brought in to proactively identify weaknesses in systems and organisations. RETCON explores the idea of new kinds of red teams enhanced by AI: “bot red teams”, in which AI systems embedded within IoT proactively serve a red-team role to increase resilience, and AI-supported red teams, in which AI methods support human red teams to increase their efficiency and effectiveness.
RETCON will design and prototype AI-enabled methods that play the role of the adversary to test and improve the resilience of various IoT-rich contexts, such as the smart home. Using deepfake technology, such AI might even ‘pretend to be human’ to test the resilience of individuals against phishing attacks. RETCON will also examine the challenges and potential of privacy-preserving AI methods in regulatory red teams, such as enabling ICO red teams to ascertain data protection compliance. RETCON will provide benefits to other projects, for instance by testing and improving the design patterns generated through the RIoTE project, and will work with other PETRAS projects, such as the UncanAI project (Lancaster).
Red team: A team that mounts a friendly attack on a system in order to test the defences of digital infrastructure. This methodology is used to strengthen defences against real-world attacks.
Deepfake: A piece of synthetic media, such as a video or audio recording, created by a machine learning algorithm that realistically impersonates or recreates the likeness of a person, using real data of that person as training data.