Uncanny AI (UnCanAI)


UnCanAI will consider how trust can be engendered in users when Artificial Intelligence (AI) agents are used to provide services directly to users. It will explore the notion of a Turing Red Flag Law, whereby autonomous systems would be required to be designed so as to prevent them from being mistaken for systems controlled by a human.

Whilst such a law may appear a rather blunt instrument, it highlights a need for research that challenges the prevalent assumption that ‘human-like’ interactions are always preferable, by exploring whether or not such interactions increase vulnerability to attacks such as phishing by creating unrealistic expectations of trust.

To enable a more nuanced approach than a Turing Red Flag, we will consider the notion of an ‘Uncanny Valley for AI’. Through the creation of a series of speculative design artefacts relating to a variety of contexts of use and potential user groups, the project will map how different degrees of ‘human-likeness’ affect users’ trust in autonomous agents.