Intelligible Cloud and Edge AI (ICE-AI)


ICE-AI investigates how AI systems can support human trust and ethically sensitive design. It does this by exploring how users' behaviours and perceptions develop around AI in the cloud and AI at the edge.

As AI technology becomes more pervasive, so too do issues around human trust in AI. Without ethically sensitive design, AI can introduce unintended biases, which can distort outcomes and erode human trust. This project examines social and conceptual understandings alongside the user experience of algorithmic systems in the cloud and at the edge.

ICE-AI draws on interviews and observations conducted with users, together with expert interviews and explorations of people and culture. Lab and user studies built around the cultural problems identified are also conducted. The team aim to develop and evaluate at least one prototype interface.