
Designing Legible AI


Designing Legible AI (markers for things): via (the new norm of) digital workshops 

The following article is a brief account of design research tackling the issue of legibility and transparency in Artificial Intelligence (AI). The account will demonstrate design’s ability to combine salient aspects of disparate disciplines into accessible, potential pathways towards legible AI. A vital part of design research is opening up the process to test, validate and disrupt the research through workshops. So, as well as publicising our research, we hope you will join us in co-designing legible AI at one of the digital workshops we detail towards the end of this article. 

Learning from other predicaments: the search for communicating a nuclear hazard  

Established in 1999 in the Chihuahuan Desert, New Mexico, the Waste Isolation Pilot Plant (WIPP) is the United States’ national underground radioactive waste repository. It is designed to seal, store and entomb nuclear waste in perpetuity. The sealed tomb will ultimately solidify into the earth’s crust over approximately the next 200,000 years. However, the level of safety at the WIPP is a constant concern for the American government. Concerns coalesce around two key issues: 1. the capabilities afforded by the physical structure of the plant (for instance, a leak in 2014 caused nuclear waste to be sprayed into the air); and 2. how the site is marked to legibly communicate its dangerous and precarious nature, and whether these markings will still be understood by everyone thousands of years into the future. This article focuses on the second issue: markers to communicate the obscurity of things. In the 1990s the US government began to respond to this predicament by inviting linguists, architects, artists, writers, astrophysicists and geologists to form a think tank to co-design a way of communicating a message of universal legibility to last at least 10,000 years. 

The think tank systematically considered different methods of communicating information. They looked at language, symbols and visual storytelling, and even considered fabricating an atmospheric notion of danger by erecting a “landscape of thorns” around the radioactive site (99pi, 2014). However, each proposition was thrown out on the grounds that the message was not precise enough or was unlikely to be legible in 10,000 years. For instance, language evolves expeditiously or can fall out of use entirely: the transition from Elizabethan English to Modern English took only around 500 years, and in some instances the languages of diminishing communities – be that due to conquest, empire or other reasons – go extinct. Symbols can also change meaning over time through appropriation or other means. To illustrate this, the skull and crossbones that signifies death to us, by way of stories of the Jolly Roger, once signified eternal life (Ferguson, 1954). Whether or not we understand them as originally intended, symbols, iconography and markers elicit attention and develop a working relationship, or pathway, for acquiring knowledge about the symbolised. However, where nuclear waste is concerned, inquisitiveness and attempting to find out more information is risky. Yet to make no attempt at a warning sign would be perilous. Thankfully, nuclear waste sites are sparsely distributed globally, and for now they are marked with the trefoil nuclear warning sign. However, it is thought that only 6% of the world recognises the meaning of the trefoil marker (Piesing, 2020). 

The predicament of Artificial Intelligence 

AI technology has, in recent years, become commonplace, and it is by its very nature obscure. Its presence can go undisclosed to users, and often the operation of the AI, and how data is handled, is hidden either intentionally or in the name of simplifying the function for the user. On the other hand, it has become fashionable to identify a product as AI as a unique selling point for consumers, under the guise of improved efficiency – described by researchers as AI Snake Oil: “the technologically advanced products and services that lead us to believe that the impossible is now possible” (Author Unknown, n.d.). Nevertheless, AI optimisations that we know to be real are often welcomed and sought after by users for the advertised ease they bring to everyday life, from Amazon’s Alexa reminding us of groceries to be replenished to Spotify’s ‘recommended for you’ functionality. To this end, AI has already been plugged into a myriad of applications, from parole to financial management, positioning algorithmic decision-making as an emerging governing power. The consequences of this technology for society are significant. As AI applications and developments surge, together with scandals and mishaps in our haste to use them (e.g. the 2020 UK A-Level results (Chowdhury, 2020)), designers and users alike are starting to question how much information about the inner workings of the AI, or the data captured, should be revealed and communicated to the user. Could users benefit from prior knowledge and legibility of the functionalities of an AI-infused product? If so, what kinds of measures would need to be taken to ensure AI legibility, and how might the presence of AI and its functionality be simply and accurately marked? 

Developing markers and iconography for Artificial Intelligence: our research 

In our research, we use the term legibility rather than transparency. This is an essential clarification, as the two terms are often used interchangeably yet describe subtly different things. Transparency is concerned with how open the data and algorithms are to outside scrutiny, so that decisions can be verified or challenged; in some cases, this openness extends to the whole design and development process. Legibility relates to how we make AI systems and their decisions understood by non-AI experts. This body of research is concerned with establishing the legibility of AI through markers and icons that communicate information about the AI in use – or, put simply, with empowering users to make their own value judgements through a signpost system that enables them to attain further information. 

To address the legibility of AI, we started with a survey of images representing AI. We found that while some represent the underlying system (e.g. a neural network) and some suggest what it is doing (e.g. face detection), the vast majority play into a definitional dualism, conflating AI either with contemporary advances in robotics or with the grand vision of sentient machines (Lindley et al., 2020). None of these typically associated icons suggests how the AI in use works, what it is doing, or why and for whom it is doing it. This highlights a lack of semantics or communication within the existing imagery of AI, suggesting the need to develop a visual language that would enhance AI legibility, or purely mark AI’s presence – a parallel to the problem of marking nuclear waste sites. 

As noted before, the timeframe for designing markers for things is an important consideration, which pointed towards taking inspiration from an already functional system of icons: clothes care labels. Whilst we may not always take notice of these symbols (though they are always present), or indeed always understand their meaning, they provide a means of understanding how we can most easily maintain a working relationship with our clothes, and they have stood the test of time since their first introduction in 1971. 

To prototype our initial designs for AI iconography, we used a Research through Design (RtD) methodology. A unique and powerful facet of RtD is that it is generative in nature (Gaver, 2012), enabling us to thread together various research ideas, theories and disciplines – the theories discussed here, AI functionality, human-computer interaction (HCI) and the semiotics of icon design – which, in a melting pot of design, directly informed the first iteration of AI iconography (Lindley et al., 2020). 

Next steps: Imagination Lancaster needs you!  

We are now at the stage of testing the rigour of our current AI iconography system, and the suitability of using icons at all, via a series of workshops. In the workshops, participants are introduced to the current icons, presented on cards which they are able to move and handle, making connections between different icons and descriptors. Participants are then tasked to translate, comment, speculate and finally co-design their own icons. Though, like the majority of things this year, the initial plan for the workshops had to be adapted to a digital platform to continue the research through the pandemic. 

Rather than sourcing an online tool to support what was designed as a face-to-face workshop, we instead developed an interactive workshop website to suit our research medium and recreate the game-like mechanics of thinking through moving the cards from one position to another, enabling participants to make sense of their meaning and establish connections between the inner workings of AI and their visual representations. The icons are accordingly presented as digital cards, and the tasks adapted so that participants can contribute online. 
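To give a flavour of the logic behind such a card-matching mechanic, the sketch below shows one way a dropped card could be matched to a descriptor zone. This is a minimal illustration only: the zone names, coordinates and hit-testing approach are our assumptions for the example, not the workshop website’s actual implementation.

```typescript
// A rectangular descriptor zone on the workshop canvas (illustrative).
interface Zone {
  id: string;
  x: number;
  y: number;
  width: number;
  height: number;
}

// Return the id of the zone containing the drop point, or null if the
// card was dropped outside every zone.
function matchCardToZone(dropX: number, dropY: number, zones: Zone[]): string | null {
  for (const zone of zones) {
    const inside =
      dropX >= zone.x && dropX <= zone.x + zone.width &&
      dropY >= zone.y && dropY <= zone.y + zone.height;
    if (inside) return zone.id;
  }
  return null;
}

// Example: two hypothetical descriptor zones side by side.
const zones: Zone[] = [
  { id: "machine-learning", x: 0, y: 0, width: 100, height: 100 },
  { id: "data-collection", x: 120, y: 0, width: 100, height: 100 },
];

console.log(matchCardToZone(50, 50, zones));  // "machine-learning"
console.log(matchCardToZone(110, 50, zones)); // null (dropped between zones)
```

In a real implementation this hit test would typically be driven by pointer or drag-and-drop events, with the matched pairing recorded as a participant response.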

By using iterative rounds of workshops as a tool, and as a platform for discourse questioning legibility within AI systems, we hope to create a concentrated set of icons that are holistically legible to intended users – icons that are legible by design. Or, simply defined, care labels for AI: signifying to the user that what once appeared impossible, or indistinguishable from magic, is very much possible through various AI functions, e.g. machine learning trained on specifically collected data. This gives users the opportunity to question or assess for themselves the AI technology they use, or the moments when a predictive output is applied to decision-making. 

We are currently looking for participants to take part in the series of workshops described in this article. We are keen to engage with a wide spectrum of participants, as most people in some way interact with AI technology; it is worth noting that a working knowledge of how AI works is not required. If you are interested in participating in the workshop and keeping up to date with the research, email f.pilling@lancaster.ac.uk or message us on Twitter @_FranPilling. 

Or sign up straight away with the event link below!  

https://www.eventbrite.co.uk/e/creating-legible-ai-tickets-117497896371?ref=estw 

Bibliography  

Author Unknown, n.d. A.I. Snake Oil. [online] Available at: <https://designinvestigations.at/2020/07/04/a-i-snake-oil/>. 

Chowdhury, H., 2020. The algorithm that has ruined the A-level results of thousands of students. The Telegraph. [online] Aug. Available at: <https://www.telegraph.co.uk/technology/2020/08/13/algorithm-has-ruined-a-level-results-thousands-students/> [Accessed 20 Aug. 2020]. 

Ferguson, G., 1954. Signs and Symbols in Christian Art. Oxford University Press. 

Gaver, W., 2012. What should we expect from research through design? In: Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems – CHI ’12. [online] Austin, Texas, USA: ACM Press, p.937. Available at: <http://dl.acm.org/citation.cfm?doid=2207676.2208538> [Accessed 13 May 2020]. 

Lindley, J., Akmal, H.A., Pilling, F. and Coulton, P., 2020. Researching AI Legibility through Design. CHI ’20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, p.13. 

Piesing, M., 2020. How to build a nuclear warning for 10,000 years’ time. BBC Future. [online] Available at: <https://www.bbc.com/future/article/20200731-how-to-build-a-nuclear-warning-for-10000-years-time> [Accessed 26 Aug. 2020]. 

99pi, 2014. Ten Thousand Years. [online] 99% Invisible. Available at: <https://99percentinvisible.org/episode/ten-thousand-years/> [Accessed 6 Jan. 2020]. 
