Blog: A 2-way Street of Explanatory AI

Explore considerations that shape explanation in human-AI teaming!


SCRUTINY

 

Explanatory AI (XAI) is vital for establishing trust among direct users, stakeholders, and society at large. There is a wealth of scrutiny now focused on inscrutable AI systems and the quality of any particular system’s source data, with third-party research and advocacy groups trying to peer into what is often called a “black box,” where it is too dark to see what goes on inside. Emerging legislation (for example in the EU, UK, and US) is also shining a spotlight on AI, although it will take some time to move from illuminating the outside of the box to actually having a clear view inside (see our earlier article on the EU’s Road to AI Legislation).


Growth in the academic literature is another indicator: searches indicate that 75% of the XAI articles in the ACM Digital Library, and just under 50% of those in arXiv, were published in the past two years. Academic focus on XAI has expanded dramatically, alongside public attention on AI trustworthiness and ethics.

 

WHAT DO PEOPLE EXPECT FROM AN “EXPLANATION”?

 

Current calls for explanation tend to focus on the need for AI systems to explain their sources, processing, assumptions, and outputs. In simple terms, an AI system’s “explanations” must turn complex math into human-understandable expressions (which may be textual, visual, numeric, pattern representations, spoken, etc.). Yet an AI system may need to explain more than its outputs. Ideally, an explanation would also cover the system’s interpretation of the user’s request, its level of confidence based on the model’s “fit” between that request and the available data, and any associated processing, interpretation, or limitations.

 

Useful explanations should provide transparency and intelligibility – increasing the recipient’s understanding. In the case of AI-supported search, this can include details about the underlying information space and sources, the model and scoring assumptions, and the “what, why and how” of the information retrieved. User confidence in an AI system may hinge on whether its explanations reinforce or undermine trust.
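To make this a little more concrete, here is a minimal sketch (in Python) of what a structured explanation for an AI-supported search result might carry: the system’s interpretation of the request, its confidence, the sources consulted, and any scoring assumptions or known limitations. The class and field names are hypothetical illustrations under these assumptions, not a prescribed design.

    # A minimal sketch of a structured explanation for an AI-supported search
    # result. All class and field names are hypothetical illustrations.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SourceNote:
        name: str          # e.g. a named corpus or repository
        authority: str     # e.g. "peer-reviewed", "user-generated"
        last_updated: str  # freshness of the underlying data

    @dataclass
    class SearchExplanation:
        interpreted_request: str   # how the system understood the query
        confidence: float          # model "fit" between request and data, 0..1
        sources: List[SourceNote] = field(default_factory=list)
        scoring_assumptions: List[str] = field(default_factory=list)
        known_limitations: List[str] = field(default_factory=list)

        def to_text(self) -> str:
            """Render the explanation as a short, human-readable summary."""
            lines = [
                f"I read your request as: {self.interpreted_request}",
                f"Confidence in that reading: {self.confidence:.0%}",
            ]
            lines += [f"Source: {s.name} ({s.authority}, updated {s.last_updated})"
                      for s in self.sources]
            lines += [f"Scoring assumption: {a}" for a in self.scoring_assumptions]
            lines += [f"Known limitation: {k}" for k in self.known_limitations]
            return "\n".join(lines)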

 

Yet there is more to explain within human-AI interactions. 

 

WHY IS EXPLANATORY AI A “2-WAY STREET”?

 

XAI descriptions often place the system in the role of explainer and the user in the role of recipient, but this view of the roles is incomplete. Communication is 2-way, with the user as an active participant in explanations: asking questions that prompt the need for explanation, interpreting and responding to the explanations received, or explaining the instructions given to the AI system.

 

Users also do the explaining: they elaborate on their contexts and intent, make decisions, and give further input about their external (non-system) actions in order to refine the ongoing search task.

 

AFTER AN EXPLANATION… NEXT STEPS

 

When designing for human-AI interactions, we need to anticipate the types of trajectories that could follow an explanation (see the sketch after this list), such as:

  • Challenging the explanation (and thus the information provided) 
  • Altering the nature of the search/request (refining or changing direction) 
  • Clarifying contexts (increasing or decreasing parameters within the search) 
  • Repeating/revisiting (pushing further into the information space of the initial search) 
  • Abandoning the request
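One way to anticipate these trajectories in the interaction design is to represent them explicitly so the system can route each one. The following is a hypothetical Python sketch; the names and the routing targets are illustrative assumptions only, not a reference implementation.

    # Hypothetical sketch: representing the trajectories a user may take after
    # receiving an explanation, so the interaction design can route each one.
    from enum import Enum, auto

    class PostExplanationAction(Enum):
        CHALLENGE = auto()  # dispute the explanation (and the information provided)
        ALTER = auto()      # refine or change the direction of the search/request
        CLARIFY = auto()    # add or remove contextual parameters
        REVISIT = auto()    # push further into the initial information space
        ABANDON = auto()    # drop the request entirely

    def next_step(action: PostExplanationAction) -> str:
        """Return an illustrative (placeholder) system response for each trajectory."""
        return {
            PostExplanationAction.CHALLENGE: "surface evidence and alternative results",
            PostExplanationAction.ALTER: "re-run ranking against the revised request",
            PostExplanationAction.CLARIFY: "repeat the search with updated parameters",
            PostExplanationAction.REVISIT: "expand results within the original scope",
            PostExplanationAction.ABANDON: "log the outcome for later evaluation",
        }[action]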

     

XAI FOR AI DESIGNERS

 

Who else has a need for clear explanations? Importantly, the designers and developers of an AI system need to routinely understand its performance and tune its behaviors to increase its capabilities and limit its potential harms. The need for XAI starts when a product is being designed and continues throughout its ongoing operation, because the people working on it need this information to make good decisions. AI is not like static applications, where design and development might “create it, release it, and just let it run” until the next release. AI tools need constant attention to make sure that, as the system continues to learn from user interaction and new data and encounters new situations, it continues to perform effectively and ethically.
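As a rough illustration of what that constant attention might look like in practice, the sketch below shows the kind of routine drift check a team could run as the system learns from new data. The metric names, baselines, and thresholds are placeholder assumptions, not recommended values.

    # Illustrative sketch of a routine health check a team might run as an AI
    # system keeps learning from new data. Metrics, baselines, and thresholds
    # are hypothetical placeholders, not recommended values.
    from typing import Dict

    BASELINE = {"accuracy": 0.91, "coverage": 0.88, "flagged_harm_rate": 0.002}
    TOLERANCE = {"accuracy": 0.03, "coverage": 0.05, "flagged_harm_rate": 0.001}

    def drift_report(current: Dict[str, float]) -> Dict[str, str]:
        """Compare current metrics to the baseline and flag notable drift."""
        report = {}
        for metric, baseline_value in BASELINE.items():
            delta = current.get(metric, baseline_value) - baseline_value
            drifted = abs(delta) > TOLERANCE[metric]
            report[metric] = f"{'DRIFT' if drifted else 'ok'} (delta {delta:+.3f})"
        return report

    # Example: a periodic check after the system has absorbed new interactions.
    print(drift_report({"accuracy": 0.86, "coverage": 0.90, "flagged_harm_rate": 0.004}))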

 

EXPLORING TYPES OF EXPLANATION

 

In future posts, we will explore various types of explanation and the conditions under which they may be needed by all parties in human-AI design and use.

 

Substantive: How does an explanation express the scope, volatility, and nature of changes in the information space? Is it feasible to provide explanatory statements about source authority, or even areas of uncertainty or bias? What visualizations and expressions can help users understand sparsity, or density, or semantic distance in the information they receive? Can that help users appreciate the range in scoring/ranking of information in results provided by an AI search system? When is degree of certainty a critical factor in an explanation? 

 

Contextual: Can an explanation simply and efficiently reflect back to a user the system’s recognition of user context(s)? Can it reflect understanding of urgency, types of tasks and goals, alignment with the experience level of the user, and how the roles of the users/recipients affect the character of the information that is discovered and presented? 

 

Cultural: How is cultural awareness reflected in the crafting of explanations? Even if the AI system is able to pick up cultural alignment cues within a user’s context, how does it align explanations with possible expectations of authority, deference, degrees of transparency, etc.? Do those aspects affect an understanding of the user’s intent? How are they interpreted, in ways that could influence trust?

 

Relational: In what way can or should previous interactions between the user and the system be reflected in the style of explanations? Does this affect familiarity, consistency of interactions, and that nuanced and important quality, “trustworthiness”?

 

Evolving: There are many things that need to be monitored as part of ongoing development, testing, classification, data refinement, and use. What variations arise in the characteristics of the information space as data and content are refreshed or added, and what effect might that have on accuracy or reliability? What is the effect of changes in the feature profile, or of emergent properties that may need additional scrutiny? Are competing models or structures forming within the AI system? And how are traceability and particular areas of focus reported?


Plenty of questions to explore in future posts. Your feedback is welcome. 
