Blog: What if AI says “I don’t know”?

Should we assume AI always has an answer to whatever we ask? Information can be ambiguous or missing, and “I don’t know” (from AI or human) can be the honest answer. AI needs to be trained to recognize uncertainty, assess its causes, and effectively support human-AI exploration.


LACK OF AN ANSWER STARTS A CONVERSATION

We all say it in conversation on a regular basis: “I don’t know.” We might say that because we don’t have enough information to answer the question. Maybe that information is unavailable. Or, we might say “I don’t know” because we can see several possible answers to the question and we are feeling uncertain, or want to be careful. We might say it because the question that was asked was ambiguous, and we need to follow up with more questions of our own in order to clarify the original question before answering. 

In traditional search engines, there really isn’t any concept of returning an “I don’t know” response. Sometimes a traditional search will return “0 results,” meaning that it failed to match your query terms, and other times it may return a list of results that just don’t make sense, because they are not relevant. To get past the implied “I don’t know,” a frustrated user has to think of another way to ask the question, and reformulate their search query to try again. 

Yet in human-AI communication, people often assume AI should have an answer to whatever they ask. When AI involves natural language, there is often an assumption that there will be a precise answer, or at worst a hallucination from a current large language model, which is still delivered as a statement. The declarative language style often seen in ChatGPT, for example, delivers responses with words that signal certainty.

Designing for “I don’t know” is actually a key to success for next-generation AI. As we move from traditional search to more dynamic, real-time communication with AI-driven search, we need to move beyond static, prescriptive interactions toward AI systems that are trained to recognize uncertainty, assess its causes, and effectively support dynamic exchanges.

WHAT HAPPENS IF AI HAS TROUBLE CHOOSING AN ANSWER?

It’s normal for AI to encounter situations where there are multiple possible answers (a rough sketch of how a system might handle them follows this list). For example:

  • The AI determines that there is emerging information, or a change in perspective, within the available source information. It can notify the user of this emerging variance within their area of interest, and then iterate explanations with the user to establish the relevance of that emerging information.
  • The AI identifies “noise” within the retrieved information, which results in low confidence. It can respond by asking the user to assess potentially higher-value areas of focus (e.g. by ranking parameter importance, or by reframing the request using terms suggested by the AI system).
  • The AI identifies possible model drift, or information outside the scope of its training domain, that is causing misalignment. This may result in referring the user to other sources, or in delaying the response and escalating the request.
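
To make this concrete, here is a minimal sketch, assuming a hypothetical Retrieval summary object and illustrative thresholds (none of this describes Oxide’s implementation), of how a system might classify the likely cause of its uncertainty and pick a conversational strategy instead of forcing an answer:

```python
from dataclasses import dataclass

@dataclass
class Retrieval:
    """Hypothetical summary statistics for a set of retrieved candidates."""
    top_score: float        # best relevance score in [0, 1]
    score_spread: float     # variance across candidate scores
    in_domain: bool         # whether the query falls inside the training domain
    recency_conflict: bool  # newer sources disagree with older ones

def uncertainty_strategy(r: Retrieval) -> str:
    """Map the likely cause of uncertainty to a conversational next step."""
    if not r.in_domain:
        # Possible model drift or out-of-scope request: don't guess.
        return "refer_or_escalate"
    if r.recency_conflict:
        # Emerging information or changing perspectives: surface the variance.
        return "explain_emerging_variance"
    if r.top_score < 0.4 or r.score_spread > 0.2:
        # Noisy, low-confidence retrieval: ask the user to refocus.
        return "ask_user_to_rank_or_reframe"
    return "answer_with_confidence_note"

# Example: noisy retrieval triggers a clarifying exchange rather than an answer.
print(uncertainty_strategy(Retrieval(0.35, 0.25, True, False)))
# -> ask_user_to_rank_or_reframe
```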

 

WHAT HAPPENS IF AI DOESN’T HAVE ENOUGH INFORMATION?

AI doesn’t generally suffer from a lack of information in the same way that humans can, but it might lack enough information to interpret a user’s request. Insufficient specificity or clarity in that request can lead to very little information being returned, or a very diffuse spread of information (hence the ambiguity in the response). Options for the AI to address this ambiguity include the following (a rough sketch follows the list):

  • Requesting a reframing of the query from the user, including asking for more specific contextual details that help target the query within the information space. 
  • Clarifying contradictory information within the user’s request.
  • Providing an explanation of the profile of the overall information space, its categories and specialties, to help the user orient to what is available. 
  • Reflecting back to the user a summary of what was requested, and a comparison with related information that is available, to help the user reframe their request.
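
As a rough illustration, assuming a hypothetical hit count and a simple topic-spread measure rather than any specific product’s internals, an AI search layer might detect an ambiguous request and respond with a clarifying question instead of a result list:

```python
from typing import Optional

def clarification_prompt(num_hits: int, topic_spread: float) -> Optional[str]:
    """Return a clarifying question when results are too sparse or too diffuse.

    num_hits: number of documents matching the interpreted request.
    topic_spread: 0.0 = results cluster on one topic, 1.0 = results are scattered.
    (Both thresholds below are illustrative.)
    """
    if num_hits == 0:
        return ("I couldn't find anything for that request. "
                "Could you add more specific context, such as a time period or domain?")
    if topic_spread > 0.8:
        return ("Your request matches several unrelated areas. "
                "Could you tell me which of them is closest to what you mean?")
    return None  # the request is specific enough to answer directly

prompt = clarification_prompt(num_hits=412, topic_spread=0.91)
if prompt:
    print(prompt)  # the system asks a question instead of guessing
```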


Uncertainty can actually lead to expanding and improving communication between the user and the AI system. Internal AI models focused on communication can provide clues for clarifying the user’s needs within an information space. With AI at work in a search system, the burden is no longer solely on the user to reformulate their query, and doing so does not need to be frustrating. Explanatory AI can ask questions as part of its explanations, balancing statements that help a person understand the information space (and its limitations) with questions that align the person’s needs to the available information.

 

DESIGNING FOR “I DON’T KNOW”

When “I don’t know” appears in cycles of testing and validation, either for new code or new information sources/training, what challenges does that raise for AI designers and developers?  

This is where statistical analysis, visualization, and assessments of noise in source data come into their own. A clearly focused suite of tests that address sparsity, information volatility, and conflicting models within the AI ecosystem can surface likely problems. This allows further training on pattern recognition and mitigation in the feedback loops and conversations with users. 
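
A minimal sketch of the kind of check such a suite might run, assuming hypothetical corpus metrics and illustrative thresholds:

```python
def run_ignorance_checks(metrics: dict) -> list:
    """Flag conditions likely to produce frequent "I don't know" responses.

    `metrics` is assumed to contain:
      sparsity           - share of query topics with little or no coverage
      volatility         - share of documents changed since the last refresh
      model_disagreement - rate at which candidate models rank results differently
    All thresholds below are illustrative.
    """
    warnings = []
    if metrics["sparsity"] > 0.15:
        warnings.append("many sparsely covered topics: expect low-confidence answers")
    if metrics["volatility"] > 0.30:
        warnings.append("volatile information space: retrain or add caveats more often")
    if metrics["model_disagreement"] > 0.10:
        warnings.append("competing models disagree: review before release")
    return warnings

print(run_ignorance_checks(
    {"sparsity": 0.22, "volatility": 0.10, "model_disagreement": 0.04}
))
# -> ['many sparsely covered topics: expect low-confidence answers']
```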

There is also an implication here for the training of effective AI systems. Training models are needed for the AI engine to identify its “I don’t know” situations and their potential causes. Communication and interaction models then guide the system through the resulting dialogue with the user.


Blog: A 2-way Street of Explanatory AI

Are AI systems inherently explainable, or inexplicable? Different types of systems and uses of AI call for different explanation capabilities, and pose different challenges to get there. This post explores considerations that shape explanation in human-AI teaming.


SCRUTINY

 

Explanatory AI (XAI) is vital for establishing trust among direct users, stakeholders, and society at large. There is a wealth of scrutiny now focused on inscrutable AI systems and the quality of any particular system’s source data, with third-party research and advocacy groups trying to peer into what is often called a “black box,” where it is too dark to see what goes on inside. Emerging legislation (for example in the EU, UK, and US) is also shining a spotlight on AI, although it will take some time to move from illuminating the outside of the box to actually having a clear view inside (see our earlier article on the EU’s Road to AI Legislation).


An increase in academic literature is another indicator: searches indicate that 75% of the XAI articles in the ACM Digital Library, and just under 50% of those in arXiv, were published in the past two years. Academic focus on XAI has expanded dramatically, alongside public attention on AI trustworthiness and ethics.

 

WHAT DO PEOPLE EXPECT FROM AN “EXPLANATION”?

 

Current calls for explanation tend to focus on the need for AI systems to explain sources, processing, assumptions, and outputs. In simple terms, an AI system’s “explanations” must turn complex math into human-understandable expressions (which may be textual, visual, numeric, pattern-based, spoken, etc.). Yet an AI system may also need to explain more than its outputs. Ideally, it could also explain its interpretation of user requests, express levels of confidence based on the model “fit” between a request and the available data, and describe any associated processing, interpretation, or limitations.

 

Useful explanations should provide transparency and intelligibility, increasing the recipient’s understanding. In the case of AI-supported search, this can include details about the underlying information space and sources, model and scoring assumptions, and the what, why, and how of the information that is retrieved. User confidence in an AI system may hinge on how an explanation reinforces or undermines trust.
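
As an illustration only (not a description of Oxide’s data model; the field names are assumptions), an AI-supported search system might attach a structured explanation object to each response, which the interface can then render as text, visuals, or speech:

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """Hypothetical explanation payload accompanying a search response."""
    interpreted_request: str        # how the system understood the query
    sources: list                   # where the information came from
    scoring_assumptions: list       # model and ranking assumptions applied
    confidence: float               # model "fit" between request and data, 0-1
    limitations: list = field(default_factory=list)  # known caveats

    def as_text(self) -> str:
        """Render a short human-readable explanation."""
        caveat = f" Caveats: {'; '.join(self.limitations)}." if self.limitations else ""
        return (f"I read your request as: {self.interpreted_request}. "
                f"Answer drawn from {len(self.sources)} source(s) "
                f"with confidence {self.confidence:.0%}.{caveat}")

ex = Explanation(
    interpreted_request="recent EU legislation affecting AI transparency",
    sources=["EU AI Act draft", "Commission press releases"],
    scoring_assumptions=["recency weighted above citation count"],
    confidence=0.62,
    limitations=["coverage of national implementations is sparse"],
)
print(ex.as_text())
```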

 

Yet there is more to explain within human-AI interactions. 

 

WHY IS EXPLANATORY AI A “2-WAY STREET”?

 

XAI descriptions often place the system in the role of explainer, and the user in the role of recipient, but this idea of roles is incomplete. Communication is 2-way, with a user as an active participant in explanations, whether asking questions that prompt the need for explanation, interpreting and responding to explanations that are received, or explaining instructions given to the AI system.

 

Users also do the explaining by providing elaboration about their contexts and intent, making decisions, and giving further input about their external (non-system) actions, in order to refine the ongoing search task.  

 

AFTER AN EXPLANATION… NEXT STEPS

 

When designing for human-AI interactions, we need to anticipate the types of trajectories that could follow an explanation, such as the following (a small sketch of routing these trajectories appears after the list):

  • Challenging the explanation (and thus the information provided) 
  • Altering the nature of the search/request (refining or changing direction) 
  • Clarifying contexts (increasing or decreasing parameters within the search) 
  • Repeating/revisiting (pushing further into the information space of the initial search) 
  • Abandoning the request
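
A minimal sketch, with illustrative enum values and responses, of how a dialogue layer might represent these trajectories and route each one to a next conversational move:

```python
from enum import Enum, auto

class Trajectory(Enum):
    """Possible user moves after receiving an explanation."""
    CHALLENGE = auto()   # dispute the explanation or the information behind it
    ALTER = auto()       # refine or redirect the search/request
    CLARIFY = auto()     # add or remove contextual parameters
    REVISIT = auto()     # push deeper into the same information space
    ABANDON = auto()     # drop the request entirely

def route(trajectory: Trajectory) -> str:
    """Pick the system's next conversational move for each trajectory."""
    return {
        Trajectory.CHALLENGE: "show sources and scoring rationale",
        Trajectory.ALTER: "re-run retrieval with the updated request",
        Trajectory.CLARIFY: "update the stored user context, then re-rank",
        Trajectory.REVISIT: "expand results within the current scope",
        Trajectory.ABANDON: "log the failed exchange for designer review",
    }[trajectory]

print(route(Trajectory.CHALLENGE))
```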

     

XAI FOR AI DESIGNERS

 

Who else has a need for clear explanations? Importantly, the designers and developers of an AI system need to routinely understand its performance and tune its behaviors to increase its capabilities and limit its potential harms. The need for XAI starts when a product is being designed and continues throughout its operation, because the people working on the system need this information to make good decisions. AI is not like static applications, where design and development might “create it, release it, and just let it run” until the next release. AI tools need constant attention to make sure that, as the system continues to learn from user interaction and new data and encounters new situations, it keeps performing effectively and ethically.
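
As a rough sketch of what that ongoing attention could look like (the event fields and metrics are assumptions, not a description of any particular pipeline), a team might log each human-AI exchange and review aggregate signals on a regular cadence:

```python
from collections import Counter

def review_explanation_log(events: list) -> dict:
    """Summarise logged human-AI exchanges for designer review.

    Each event is assumed to record the user's post-explanation trajectory
    (e.g. 'challenge', 'abandon') and the system's stated confidence.
    """
    trajectories = Counter(e["trajectory"] for e in events)
    low_conf = sum(1 for e in events if e["confidence"] < 0.5)
    return {
        "exchanges": len(events),
        "abandon_rate": trajectories["abandon"] / max(len(events), 1),
        "challenge_rate": trajectories["challenge"] / max(len(events), 1),
        "low_confidence_share": low_conf / max(len(events), 1),
    }

log = [
    {"trajectory": "challenge", "confidence": 0.41},
    {"trajectory": "alter", "confidence": 0.74},
    {"trajectory": "abandon", "confidence": 0.32},
]
print(review_explanation_log(log))
# Rising abandon or low-confidence shares would prompt retraining or redesign.
```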

 

EXPLORING TYPES OF EXPLANATION

 

In future posts, we can explore various types of explanation, and the conditions where they may be needed by all the parties in human-AI design and use.

 

Substantive: How does an explanation express the scope, volatility, and nature of changes in the information space? Is it feasible to provide explanatory statements about source authority, or even areas of uncertainty or bias? What visualizations and expressions can help users understand sparsity, or density, or semantic distance in the information they receive? Can that help users appreciate the range in scoring/ranking of information in results provided by an AI search system? When is degree of certainty a critical factor in an explanation? 

 

Contextual: Can an explanation simply and efficiently reflect back to a user the system’s recognition of user context(s)? Can it reflect understanding of urgency, types of tasks and goals, alignment with the experience level of the user, and how the roles of the users/recipients affect the character of the information that is discovered and presented? 

 

Cultural: In the crafting of explanations, how is cultural awareness reflected? Even if the AI system is able to pick up cultural alignment clues within a user’s context, how does it align explanations with possible expectations of authority, deference, degrees of transparency, etc.? Do those aspects affect an understanding of the user intent? How are they interpreted, in ways that could influence trust?

 

Relational: In what way can/should previous interactions between the user and the system be reflected in the style of explanations? Does this affect familiarity, consistency of interactions, and the nuanced and important “trustworthiness”?

 

Evolving: There are many things that need to be monitored as part of ongoing development testing, classification, data refinement, and use. What variations arise in the characteristics of the information space as data and content are refreshed or added, and what effect might that have on accuracy or reliability? What is the effect of changes in the feature profile, as well as emergent properties that may need additional scrutiny? Are competing models or structures being formed within the AI system? And how are traceability, and particular areas of focus, reported?


Plenty of questions to explore in future posts. Your feedback is welcome. 
