Blog: What if AI says “I don’t know”?

Information can be ambiguous or missing, and “I don’t know” (from AI or human) can be the honest answer.


LACK OF AN ANSWER STARTS A CONVERSATION

We all say it in conversation on a regular basis: “I don’t know.” We might say that because we don’t have enough information to answer the question. Maybe that information is unavailable. Or, we might say “I don’t know” because we can see several possible answers to the question and we are feeling uncertain, or want to be careful. We might say it because the question that was asked was ambiguous, and we need to follow up with more questions of our own in order to clarify the original question before answering. 

In traditional search engines, there really isn’t any concept of returning an “I don’t know” response. Sometimes a traditional search will return “0 results,” meaning that it failed to match your query terms, and other times it may return a list of results that just don’t make sense, because they are not relevant. To get past the implied “I don’t know,” a frustrated user has to think of another way to ask the question, and reformulate their search query to try again. 

Yet in human-AI communication, people often assume the AI should have an answer to whatever they ask. When AI involves natural language, there is often an assumption that there will be a precise answer. Or, with current large language models, perhaps a hallucination; but even that is still something, a statement. The declarative language style often seen in ChatGPT, for example, delivers responses with words that signal certainty.

Designing for “I don’t know” is actually a key to success for next-generation AI. As we move from traditional search to more dynamic, real-time communication with AI-driven search, we need to move beyond static, prescriptive interactions toward AI systems that are trained to recognize uncertainty, assess its causes, and effectively support dynamic exchanges.

WHAT HAPPENS IF AI HAS TROUBLE CHOOSING AN ANSWER?

It’s normal for AI to encounter situations where there are multiple possible answers. For example: 

  • The AI determines that there is emerging information, or a change in perspective, within the available source information. It can notify the user of this emerging variance within their area of interest, and then iterate through explanations with the user to establish the relevance of that emerging information.
  • The AI identifies “noise” within the retrieved information, which results in low confidence. It can respond by asking the user to point out potential higher-value areas of focus (e.g. having the user rank parameter importance, or reframe the request using particular terms suggested by the AI system).
  • The AI identifies possible model drift, or information outside the scope of its training domain, that is causing misalignment. This may result in referring the user to other sources, or in delaying the response and escalating the request.
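One way to make these branches concrete is to classify the likely cause of uncertainty from simple retrieval signals and map each cause to a follow-up action rather than a forced answer. The sketch below is illustrative only; the signal names, thresholds, and action labels are assumptions, not features of any particular system.

```python
from dataclasses import dataclass

@dataclass
class RetrievalSignals:
    """Illustrative signals an AI search system might compute per query."""
    top_score: float      # best relevance score among retrieved items (0..1)
    score_spread: float   # variance of scores; a high spread suggests "noise"
    recency_shift: float  # how much recent sources diverge from older ones (0..1)
    in_domain: bool       # whether the query falls inside the training domain

def choose_follow_up(signals: RetrievalSignals) -> str:
    """Map a diagnosed cause of uncertainty to a next step in the conversation."""
    if not signals.in_domain:
        # Possible model drift or out-of-scope request: refer or escalate.
        return "refer_or_escalate"
    if signals.recency_shift > 0.5:
        # Emerging information or shifting perspectives: surface the change.
        return "explain_emerging_information"
    if signals.top_score < 0.4 or signals.score_spread > 0.3:
        # Noisy, low-confidence retrieval: ask the user to rank what matters.
        return "ask_user_to_rank_parameters"
    return "answer_with_confidence_note"

# Example: a noisy result set triggers a clarifying exchange, not a guess.
print(choose_follow_up(RetrievalSignals(0.35, 0.4, 0.1, True)))
```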

 

WHAT HAPPENS IF AI DOESN’T HAVE ENOUGH INFORMATION?

AI doesn’t generally suffer from a lack of information in the same way that humans can, but it might lack enough information to interpret the user’s request. Insufficient specificity or clarity in that request can lead to very little information being returned, or to a very diffuse spread of information (hence the ambiguity in the response). Options for the AI to address this ambiguity include:

  • Requesting a reframing of the query from the user, including asking for more specific contextual details that help target the query within the information space. 
  • Clarifying contradictory information within the user’s request.
  • Providing an explanation of the profile of the overall information space, its categories and specialties, to help the user orient to what is available. 
  • Reflecting back to the user a summary of what was requested, and a comparison with related information that is available, to help the user reframe their request.
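As a rough sketch of how such a clarification exchange could be structured, the loop below reflects a summary of the request back to the user and invites refinement until retrieval is specific enough. The function names, the placeholder retrieval call, and the “specific enough” threshold are all hypothetical.

```python
def summarize_request(query: str) -> str:
    """Placeholder: restate the request in the system's own terms."""
    return f"You asked about: {query}"

def retrieve(query: str) -> list[str]:
    """Placeholder for the actual retrieval call."""
    return []  # imagine matching documents returned here

def clarification_loop(query: str, ask_user, max_turns: int = 3) -> list[str]:
    """Iteratively reflect the request back and invite the user to refine it."""
    for _ in range(max_turns):
        results = retrieve(query)
        if len(results) >= 3:  # arbitrary "specific enough" threshold
            return results
        # Reflect back what was understood and ask a targeted follow-up
        # question instead of silently returning zero or irrelevant results.
        prompt = (
            f"{summarize_request(query)}\n"
            "I found little that matches. Could you add a timeframe, "
            "a domain, or an example of what you expect?"
        )
        query = ask_user(prompt)
    return retrieve(query)  # best effort after the allotted turns

# Usage (interactive): clarification_loop("recent findings", ask_user=input)
```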


Uncertainty can actually lead to expanding and improving communication between the user and the AI system. Internal AI models focused on communication can provide clues for clarifying the user’s needs within an information space. With AI at work in a search system, the burden is no longer solely on the user to reformulate their query, and doing so does not need to be frustrating. Explanatory AI can ask questions as part of its explanations, balancing statements that help a person understand the information space (and its limitations) with questions that align the person’s needs to the available information.

 

DESIGNING FOR “I DON’T KNOW”

When “I don’t know” appears in cycles of testing and validation, either for new code or new information sources/training, what challenges does that raise for AI designers and developers?  

This is where statistical analysis, visualization, and assessments of noise in source data come into their own. A clearly focused suite of tests that addresses sparsity, information volatility, and conflicting models within the AI ecosystem can surface likely problems. Surfacing those problems early allows further training on recognizing and mitigating them within the feedback loops and conversations with users.
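A minimal version of such a test suite might compute per-topic metrics for sparsity, volatility, and cross-source conflict, then flag the areas most likely to produce “I don’t know.” The metrics and cut-offs below are illustrative assumptions, not standard measures.

```python
import statistics

def sparsity(doc_counts: list[int]) -> float:
    """Fraction of topics with fewer than a handful of supporting documents."""
    return sum(1 for c in doc_counts if c < 5) / len(doc_counts)

def volatility(scores_over_time: list[float]) -> float:
    """Spread of relevance scores across snapshots; high values mean churn."""
    return statistics.pstdev(scores_over_time)

def conflict_rate(labels_per_source: list[str]) -> float:
    """Share of sources that disagree with the majority answer for a topic."""
    majority = max(set(labels_per_source), key=labels_per_source.count)
    return 1 - labels_per_source.count(majority) / len(labels_per_source)

# Example checks that could run in a validation cycle (thresholds are guesses).
assert sparsity([12, 3, 0, 8]) <= 0.5, "too many sparse topics"
assert volatility([0.80, 0.78, 0.81]) <= 0.1, "information space is volatile"
assert conflict_rate(["yes", "yes", "no"]) <= 0.4, "sources conflict heavily"
```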

There is also an implication here for the training of effective AI systems. Training models are needed for the AI engine to identify its “I don’t know” situations and their potential causes. Then communication and interaction models guide the system through the resulting exchange with the user.
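One way to read that implication in code is as two separable components: a model that labels the “I don’t know” situation and its likely cause, and an interaction model that turns that label into the next message to the user. This is only a structural sketch; the class and method names are hypothetical.

```python
from typing import Protocol

class UncertaintyModel(Protocol):
    def diagnose(self, query: str, evidence: list[str]) -> str:
        """Return a cause label such as 'sparse', 'noisy', or 'out_of_domain'."""
        ...

class InteractionModel(Protocol):
    def next_message(self, cause: str, query: str) -> str:
        """Turn the diagnosed cause into an explanation or a question."""
        ...

def respond(query: str, evidence: list[str],
            detector: UncertaintyModel, dialogue: InteractionModel) -> str:
    """Pipeline: diagnose why an answer is uncertain, then decide what to say."""
    cause = detector.diagnose(query, evidence)
    return dialogue.next_message(cause, query)
```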
