Blog: What if AI says “I don’t know”?

Should we assume AI always has an answer to whatever we ask? Information can be ambiguous or missing, and “I don’t know” — from an AI or a human — can be the honest answer. AI needs to be trained to recognize uncertainty, assess its causes, and effectively support human-AI exploration.

Blog: A 2-way Street of Explanatory AI

Are AI systems inherently explainable, or inexplicable? Different types of systems and uses of AI call for different explanation capabilities — and pose different challenges in getting there. This post explores the considerations that shape explanation in human-AI teaming.

Blog: EU’s Road to AI Legislation

The AI Act proposed by the European Commission places a substantial regulatory burden on AI systems classified as high-risk. The EU’s road to AI legislation appears to center on increased transparency of AI systems.