GLANCE: Global Actions in a Nutshell for Counterfactual Explainability

Publication
The Fortieth AAAI Conference on Artificial Intelligence (AAAI-26)

We propose a method for global explainability of black-box models using counterfactual explanations. A counterfactual explanation locally explains an outcome by providing the minimal changes necessary to reverse it, e.g., “if you had five more years of experience, your job application would have been accepted”. We develop a method, termed GLANCE, that summarizes all counterfactual explanations for a given model.
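To make the local notion concrete, here is a minimal sketch of computing a counterfactual explanation for a hypothetical linear classifier (the model, feature names, and closed-form projection are illustrative assumptions, not part of GLANCE):

```python
import numpy as np

# Hypothetical linear screening model: accepted iff w.x + b >= 0.
# Features: [years_of_experience, skill_score] (assumed for illustration).
w = np.array([1.0, 0.5])
b = -6.0

def predict(x):
    return w @ x + b >= 0  # True = accepted

def counterfactual(x):
    """Minimal (L2-norm) change that reverses a negative outcome.
    For a linear model this is the projection of x onto the decision
    boundary, plus a tiny step to land on the positive side."""
    margin = w @ x + b              # negative for rejected instances
    delta = (-margin / (w @ w) + 1e-6) * w
    return x + delta

x = np.array([1.0, 2.0])            # a rejected applicant
x_cf = counterfactual(x)
print(predict(x), predict(x_cf))    # rejected before, accepted after
print(np.round(x_cf - x, 2))        # the minimal action, e.g. "gain ~3.2 years"
```

For non-linear black-box models no closed form exists, and counterfactuals are instead found by search or optimization over the feature space.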

Three actions that summarize all counterfactual explanations.

Solving this global version of counterfactual explainability is different from finding the local counterfactual explanations and simply picking among them.

A toy example depicting two negative instances x1, x2, and five actions. (a) The feature space; the line is the decision boundary. (b) The action space; l1, l2 depict the decision boundary from the perspective of x1, x2, respectively.
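The toy example above can be mimicked with a simple greedy sketch: pick a small set of actions that, applied globally, flip as many negative instances as possible. This is only an illustration of the global summarization task under assumed toy data and a simple coverage objective, not the actual GLANCE algorithm:

```python
import numpy as np

def greedy_summary(instances, actions, predict, k=3):
    """Greedily choose up to k actions, each time taking the action
    that flips the most still-uncovered negative instances."""
    chosen = []
    uncovered = set(range(len(instances)))
    for _ in range(k):
        best, best_flipped = None, set()
        for a in actions:
            flipped = {i for i in uncovered if predict(instances[i] + a)}
            if len(flipped) > len(best_flipped):
                best, best_flipped = a, flipped
        if not best_flipped:
            break
        chosen.append(best)
        uncovered -= best_flipped
    return chosen

# Toy model (assumed): accepted iff x1 + x2 >= 5.
predict = lambda x: x[0] + x[1] >= 5
instances = [np.array([1.0, 2.0]), np.array([0.0, 1.0])]   # two negatives
actions = [np.array([1.0, 0.0]), np.array([2.0, 2.0]), np.array([4.0, 4.0])]
print(greedy_summary(instances, actions, predict, k=2))    # one action covers both
```

The real problem is harder than this sketch suggests: good global actions must trade off coverage against cost, and, as the figure shows, an action that is optimal for one instance may be useless for another.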