We build generative models by learning latent causal models from data observed across multiple domains in order to generate domain counterfactuals, and we further characterize the equivalence classes of such latent causal models.
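As a minimal illustration of the encode-then-re-decode pipeline behind domain counterfactuals, the sketch below uses fixed random invertible linear maps as stand-ins for the learned per-domain generative models; all names and the toy setup are illustrative assumptions, not the actual method or models from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy invertible "decoders": one linear map per domain, sharing a latent space.
# In the real setting these would be learned generative models; here they are
# fixed random invertible matrices purely for illustration.
latent_dim = 4
decoders = {d: rng.normal(size=(latent_dim, latent_dim))
            for d in ("source", "target")}

def encode(x, domain):
    """Invert the domain's decoder to recover the shared latent code."""
    return np.linalg.solve(decoders[domain], x)

def decode(z, domain):
    """Map a latent code into the given domain's observation space."""
    return decoders[domain] @ z

# Domain counterfactual: infer the latent cause of a source observation,
# then re-render it as if it had been generated in the target domain.
x_source = decode(rng.normal(size=latent_dim), "source")
z = encode(x_source, "source")
x_counterfactual = decode(z, "target")
print(x_counterfactual)
```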
We answer the question "What is a distribution shift explanation?" and introduce a novel framework for explaining distribution shifts via transport maps from a source distribution to a target distribution, where the maps are either inherently interpretable or interpreted using post-hoc interpretability methods.
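To make the transport-map view concrete, here is a small sketch under assumed synthetic data: discrete optimal transport between two equal-sized samples reduces to an assignment problem, and the per-feature displacement of the matched pairs serves as a directly interpretable summary of the shift. This is a simplified instance of the idea, not the framework itself.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Hypothetical source/target samples: the target is the source shifted in
# feature 1 only, so a good explanation should point at that feature.
n, d = 200, 3
source = rng.normal(size=(n, d))
target = rng.normal(size=(n, d))
target[:, 1] += 2.0

# Discrete optimal transport between equal-sized samples reduces to an
# assignment problem under squared-Euclidean cost.
cost = cdist(source, target, metric="sqeuclidean")
row, col = linear_sum_assignment(cost)

# The matched pairs define a transport map; its per-feature displacement is a
# simple, directly interpretable explanation of the shift.
displacement = target[col] - source[row]
print("mean per-feature movement:", displacement.mean(axis=0).round(2))
```

Running this prints a mean movement of roughly 2.0 on feature 1 and near zero elsewhere, i.e., the map "explains" the shift as a translation along that feature.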
We introduce a large-scale, easy-to-use spatial reasoning dataset containing 3.6 million images, each summarizing 10 seconds of human-played matches from the StarCraft II video game.
Focusing on distribution shifts pertaining to images, we use interpretable transport maps between the latent image spaces of a source and a target distribution to explain how to align the source with the target distribution.
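The sketch below extends the earlier assignment-based transport to a latent space, with PCA standing in for the (here hypothetical) pretrained autoencoder and random vectors standing in for flattened images; it only illustrates the encode, transport, decode structure, not the actual models used in this work.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Stand-in "images": flattened arrays. PCA plays the role of the pretrained
# autoencoder whose latent space the real pipeline would transport in.
n, pixels, latent_dim = 100, 64, 8
source_imgs = rng.normal(size=(n, pixels))
target_imgs = rng.normal(loc=0.5, size=(n, pixels))

pca = PCA(n_components=latent_dim).fit(np.vstack([source_imgs, target_imgs]))
z_src, z_tgt = pca.transform(source_imgs), pca.transform(target_imgs)

# Transport in latent space: match each source code to a target code, then
# decode the matched codes back to image space.
cost = cdist(z_src, z_tgt, metric="sqeuclidean")
row, col = linear_sum_assignment(cost)
aligned_imgs = pca.inverse_transform(z_tgt[col])

# aligned_imgs[i] shows what source image i looks like after being pushed onto
# the target distribution; comparing it with source_imgs[i] explains the shift.
print(aligned_imgs.shape)
```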
We formalize the problem of feature shift and introduce a method for fast, simultaneous detection of domain shifts and localization of the shift to specific features.
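To illustrate the detect-and-localize interface (though not the method introduced in this work), a simple baseline runs a marginal two-sample test per feature with a Bonferroni correction: any flagged feature signals a shift, and the set of flagged features localizes it. The data here are assumed synthetic.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical data: feature 2 of the target has drifted; all others match.
n, d = 500, 5
source = rng.normal(size=(n, d))
target = rng.normal(size=(n, d))
target[:, 2] += 1.0

# Marginal Kolmogorov-Smirnov test per feature, Bonferroni-corrected across
# the d tests. Flagged features both detect and localize the shift.
alpha = 0.05
for j in range(d):
    stat, p = ks_2samp(source[:, j], target[:, j])
    if p < alpha / d:
        print(f"feature {j} shifted (p={p:.1e})")
```

Marginal tests like this can miss shifts that only appear in feature interactions, which is one motivation for more expressive shift-localization tests.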