Sean Kulinski
Explainable AI
Towards Explaining Distribution Shifts
We answer the question "What is a distribution shift explanation?" and introduce a novel framework for explaining distribution shifts via transport maps between a source and a target distribution that are either inherently interpretable or interpreted using post-hoc interpretability methods.
Sean Kulinski, David I. Inouye
PDF · Cite · Code · Slides
Towards Explaining Image-Based Distribution Shifts
Focusing on distribution shifts pertaining to images, we use interpretable transport maps between the latent image spaces of a source and a target distribution to explain how to align the source to the target distribution.
Sean Kulinski, David I. Inouye
PDF · Cite · Code