Towards Explaining Image-Based Distribution Shifts

Abstract

Distribution shift can have fundamental consequences such as signaling a change in the operating environment or significantly reducing the accuracy of downstream models. Thus, understanding distribution shifts is critical for examining and hopefully mitigating their effects. Most prior work has focused on either natively handling distribution shift (e.g., Domain Generalization) or merely detecting a shift while assuming any detected shift can be understood and handled appropriately by a human operator. For the latter, we aim to aid these manual mitigation tasks by explaining the distribution shift to an operator. To this end, we suggest two methods: providing a set of interpretable mappings from the original distribution to the shifted one, or providing a set of distributional counterfactual examples. We provide preliminary experiments on these two methods and discuss important concepts and challenges for moving towards a better understanding of image-based distribution shifts.

Publication
CVPR Workshop on Vision Datasets Understanding

Our paper can be found by clicking the PDF icon at the top! Please reach out if you have any questions. Cheers :)
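
To give a rough flavor of the "interpretable mapping" idea mentioned in the abstract, here is a toy sketch (not the method from the paper): it fits a simple per-dimension affine map between feature samples drawn from a source and a shifted distribution, ranks which dimensions changed the most, and uses the map to produce a counterfactual version of a source sample. The function names (`explain_shift`, `counterfactual`) and the per-dimension affine form are illustrative assumptions for this sketch only.

```python
# Hypothetical sketch of explaining a shift between a source set X and a
# shifted set Y of feature vectors: fit y ~ a * x + b per dimension, then
# rank dimensions by how far the fitted map is from the identity (a=1, b=0).
import numpy as np

def explain_shift(X: np.ndarray, Y: np.ndarray, n_top: int = 5):
    """Return per-dimension scale a, offset b, and the most-shifted dimensions."""
    a = Y.std(axis=0) / (X.std(axis=0) + 1e-8)   # per-dimension scaling
    b = Y.mean(axis=0) - a * X.mean(axis=0)      # per-dimension offset
    deviation = np.abs(a - 1.0) + np.abs(b)      # distance from the identity map
    top = np.argsort(deviation)[::-1][:n_top]    # dimensions that changed most
    return a, b, top

def counterfactual(x: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Map a single source sample into the shifted distribution."""
    return a * x + b

# Toy usage: a synthetic shift that rescales one feature and offsets another.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
Y = rng.normal(size=(500, 10))
Y[:, 3] *= 2.5        # feature 3 becomes more spread out
Y[:, 7] += 1.0        # feature 7 drifts upward
a, b, top = explain_shift(X, Y)
print("most-shifted feature dimensions:", top)
print("counterfactual of X[0]:", counterfactual(X[0], a, b)[:4])
```

The paper itself works with image-based shifts rather than toy feature vectors; the snippet above only conveys the general idea of explaining a shift through a mapping plus example-level counterfactuals.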

Sean Kulinski
GenAI Research Scientist

My research interest lies in making generative AI models more reliable. In practice, this means I tend to work on robust ML, capabilities-focused pretraining/finetuning, model failure analysis, and everything that comes with them.