On Distribution Shifts and What We Can Learn From Them

When a distribution shift occurs, we can no longer guarantee our ML model will perform well.

Abstract

Distribution shifts can have fundamental significance, such as signaling a change in the operating environment or substantially reducing the accuracy of downstream models. Understanding a distribution shift is therefore critical for examining and, hopefully, mitigating its effects. This talk first gives insight into distribution shifts in general and into the methods we have developed to help users better understand a specific distribution shift using interpretable transportation maps. After this, we will look at ways to avoid the problems caused by distribution shifts via methods like domain adaptation or, better yet, domain generalization. Finally, we will take a step back and hypothesize on whether distribution/domain shifts can be seen as “a feature rather than a bug” and whether this viewpoint can lead us to more generalizable machine learning models.
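To give a flavor of the transportation-map idea, here is a minimal sketch (not the method presented in the talk): for two equal-sized samples, an optimal-transport matching can be computed as a linear assignment problem, and the per-feature displacement of the matched pairs gives an interpretable summary of which features shifted. The data, variable names, and use of `scipy` here are illustrative assumptions.

```python
# Minimal sketch: explain a distribution shift via an optimal-transport matching.
# Assumes numpy and scipy are installed; the toy data below is illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
source = rng.normal(size=(200, 3))                       # reference sample
shift = np.array([2.0, 0.0, 0.0])                        # shift only feature 0
target = source + shift + 0.1 * rng.normal(size=(200, 3))  # shifted sample

# For equal-weight empirical distributions of the same size, optimal transport
# reduces to a linear assignment on the pairwise squared-distance cost matrix.
cost = cdist(source, target, metric="sqeuclidean")
rows, cols = linear_sum_assignment(cost)

# Mean per-feature displacement of matched pairs: large entries point to the
# features responsible for the shift (here, feature 0 dominates).
displacement = (target[cols] - source[rows]).mean(axis=0)
print("Mean displacement per feature:", displacement)
```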

Date
Feb 1, 2023 1:00 PM
Location
Virtual

The talk can be viewed by clicking on the Video icon at the top, or you can see a PDF of the slides via the Slides icon!

Please reach out if you have any questions. Cheers :)

Sean Kulinski
GenAI Research Scientist

My research interest lies in making generative AI models more reliable. Namely, the problems I tend to work on are robust ML, capabilities-focused pretraining/finetuning, model failure analysis, and all that comes with it.