

Keynote

Is distribution shift still an AI problem?

Sanmi Koyejo

Gold Room / Auditorium / Silver Room
Thu 3 Oct 6:30 a.m. PDT — 7:30 a.m. PDT

Abstract:

Distribution shift describes the phenomenon in which an AI model's deployment performance differs from its training performance. On the one hand, some claim that distribution shifts are ubiquitous in real-world deployments. On the other hand, modern systems (e.g., foundation models) are often claimed to be robust to distribution shifts by design. Similarly, phenomena such as “accuracy on the line” suggest that standard training already produces distribution-shift-robust models. When are these claims valid, and do modern models fail due to distribution shifts? If so, what can be done about it? This talk will outline modern principles and practices for understanding the role of distribution shifts in AI, discuss how the problem has changed, and present recent methods for engaging with distribution shifts, with an emphasis on comprehensive and practical insights. Highlights include a taxonomy of shifts, the role of foundation models, and fine-tuning. The talk will also briefly discuss how distribution shifts might interact with AI policy and governance.
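
As a concrete illustration of the phenomenon the abstract defines, the sketch below trains a classifier on one data distribution and evaluates it on a shifted one; the synthetic Gaussian data, the logistic-regression model, and the size of the shift are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of distribution shift (illustrative assumptions throughout):
# a model trained on one input distribution loses accuracy when the inputs
# move at deployment time (covariate shift).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, mean_shift=0.0):
    """Two Gaussian classes in 2D; `mean_shift` moves the inputs at deployment."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0 + mean_shift, scale=1.0, size=(n, 2))
    return X, y

# Train on the original distribution.
X_train, y_train = sample(5000)
clf = LogisticRegression().fit(X_train, y_train)

# Evaluate in-distribution vs. under a covariate shift of the inputs.
X_iid, y_iid = sample(5000)
X_shift, y_shift = sample(5000, mean_shift=1.5)

print("in-distribution accuracy:", accuracy_score(y_iid, clf.predict(X_iid)))
print("shifted-deployment accuracy:", accuracy_score(y_shift, clf.predict(X_shift)))
```

Running this shows high accuracy on held-out data from the training distribution and a marked drop on the shifted data, which is the gap the talk's questions about robustness refer to.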
