MIT Deep Learning 6.S191

Talk Abstract

Deep learning models are bad at signaling failure: they tend to make predictions with high confidence even when those predictions are wrong. This is problematic in real-world applications such as healthcare, self-driving cars, and natural language systems, where the safety implications are considerable, or where there are discrepancies between the training data and the data the model makes predictions on. There is a pressing need both to understand when models should not make predictions and to improve model robustness to natural changes in the data. In this talk, I'll give a very abridged version of my NeurIPS tutorial on uncertainty and robustness in deep learning and then introduce some more recent work developed to address these challenges.
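As an illustrative sketch (not material from the talk itself), one widely used approach to the "knowing when not to predict" problem is the deep ensemble: train several models independently, average their predicted probabilities, and treat the entropy of the averaged prediction as an uncertainty signal. The toy example below uses tiny logistic-regression members on synthetic 2-D data; all names and data are invented for illustration.

```python
# Illustrative sketch of a deep ensemble for uncertainty estimation.
# Members are tiny logistic regressions; real uses would train neural nets.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_member(X, y, steps=500, lr=0.5):
    """Train one ensemble member by gradient descent on logistic loss."""
    w = rng.normal(size=X.shape[1])  # random init provides ensemble diversity
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Two well-separated training clusters (the "in-distribution" data).
X = np.vstack([rng.normal([-2, 0], 0.5, size=(100, 2)),
               rng.normal([+2, 0], 0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Train each member on a bootstrap resample for extra diversity.
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(train_member(X[idx], y[idx]))

def predictive_entropy(x):
    """Average member probabilities, then return the entropy of the mean."""
    p = np.mean([sigmoid(x @ w + b) for w, b in ensemble])
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# The ensemble should be confident near the training data and less
# confident far from it -- a simple proxy for signaling failure.
h_in = predictive_entropy(np.array([-2.0, 0.0]))  # in-distribution point
h_ood = predictive_entropy(np.array([0.0, 8.0]))  # far from the training data
print(f"entropy in-dist: {h_in:.3f}, entropy OOD: {h_ood:.3f}")
```

Because the members disagree away from the training data, the averaged prediction there is closer to 0.5 and its entropy is higher, which is exactly the kind of "I don't know" signal the abstract argues standard single models fail to provide.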

Speaker Bio

Jasper Snoek is currently a staff research scientist at Google Brain. Recently, his research has focused on methods for improving the uncertainty and robustness of deep learning methods. His interests span a variety of topics at the intersection of Bayesian methods and deep learning. He completed his PhD in machine learning at the University of Toronto. He subsequently held postdoctoral fellowships at the University of Toronto, under Geoffrey Hinton and Ruslan Salakhutdinov, and at the Harvard Center for Research on Computation and Society, under Ryan Adams. Jasper co-founded the machine learning startup Whetlab, which was acquired by Twitter. He has served as an Area Chair for NeurIPS, ICML, and ICLR, and has organized a variety of workshops at ICML and NeurIPS.