Jason Hartford
Title: "Some steps towards causal representation learning".
Abstract:
High-dimensional unstructured data such as images or sensor data can often be collected cheaply in experiments, but is challenging to use in a causal inference pipeline without extensive engineering and domain knowledge to extract the underlying latent factors. The long-term goal of causal representation learning is to find appropriate assumptions and methods to disentangle latent variables and learn the causal mechanisms that explain a system's behaviour. In this talk, I'll present results from a series of recent papers that describe how we can leverage assumptions about a system's causal mechanisms to provably disentangle latent factors. I will also discuss the limitations of a commonly used injectivity assumption, and present a hierarchy of settings that relax this assumption.
Speaker:
Jason Hartford is currently a postdoc at Mila with Yoshua Bengio. He previously completed his PhD at UBC with Kevin Leyton-Brown. His research focuses on using deep learning for causal inference, and on designing deep network architectures for permutation-invariant data.