Integrating Bayesian Deep Learning Uncertainties in Medical Image Analysis
Speaker: Raghav Mehta
Supervised by Prof. Tal Arbel
Thesis defence
Abstract: Although Deep Learning (DL) models have been shown to perform very well on various medical imaging tasks, inference in the presence of pathology presents several challenges to common models. These challenges impede the integration of DL models into real clinical workflows. Deploying these models in real clinical contexts requires: (1) that the confidence in DL model predictions be accurately expressed in the form of uncertainties, and (2) that the models exhibit robustness and fairness across different sub-populations. Quantifying the reliability of DL model predictions in the form of uncertainties would enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Similarly, embedding uncertainty estimates across cascaded inference tasks, which are prevalent in medical image analysis, should also improve performance on the downstream inference tasks.

In this thesis, we develop an uncertainty quantification score for the task of Brain Tumour Segmentation and evaluate its usefulness in two challenges, BraTS 2019 and BraTS 2020. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analysis.

We further show the importance of uncertainty estimates in medical image analysis by propagating the uncertainty generated by upstream tasks into the downstream task of interest. Our results on three clinically relevant tasks indicate that uncertainty propagation helps improve performance on the downstream task of interest.

Additionally, we bring fairness across demographic subgroups into the picture alongside uncertainty estimation. Through extensive experiments on multiple tasks, we show that popular ML methods for achieving fairness across different subgroups, such as data balancing and distributionally robust optimization, succeed in terms of model performance on some of the tasks. However, this can come at the cost of poor uncertainty estimates for the model predictions. This tradeoff must be mitigated if fairness methods are to be adopted in medical image analysis.

In the last part of the thesis, we look at Active Learning (AL) as a means of reducing the manual labeling required for a dataset. Specifically, we present an information-theoretic AL framework that guides the optimal selection of images for labeling. Results indicate that the proposed framework outperforms several existing methods and, through careful design choices, can be integrated into existing DL classifiers with minimal computational overhead.
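
For readers less familiar with the techniques touched on in the abstract, the following is a minimal sketch of one common way to obtain voxel-wise uncertainties from a segmentation network: Monte Carlo (MC) dropout, where dropout is kept active at test time and the spread over repeated stochastic forward passes is summarized as predictive entropy. The toy network, image shape, and number of passes T are illustrative assumptions; the abstract does not commit the thesis to this particular estimator.

# Minimal sketch (assumptions noted above): MC dropout for voxel-wise
# segmentation uncertainty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy 2D segmentation network with dropout, standing in for a real model."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, 16, kernel_size=3, padding=1)
        self.drop = nn.Dropout2d(p=0.5)
        self.conv2 = nn.Conv2d(16, n_classes, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv2(self.drop(F.relu(self.conv1(x))))

@torch.no_grad()
def mc_dropout_predict(model, x, T=20):
    """Average class probabilities over T stochastic passes and return the
    mean prediction plus a voxel-wise predictive-entropy uncertainty map."""
    model.train()  # keep dropout active at test time (the MC-dropout trick)
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
    mean_p = probs.mean(dim=0)                                   # (B, C, H, W)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)    # (B, H, W)
    return mean_p, entropy

model = TinySegNet()
image = torch.randn(1, 1, 64, 64)            # dummy single-channel "scan"
mean_p, uncertainty = mc_dropout_predict(model, image)
print(uncertainty.shape)                     # torch.Size([1, 64, 64])

High-entropy voxels are the natural candidates for the clinical review described in the abstract.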
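
Similarly, the distributionally robust optimization mentioned as a fairness method can be sketched as a group-DRO style training step that up-weights whichever demographic subgroup currently has the highest loss. The exponential-weights update, the step size eta, and the integer group labels are standard but assumed details, not a description of the thesis's exact experimental setup.

# Minimal sketch (assumed details noted above): a group-DRO style loss.
import torch
import torch.nn.functional as F

def group_dro_loss(logits, targets, groups, group_weights, eta=0.1):
    """Return a reweighted loss after nudging the exponential group weights
    toward the worst-performing subgroup in this batch."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    group_losses = torch.stack([
        per_sample[groups == g].mean() if (groups == g).any()
        else per_sample.new_zeros(())
        for g in range(group_weights.numel())
    ])
    with torch.no_grad():                     # weights are not learned by SGD
        group_weights *= torch.exp(eta * group_losses)
        group_weights /= group_weights.sum()
    return (group_weights * group_losses).sum()

# Usage with a batch of logits, labels, and integer subgroup ids:
weights = torch.ones(2) / 2                   # two subgroups, uniform start
logits = torch.randn(8, 3, requires_grad=True)
targets = torch.randint(0, 3, (8,))
groups = torch.randint(0, 2, (8,))
loss = group_dro_loss(logits, targets, groups, weights)
loss.backward()

The abstract's cautionary finding is that such schemes can improve worst-group performance while degrading the quality of the associated uncertainty estimates, so both should be monitored.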
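
Finally, the information-theoretic AL framework mentioned in the last paragraph is in the spirit of mutual-information acquisition functions such as BALD; a generic sketch is given below. Here probs holds class probabilities from T stochastic forward passes (for instance, the MC-dropout samples from the first sketch) over N unlabeled pool images; the thesis's actual acquisition function may differ.

# Minimal sketch: a BALD-style mutual-information acquisition score for
# active learning, one standard instance of the information-theoretic
# framing (not necessarily the thesis's exact criterion).
import torch

def bald_score(probs, eps=1e-8):
    """probs: (T, N, C) class probabilities from T stochastic passes over N
    unlabeled images. Returns an (N,) score: predictive entropy minus the
    expected entropy, i.e. the epistemic part a new label could reduce."""
    mean_p = probs.mean(dim=0)                                      # (N, C)
    predictive_H = -(mean_p * torch.log(mean_p + eps)).sum(-1)
    expected_H = -(probs * torch.log(probs + eps)).sum(-1).mean(0)
    return predictive_H - expected_H

# Usage: score an unlabeled pool and send the top-k images for labeling.
T, N, C = 20, 1000, 2
probs = torch.softmax(torch.randn(T, N, C), dim=-1)  # stand-in for real passes
query_indices = torch.topk(bald_score(probs), k=16).indices

Because such a score reuses the same stochastic forward passes as the uncertainty maps, it adds little computational overhead to an existing DL classifier, which is the integration property the abstract highlights.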