Active learning for Bayesian neural networks in image classification
| Title in Czech: | Aktivní učení Bayesovských neuronových sítí pro klasifikaci obrazu |
|---|---|
| Title in English: | Active learning for Bayesian neural networks in image classification |
| Keywords (Czech): | strojové učení, hluboké učení, neuronové sítě, aktivní učení, klasifikace obrazu, Bayesovské učení, Bayesovské neuronové sítě |
| Keywords (English): | machine learning, deep learning, neural networks, active learning, image classification, Bayesian learning, Bayesian neural networks |
| Academic year of announcement: | 2019/2020 |
| Thesis type: | master's thesis |
| Language: | English |
| Department: | Department of Theoretical Computer Science and Mathematical Logic (32-KTIML) |
| Supervisor: | Tomáš Šabata |
| Author: | hidden |
| Date of registration: | 20.01.2020 |
| Date of assignment: | 20.01.2020 |
| Confirmed by the study department on: | 24.01.2020 |
| Date and time of defense: | 14.09.2020 08:30 |
| Electronic submission date: | 27.07.2020 |
| Printed submission date: | 30.07.2020 |
| Date of defense: | 14.09.2020 |
| Opponents: | Mgr. Marta Vomlelová, Ph.D. |
| Consultants: | prof. RNDr. Ing. Martin Holeňa, CSc. |
Guidelines for preparation
The goal of this thesis is to survey various approximate Bayesian deep learning methods and to comparatively evaluate possible applications of the uncertainty estimates they provide in active learning.

- Study deep neural networks for image classification
- Study and describe the methodological background of Bayesian neural networks
- Study the uncertainty sampling active learning framework for image classification with neural networks
- Compare various approximate Bayesian deep learning methods in the setting of the uncertainty sampling active learning framework
- Evaluate and compare these methods on a commonly used image dataset in terms of model performance and the computational complexity of training and inference
Recommended literature
B. Settles. Active learning literature survey. University of Wisconsin-Madison, Department of Computer Sciences, 2009.
I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org
Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. International Conference on Machine Learning, 2016, pages 1050-1059.
C. Blundell et al. Weight uncertainty in neural networks. International Conference on Machine Learning, 2015, pages 1613-1622.
Preliminary scope of thesis
Deep learning has shown strong performance in various machine learning tasks, especially image classification. However, deep learning methods typically require large amounts of labelled data, which can be expensive to obtain.
Active learning methods aim to mitigate this issue by automatically selecting the most informative training examples to be labelled from a larger pool of unlabelled data, which tends to be readily available. The most commonly used active learning framework relies on uncertainty estimates for selecting the training examples to be labelled. However, traditional neural networks do not provide reliable estimates of prediction confidence. Bayesian machine learning infers entire distributions over model parameters instead of the point estimates used in traditional machine learning, and these distributions can be used to estimate the uncertainty of model predictions. However, exact Bayesian learning is intractable for any practical neural network. Various methods that approximate the exact posterior distributions over model parameters with simpler distributions have been proposed and successfully applied to neural networks.
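The uncertainty sampling loop described above can be sketched in a few lines. The following is a minimal, framework-free illustration, not the thesis's actual implementation: `model_sample` stands in for one stochastic forward pass of an approximate Bayesian model (e.g. a single MC-dropout pass, as in Gal and Ghahramani), the acquisition score is the predictive entropy of the averaged class probabilities, and `toy_model` and all other names are hypothetical.

```python
import math
import random

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution over T stochastic passes."""
    n_classes = len(prob_samples[0])
    mean = [sum(s[c] for s in prob_samples) / len(prob_samples)
            for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def select_most_uncertain(pool, model_sample, T=20):
    """Uncertainty sampling: score each unlabelled example by the predictive
    entropy of T stochastic passes and return the index of the most
    uncertain example, i.e. the one to send to the labeller next."""
    scores = [predictive_entropy([model_sample(x) for _ in range(T)])
              for x in pool]
    return max(range(len(pool)), key=scores.__getitem__)

# Toy stochastic "model": confident on input 0, uncertain on input 1.
def toy_model(x):
    if x == 0:
        return [0.98, 0.01, 0.01]
    noise = [random.random() for _ in range(3)]
    total = sum(noise)
    return [n / total for n in noise]

random.seed(0)
print(select_most_uncertain([0, 1], toy_model))  # → 1 (the uncertain example)
```

In a real setting, `model_sample` would perform a forward pass of a network with dropout kept active at test time, and the selected examples would be labelled and added to the training set before the model is retrained.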