Annotation
In Machine Learning one develops mathematical methods for modeling data structures that express dependencies between observables, and designs efficient learning algorithms for estimating such dependencies. The most advanced part of Machine Learning is statistical learning theory, which takes our incomplete information about the observables into account using probability theory, or preferably measure theory and functional analysis. In this way we not only unveil the hidden structure of the data but also make predictions about the future.
Last update: Šmíd Dalibor, Mgr., Ph.D. (13.05.2022)
Course completion requirements
1. Participation in the course is a prerequisite for taking the exam.
2. Exam questions correspond to the syllabus of the course to the extent it was covered in the lectures. Alternatively, students can choose a term paper assignment.
3. The final mark takes into account active participation in the lectures.
Last update: Le Hong Van, Ph.D. (12.09.2021)
Literature
1. S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, 2014.
2. M. Mohri, A. Rostamizadeh, A. Talwalkar, Foundations of Machine Learning, MIT Press, 2nd edition, 2018.
3. L. Devroye, L. Györfi and G. Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996.
4. Lecture notes "Mathematical foundations of machine learning"
Last update: Le Hong Van, Ph.D. (12.09.2021)
Syllabus
1. Statistical models of machine learning.
2. Supervised learning, unsupervised learning.
3. Generalization ability of machine learning.
4. Neural networks and deep learning.
5. Bayesian machine learning and Bayesian networks.
Last update: Le Hong Van, Ph.D. (12.09.2021)