Thesis (Selection of subject)
Thesis details
Thesis title in Czech: Hluboké strojové učení a reprezentace významu
Thesis title in English: Deep Learning and Meaning Representation
Academic year of topic announcement: 2016/2017
Thesis type: dissertation
Thesis language:
Department: Institute of Formal and Applied Linguistics (32-UFAL)
Supervisor: doc. RNDr. Ondřej Bojar, Ph.D.
Author: hidden - assigned and confirmed by the Study Dept.
Date of registration: 20.09.2017
Date of assignment: 20.09.2017
Confirmed by Study dept. on: 05.10.2017
Guidelines
Deep learning, i.e. machine learning based on neural networks with hidden layers, has recently overcome its main obstacle of long inputs and especially long outputs, and it has succeeded in many areas of text and natural language processing (NLP). While end-to-end performance, as measured on unseen test sets, is surprisingly good in many settings, neural models remain relatively opaque and hard to interpret and debug. At the same time, an in-depth understanding is essential if we want to propose further improvements in performance.

The goal of the thesis is to study one or more of the more complex tasks in natural language processing, e.g. machine translation, text summarization, or question answering, including multi-task settings (e.g. learning to translate and to parse the source sentence at the same time), as performed by current deep-learning approaches, and to propose techniques that could be useful in interpreting the learned models. The range of possible diagnostic methods is very broad: targeted visualization of high-dimensional vector spaces; analysis of various network parameters and their development during training or application of the network; test suites dedicated to particular linguistic phenomena; comparison of network behavior on contrastive pairs of inputs; and additional network structures or alterations of network parameters introduced for diagnostic purposes. Linguistic theories of meaning representation can be useful in many ways in such interpretations, and linguistic data can, at a minimum, serve as diverse test sets. Based on the results of the analysis, improvements applicable to the various NLP tasks (carried out alone or jointly) are to be sought.
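
As a concrete illustration of one diagnostic method mentioned above, the following is a minimal sketch (not part of the official assignment) of evaluating a model on contrastive pairs targeting a single linguistic phenomenon. The test pairs and the scoring function are illustrative assumptions; in a real experiment the scorer would return the log-probability assigned by the trained network under study.

# Minimal sketch: contrastive-pair diagnostics for a trained model.
# `model_score` is a hypothetical stand-in for the log-probability a
# real trained network would assign to a sentence.

from typing import Callable, List, Tuple

def contrastive_accuracy(score: Callable[[str], float],
                         pairs: List[Tuple[str, str]]) -> float:
    """Fraction of pairs where the acceptable variant scores higher."""
    hits = sum(1 for good, bad in pairs if score(good) > score(bad))
    return hits / len(pairs)

# Toy test suite targeting subject-verb agreement (illustrative only).
pairs = [
    ("The keys to the cabinet are on the table.",
     "The keys to the cabinet is on the table."),
    ("The author of the books writes well.",
     "The author of the books write well."),
]

# Placeholder scorer so the sketch runs; a real experiment would query
# the trained NMT or language-model system being diagnosed here.
def model_score(sentence: str) -> float:
    return -len(sentence)

print(f"agreement accuracy: {contrastive_accuracy(model_score, pairs):.2f}")

Accuracy close to chance on such a suite would suggest the model does not capture the targeted phenomenon, which is exactly the kind of finding the proposed analysis should turn into concrete model improvements.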
References
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
http://www.deeplearningbook.org/

Phil Blunsom et al. Materials for the Deep Natural Language Processing course offered in Hilary Term 2017, University of Oxford.
https://github.com/oxford-cs-deepnlp-2017/lectures

J. Li, W. Monroe, and D. Jurafsky. Understanding Neural Networks through Representation Erasure. arXiv e-prints, December 2016.
https://arxiv.org/abs/1612.08220

Ondřej Bojar. Čeština a strojový překlad [Czech and Machine Translation]. ÚFAL, Prague, Czechia, ISBN 978-80-904571-4-0, 168 pp., 2012.
 