Good lectures, with good references to MT talks. I would have liked a better technical explanation of the MERT part of the lecture, rather than being pushed to read the original paper to understand the process, but besides this it was very well structured.
Mgr. Jindřich Libovický, Ph.D. [32-UFAL], Natural Language Processing [NPFL124, lecture]
Great lectures, very well summarized. He managed to extract only the essential parts from the papers on CNNs, LSTMs, and Transformers, emphasizing the important points. He also clearly highlighted what problem we are addressing, what the limitations are, and so on.
doc. RNDr. Pavel Pecina, Ph.D. [32-UFAL], Natural Language Processing [NPFL124, lecture]
Good lecture, and the principles were nicely explained. The second part was a bit too theoretical, if you ask me, so I would try to attach more examples to the probability formulae if possible.
RNDr. Daniel Zeman, Ph.D. [32-UFAL], Natural Language Processing [NPFL124, lecture]
The lectures were very poorly documented: I tried reading the slides and watching the recorded lecture to understand the concept of two-level grammar and its use in transducer models, but after two viewings I still couldn't understand anything. My solution was to consult Jurafsky's book on speech and language processing, which was much better documented and provided examples for many interesting cases. This was mostly to answer the exam questions: regarding the surface and lexical language levels, I could not understand what the input is and what the output is. What task are we trying to solve? What is the problem, what is the issue? It came across as a very poorly documented solution and was extremely disappointing.
doc. Ing. Zdeněk Žabokrtský, Ph.D. [32-UFAL], Natural Language Processing [NPFL124, lecture]
These lectures were a bit superficial in my opinion: they presented many formulae without accompanying examples, so I had to search Google for some. Also, I didn't like that so much attention went to data resources (an entire lecture) instead of motivating why we need them and why this is a linguistic problem (in coreference corpora, for example).
Comment on the course, Natural Language Processing [NPFL124, lecture]
It was great that the course provided deeper insight into 5 NLP approaches, and they were merged in a nice fashion. I also appreciated that possible exam questions were posted. I would have preferred the option of completing only the implementation tasks without being forced to take the exam.