Thesis detail
Knowledge representation in deep neural networks
Title in Czech: Reprezentace znalostí v hlubokých neuronových sítích
Title in English: Knowledge representation in deep neural networks
Keywords (Czech): hluboké neuronové sítě|konvoluční neuronové sítě|extrakce znalostí|reprezentace znalostí|optimalizace architektury|výběr příznaků
Keywords (English): deep neural networks|convolutional neural networks|knowledge extraction|knowledge representation|architecture optimization|feature extraction
Academic year of announcement: 2021/2022
Thesis type: diploma thesis
Thesis language: English
Department: Katedra teoretické informatiky a matematické logiky (32-KTIML)
Supervisor: doc. RNDr. Iveta Mrázová, CSc.
Author: hidden - assigned and confirmed by the Study Department
Date of registration: 03.12.2021
Date of assignment: 03.12.2021
Date of confirmation by the Study Department: 23.12.2021
Date and time of defence: 12.06.2023 09:00
Date of electronic submission: 02.05.2023
Date of submission of the printed version: 09.05.2023
Date of defence: 12.06.2023
Opponents: Mgr. Klára Pešková, Ph.D.
Guidelines
The student shall review the following topics in his diploma thesis:

- overview and mutual comparison of various approaches applicable to deep neural networks, in particular, convolutional neural networks and their variants (e.g., ResNet, DenseNet, InceptionNet, and XceptionNet),

- visualization and interpretation of knowledge extracted by the network (e.g., by means of t-SNE, UMAP, or Ivis, among others; a short illustrative sketch follows this list),

- recapitulation of known paradigms relevant to feature extraction and architecture optimization (e.g., saliency maps, adversarial patterns, genetic algorithms, and pruning); a saliency-map sketch is likewise given below.
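
For illustration of the visualization item above, a minimal sketch (not part of the official assignment text) of projecting the features learned by a pretrained convolutional network with t-SNE. The choice of ResNet-18 as the feature extractor, CIFAR-10 as a stand-in dataset, and all hyperparameters are assumptions made only to keep the example self-contained.

    # Extract penultimate-layer features from a pretrained ResNet-18 and
    # project them into 2-D with t-SNE (van der Maaten & Hinton, 2008).
    import torch
    import torchvision
    from torchvision import transforms
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    # Classification head replaced by identity -> 512-dimensional features.
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    model.fc = torch.nn.Identity()
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # CIFAR-10 stands in here for an arbitrary labelled image collection.
    dataset = torchvision.datasets.CIFAR10(root="data", train=False,
                                           download=True, transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

    features, labels = [], []
    with torch.no_grad():
        for i, (x, y) in enumerate(loader):
            features.append(model(x))
            labels.append(y)
            if i == 15:                  # ~1000 samples suffice for a 2-D map
                break
    features = torch.cat(features).numpy()
    labels = torch.cat(labels).numpy()

    # Non-linear projection of the learned representation into two dimensions.
    embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
    plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=5, cmap="tab10")
    plt.title("t-SNE of ResNet-18 features")
    plt.savefig("tsne_features.png")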

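A similarly hedged sketch of the vanilla saliency map mentioned in the last item (Simonyan et al., 2014): the gradient of the winning class score with respect to the input pixels. The input file name is hypothetical and the pretrained ResNet-18 is just one possible backbone.

    # Vanilla saliency map: backpropagate the top class score to the input.
    import torch
    import torchvision
    from torchvision import transforms
    from PIL import Image
    import matplotlib.pyplot as plt

    model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    # "vehicle.jpg" is a placeholder for any input image.
    x = preprocess(Image.open("vehicle.jpg").convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    scores = model(x)
    scores[0, scores.argmax()].backward()    # gradient of the top class score

    # Per-pixel saliency: maximum absolute gradient over the colour channels.
    saliency = x.grad.abs().max(dim=1)[0].squeeze().numpy()
    plt.imshow(saliency, cmap="hot")
    plt.axis("off")
    plt.savefig("saliency.png")
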
The student will focus on some of these topics in more detail. Further, he will propose a suitable strategy for reliable real-time object detection and classification of, e.g., transport vehicles, and will implement the models. The evaluation of the obtained results and gained experience shall form an important part of the thesis.
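
Finally, a minimal single-frame sketch of the detection task, assuming a COCO-pretrained Faster R-CNN from torchvision; the vehicle class indices, the score threshold, and the input frame name are assumptions, not prescriptions of the thesis. A truly real-time pipeline would more likely rely on a one-stage detector, but the pretrained weights keep this illustration self-contained.

    # Detect transport vehicles in one frame with a COCO-pretrained detector.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # COCO category ids of typical transport vehicles.
    VEHICLE_CLASSES = {2: "bicycle", 3: "car", 4: "motorcycle", 6: "bus", 8: "truck"}

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("frame.jpg").convert("RGB")   # hypothetical input frame
    with torch.no_grad():
        prediction = model([to_tensor(image)])[0]

    # Keep only confident detections of vehicle classes.
    for box, label, score in zip(prediction["boxes"],
                                 prediction["labels"],
                                 prediction["scores"]):
        if score > 0.5 and label.item() in VEHICLE_CLASSES:
            print(VEHICLE_CLASSES[label.item()], round(score.item(), 3), box.tolist())
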
Recommended literature
1. Some of the textbooks available for the chosen area of research, e.g.:
- I. Goodfellow, Y. Bengio, A. Courville: Deep Learning, The MIT Press, (2016).
- S. Haykin: Neural Networks and Learning Machines, 3rd edition, Pearson, (2009).

2. Journal papers and other publications:
- Y. LeCun, L. Bottou, Y. Bengio, P. Haffner: Gradient-Based Learning Applied to Document Recognition, in: Proc. of the IEEE, vol. 86, no. 11 (Nov. 1998), pp. 2278-2324.
- K. He, X. Zhang, S. Ren, and J. Sun: Deep Residual Learning for Image Recognition, in: arXiv:1512.03385v1, (2015).
- G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger: Densely Connected Convolutional Networks, in: arXiv:1608.06993v5, (2018).
- Ch. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich: Going Deeper with Convolutions, in: Proc. of CVPR 2015, IEEE, (2015), pp. 1-9.
- F. Chollet: Xception: Deep Learning with Depthwise Separable Convolutions, in: arXiv:1610.02357v3, (2017).
- L. van der Maaten, G. Hinton: Visualizing Data Using t-SNE, in: Journal of Machine Learning Research, vol. 9, (2008), pp. 2579-2605.
- L. McInnes, J. Healy, and J. Melville: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, in: arXiv:1802.03426v3, (2020).
- B. Szubert, J. E. Cole, C. Monaco, and I. Drozdov: Structure-preserving visualisation of high dimensional single-cell datasets, in: Sci Rep 9, 8914 (2019). https://doi.org/10.1038/s41598-019-45301-0.
- K. Simonyan, A. Vedaldi, and A. Zisserman: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, in: arXiv:1312.6034v2, (2014).
- J. Yosinski, J. Clune, Y. Bengio, and H. Lipson: How transferable are features in deep neural networks?, in: arXiv:1411.1792v1, (2014).
- A. Nguyen, J. Yosinski, and J. Clune: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, in: CVPR 2015, IEEE, (2015), pp. 427-436.
- R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, H. Shahrzad, A. Navruzyan, N. Duffy, and B. Hodjat: Evolving Deep Neural Networks, Chapter 15 in: R. Kozma et al. (eds.): Artificial Intelligence in the Age of Neural Networks and Brain Computing, Elsevier, (2018).
- S.-K. Yeom, P. Seegerer, S. Lapuschkin, S. Wiedemann, K.-R. Müller, and W. Samek: Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning, in: arXiv:1912.08881v1, (2019).

3. Relevant articles from leading academic journals:
Neurocomputing, Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, etc.

 