Thesis (Selection of subject) (version: 368)
Thesis details
Knowledge Extraction with Deep Belief Networks
Thesis title in Czech: Extrakce znalostí pomocí DBN-sítí
Thesis title in English: Knowledge Extraction with Deep Belief Networks
Key words: DBN-sítě|RBM-sítě|extrakce znalostí|reprezentace znalostí|prořezávání|optimalizace architektury
English key words: deep belief networks|restricted Boltzmann machines|knowledge extraction|knowledge representation|pruning|architecture optimization
Academic year of topic announcement: 2022/2023
Thesis type: Bachelor's thesis
Thesis language: English
Department: Department of Theoretical Computer Science and Mathematical Logic (32-KTIML)
Supervisor: doc. RNDr. Iveta Mrázová, CSc.
Author: Bc. Jan Bronec - assigned and confirmed by the Study Dept.
Date of registration: 20.01.2023
Date of assignment: 24.01.2023
Confirmed by Study dept. on: 15.02.2023
Date and time of defence: 29.06.2023 09:00
Date of electronic submission: 10.05.2023
Date of submission of printed version: 10.05.2023
Date of proceeded defence: 29.06.2023
Opponents: RNDr. Věra Flídrová
Guidelines
The student shall review the following topics in his thesis:

- recapitulation of the paradigms applicable to training of deep belief networks, in particular, RBM-networks, contrastive divergence, DBN-networks, and fine-tuning,
- overview and mutual comparison of various approaches applicable to knowledge extraction with neural networks, e.g., rule extraction/insertion, pruning, and visualization using UMAP.
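To illustrate the first topic above, the following is a minimal NumPy sketch of training a binary RBM with one step of contrastive divergence (CD-1). It is an illustrative sketch only, not part of the assignment; the class and function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-binary restricted Boltzmann machine trained with CD-1 (illustrative sketch)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.rng = rng

    def cd1_step(self, v0, lr=0.1):
        # Positive phase: hidden probabilities and a sampled hidden state.
        ph0 = sigmoid(v0 @ self.W + self.c)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again.
        pv1 = sigmoid(h0 @ self.W.T + self.b)
        ph1 = sigmoid(pv1 @ self.W + self.c)
        # CD-1 gradient approximation: positive minus negative statistics.
        batch = v0.shape[0]
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
        self.b += lr * (v0 - pv1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)
        # Mean squared reconstruction error, a common (if crude) progress monitor.
        return np.mean((v0 - pv1) ** 2)
```

A DBN would stack several such RBMs, training them greedily layer by layer before supervised fine-tuning; repeated calls to `cd1_step` on a small batch of binary patterns should drive the reconstruction error down.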

The student will focus on some of these topics in more detail. Further, he will propose a suitable strategy for reliable classification of the presented objects, e.g., real-world image data, and will implement the models. Evaluating the obtained results and the gained experience shall form an essential part of the thesis.
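One of the pruning approaches referenced below, global magnitude pruning, can be sketched in a few lines of NumPy. This is an illustrative sketch only, not part of the assignment; the function name and interface are hypothetical.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights in W.

    Returns the pruned matrix and the boolean mask of surviving weights.
    """
    k = int(sparsity * W.size)           # number of weights to remove
    if k == 0:
        return W.copy(), np.ones_like(W, dtype=bool)
    # Threshold = k-th smallest absolute weight (np.partition avoids a full sort).
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    mask = np.abs(W) > thresh
    return W * mask, mask
```

In practice the mask is kept fixed and the surviving weights are retrained (or not, cf. Pietron and Wielgosz below); more refined criteria such as Optimal Brain Surgeon weigh each removal by its estimated effect on the loss rather than by magnitude alone.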
References
1. Some of the textbooks available for the chosen area of research, e.g.:
- C. C. Aggarwal: Neural Networks and Deep Learning: A Textbook, Springer (2018).
- S. Marsland: Machine Learning: An Algorithmic Perspective, 2nd Edition, Chapman & Hall/CRC (2015).

2. Journal papers and other publications:
- S.-K. Chao, Z. Wang, Y. Xing, and G. Cheng: Directional Pruning of Deep Neural Networks, in: Advances in Neural Information Processing Systems, vol. 33, Curran Associates, Inc. (2020), pp. 13986-13998.
- X. Dong, S. Chen, and S. Pan: Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon, in: Advances in Neural Information Processing Systems, vol. 30, Curran Associates, Inc. (2017), 11 p.
- T. Hoefler, D. Alistarh, T. Ben-Nun, N. Dryden, and A. Peste: Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks, in: Journal of Machine Learning Research, vol. 23 (2021), pp. 1-124.
- L. McInnes, J. Healy, and J. Melville: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, in: arXiv:1802.03426v3 (2020).
- M. Pietron and M. Wielgosz: Retrain or Not Retrain? - Efficient Pruning Methods of Deep CNN Networks, in: LNCS, vol. 12139 (2020), pp. 452-463.
- S. N. Tran and A. S. d'Avila Garcez: Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks, in: IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 2 (Feb. 2018), pp. 246-258.
- H. Wang, C. Qin, Y. Bai, Y. Zhang, and Y. Fu: Recent Advances on Neural Network Pruning at Initialization, in: Proc. of IJCAI-22, IJCAI Organization (2022), pp. 5638-5645.

3. Relevant articles from leading academic journals:
Neurocomputing, Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, etc.
 
Charles University | Information system of Charles University | http://www.cuni.cz/UKEN-329.html