Thesis (Selection of subject) (version: 368)
Thesis details
Rotation-equivariant convolutional neural network for design of visual prosthetic stimulation protocol
Thesis title in Czech: Rotačně ekvivariantní konvoluční sítě pro tvorbu vizuálních prostetických stimulačních protokolů
Thesis title in English: Rotation-equivariant convolutional neural network for design of visual prosthetic stimulation protocol
Key words: hluboké neuronové sítě|výpočetní neurověda|rotačně ekvivariantní CNN|konvoluční neuronové sítě|kortikální protéza|stimulační protokol
English key words: deep neural networks|computational neuroscience|rotation-equivariant CNN|convolutional neural network|cortical prosthetics|stimulation protocol
Academic year of topic announcement: 2021/2022
Thesis type: Bachelor's thesis
Thesis language: English
Department: Department of Software and Computer Science Education (32-KSVI)
Supervisor: Mgr. Ján Antolík, Ph.D.
Author: Bc. Martin Picek - assigned and confirmed by the Study Dept.
Date of registration: 13.10.2021
Date of assignment: 21.10.2021
Confirmed by Study dept. on: 26.04.2022
Date and time of defence: 12.09.2022 09:00
Date of electronic submission: 22.07.2022
Date of submission of printed version: 22.07.2022
Date of proceeded defence: 12.09.2022
Opponents: Mgr. Martin Pilát, Ph.D.
 
 
 
Guidelines
Visual prostheses are a promising technology for restoring a functional form of vision in blind people. Such devices, many of which are currently in clinical trials, are typically implanted in the retina, the LGN, or the primary visual cortex (V1) and evoke visual percepts via direct stimulation of neurons. This work aims to inform the future design of V1 cortical prostheses.
Human V1 exhibits a characteristic spatial organization of information encoding (retinotopic mapping, orientation domains, blobs, etc.), such that neurons within a given cortical column present very stereotyped encoding features. It is only thanks to this organization that it is possible to stimulate neurons in a localized region indiscriminately and still evoke predictable percepts.
Current stimulation protocols take into account only retinotopy (the continuous mapping between the visual field and V1) and disregard any other spatial organization of information encoding.
Such an approach is clearly a limiting factor: a high-quality form of vision can be recovered only by stimulating targeted neurons according to their encoding properties.
The current state of prosthetic technology is close to achieving a stimulation resolution roughly matching the size of orientation domains. This will allow the next generation of prosthetic stimulation protocols to stimulate neurons at a given position according not only to their position but also to their orientation preference. It therefore becomes important to understand how well the firing rate of a neuron can be predicted knowing only its position and orientation preference. This work aims precisely at answering that question via a specialized deep learning approach, based on a rotation-equivariant convolutional core whose parameters are constrained by experimental data of V1 neuron responses to images. The predictions of this model will ultimately be evaluated against a simpler baseline model rooted in more classical approaches.
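To make the modelling approach concrete, the following is a minimal illustrative sketch (not the thesis implementation) of a rotation-equivariant convolutional layer in PyTorch: a single learned filter bank is applied at the four 90-degree rotations of the cyclic group C4, so that rotating the input rotates the feature maps and cyclically permutes the rotation channels. Ecker et al. (2018), cited below, use a finer set of rotation angles obtained by interpolating the filters; all class and variable names here are illustrative assumptions.

# Sketch only: C4 (0/90/180/270 degree) rotation-equivariant convolution.
# Names and hyperparameters are illustrative, not taken from the thesis.
import torch
import torch.nn as nn
import torch.nn.functional as F


class C4EquivariantConv2d(nn.Module):
    """Applies one shared filter bank at four 90-degree rotations.
    Rotating the input by 90 degrees rotates the output maps and
    cyclically shifts the rotation channels, rather than changing them."""

    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # single learned filter bank, shared across all rotations
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01
        )

    def forward(self, x):
        responses = []
        for k in range(4):  # rotate the shared filters by k * 90 degrees
            w = torch.rot90(self.weight, k, dims=(2, 3))
            responses.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        # stack rotated copies as extra feature channels: (batch, 4*out, H, W)
        return torch.cat(responses, dim=1)


if __name__ == "__main__":
    core = C4EquivariantConv2d(in_channels=1, out_channels=8, kernel_size=13)
    images = torch.randn(2, 1, 64, 64)  # batch of grayscale stimuli
    print(core(images).shape)           # torch.Size([2, 32, 64, 64])

In the actual model, such an equivariant core would be fitted to recorded V1 responses and read out per neuron according to its position and orientation preference, as described above.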
References
1. Butts, D. A. (2019). Data-Driven Approaches to Understanding Visual Neuron Activity. Annual Review of Vision Science, 5, 20.1–20.27.
2. Cadena, S. A., Denfield, G. H., Walker, E. Y., Gatys, L. A., Tolias, A. S., Bethge, M., & Ecker, A. S. (2017). Deep convolutional models improve predictions of macaque V1 responses to natural images. bioRxiv, 201764. https://doi.org/10.1101/201764
3. Antolík, J., Monier, C., Frégnac, Y., & Davison, A. P. (2019). A comprehensive data-driven model of cat primary visual cortex. bioRxiv.
4. Ecker, A. S., Sinz, F. H., Froudarakis, E., Fahey, P. G., Cadena, S. A., Walker, E. Y., Cobos, E., Reimer, J., Tolias, A. S., & Bethge, M. (2018). A rotation-equivariant convolutional neural network model of primary visual cortex. arXiv.
 