Thesis (Selection of subject) (version: 368)
Thesis details
Modern evolutionary strategies for reinforcement learning problems
Thesis title in Czech: Moderní evoluční strategie pro problémy zpětnovazebního učení
Thesis title in English: Modern evolutionary strategies for reinforcement learning problems
Key words: evoluční strategie|zpětnovazební učení|hledání novosti|neuroevoluce
English key words: evolutionary strategies|reinforcement learning|novelty search|neuroevolution
Academic year of topic announcement: 2020/2021
Thesis type: Bachelor's thesis
Thesis language: English
Department: Department of Theoretical Computer Science and Mathematical Logic (32-KTIML)
Supervisor: Mgr. Roman Neruda, CSc.
Author: hidden - assigned and confirmed by the Study Dept.
Date of registration: 12.07.2021
Date of assignment: 12.07.2021
Confirmed by Study dept. on: 27.08.2021
Date and time of defence: 10.09.2021 09:00
Date of electronic submission: 22.07.2021
Date of submission of printed version: 22.07.2021
Date of proceeded defence: 10.09.2021
Opponents: Mgr. Klára Pešková, Ph.D.
 
 
 
Guidelines
Evolutionary strategies (ES) represent a traditional branch of evolutionary computing targeted at continuous optimization problems. Recently, they have been used successfully in the context of neuroevolution applied to reinforcement learning. The goal of the thesis is to explore modern ES techniques for reinforcement learning problems. The student will study and implement algorithms based on existing approaches, including novelty search, CMA-ES and natural evolution strategies. The design of the algorithms will be verified in a set of practical experiments on benchmark reinforcement learning tasks.
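To illustrate the family of methods the guidelines refer to, the following is a minimal sketch of the basic OpenAI-ES / natural evolution strategies update from [5]: perturb the policy parameters with Gaussian noise, evaluate each perturbation, and step along the return-weighted average of the perturbations. The function name, hyperparameters and the toy fitness function are illustrative assumptions, not the implementation developed in the thesis.

    import numpy as np

    def openai_es_step(theta, fitness, pop_size=50, sigma=0.1, lr=0.01, rng=None):
        """One OpenAI-ES update: a finite-difference estimate of the
        gradient of the expected return, as described in [5]."""
        if rng is None:
            rng = np.random.default_rng()
        eps = rng.standard_normal((pop_size, theta.size))          # Gaussian perturbations
        returns = np.array([fitness(theta + sigma * e) for e in eps])
        z = (returns - returns.mean()) / (returns.std() + 1e-8)    # normalized returns
        grad = eps.T @ z / (pop_size * sigma)                      # gradient estimate
        return theta + lr * grad

    # Toy usage: maximize a quadratic "return" instead of an RL episode return.
    theta = np.zeros(5)
    f = lambda w: -np.sum((w - 1.0) ** 2)
    for _ in range(300):
        theta = openai_es_step(theta, f)
    print(theta)  # parameters drift toward the optimum at 1.0

In an actual reinforcement learning setting, fitness would run one or more episodes of the environment with a policy network parameterized by theta and return the cumulative reward.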
References
[1] Richard S. Sutton, Andrew G. Barto. Reinforcement Learning: An Introduction (2nd ed.). The MIT Press, 2018.

[2] Günter Rudolph. Evolutionary Strategies. In: Handbook of Natural Computing. (Grzegorz Rozenberg, Thomas Bäck, Joost N. Kok, eds), Springer, 673-698, 2012.

[3] P. A. Vikhar. Evolutionary algorithms: A critical review and its future prospects. In: 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC). 261-265. 2016.

[4] Joel Lehman, Kenneth O. Stanley. Abandoning Objectives: Evolution Through the Search for Novelty Alone. Evolutionary Computation, 19, 189-223, 2011.

[5] Tim Salimans et al. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. 2017. arXiv: 1703.03864 [stat.ML].

[6] Edoardo Conti et al. Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents. 2018. arXiv: 1712.06560 [cs.AI].
Preliminary scope of work
Evoluční strategie reprezentují jeden z přístupů k řešení problémů zpětnovazebného učení. V této práci jsou zkoumány dvě moderní evoluční strategie, jmenovitě OpenAI-ES a NS-ES (a její rozšíření) využívající hledání novosti. Chování těchto algoritmů je zkoumáno na dvou benchmark prostředích pro zpětnovazebné učení, Cartpole-swingup a Slimevolley. V Cartpole-swingup je většina přístupů schopna úspěšně vyřešit problém, zatímco chování ve Slimevolley je volatilní a citlivé na úvodní seed v porovnání s tradičními metodami jako CMA-ES.
Preliminary scope of work in English
Evolutionary strategies represent one of the approaches to solving reinforcement learning tasks. This thesis explores two modern evolutionary strategies, namely OpenAI-ES and NS-ES (and its extensions), the latter utilizing novelty search. The performance of these algorithms is studied in two benchmark reinforcement learning environments, Cartpole-swingup and Slimevolley. On Cartpole-swingup most approaches are able to solve the problem successfully, while the performance on the Slimevolley task is volatile and sensitive to the initial seed, compared to more traditional approaches such as CMA-ES.
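The novelty search underlying NS-ES [4, 6] replaces (or complements) the reward with a novelty score: the distance of an agent's behaviour characterisation to its nearest neighbours in an archive of previously seen behaviours. The sketch below assumes a Euclidean behaviour descriptor; the function name, the choice of descriptor and the value of k are illustrative assumptions, not the thesis implementation.

    import numpy as np

    def novelty_score(behavior, archive, k=10):
        """Mean Euclidean distance to the k nearest neighbours in the
        behaviour archive; higher means more novel."""
        if len(archive) == 0:
            return 0.0
        dists = np.linalg.norm(np.asarray(archive) - behavior, axis=1)
        kth = min(k, len(dists))
        return float(np.sort(dists)[:kth].mean())

    # Toy usage: behaviours are final (x, y) positions of an agent.
    archive = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    print(novelty_score(np.array([5.0, 5.0]), archive))  # far from archive -> high novelty
    print(novelty_score(np.array([0.1, 0.1]), archive))  # close to archive -> low novelty

In NS-ES this score takes the place of the episode return in the ES update, driving the population toward behaviours not yet present in the archive.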
 