SubjectsSubjects(version: 945)
Course, academic year 2023/2024
   Login via CAS
Utilizing large language models - AMLV00081
Title: Využití velkých jazykových modelů
Guaranteed by: Institute of the Czech National Corpus (21-UCNK)
Faculty: Faculty of Arts
Actual: from 2023
Semester: both
Points: 3
E-Credits: 3
Examination process:
Hours per week, examination: 0/2, C [HT]
Capacity: winter:unknown / 10 (10)
summer:unknown / unknown (10)
Min. number of students: unlimited
4EU+: no
Virtual mobility / capacity: no
Key competences:  
State of the course: taught
Language: Czech
Teaching methods: full-time
Teaching methods: full-time
Level:  
Note: course can be enrolled in outside the study plan
enabled for web enrollment
you can enroll for the course in winter and in summer semester
Guarantor: PhDr. Jiří Milička, Ph.D.
Teacher(s): PhDr. Jiří Milička, Ph.D.
Annotation -
Last update: PhDr. Jiří Milička, Ph.D. (26.10.2023)
<br>
At this point, a human in synergy with a machine is still better in the vast majority of creative activities than the machine alone. Therefore, it makes sense to perfect ourselves in this synergy. This seminar will focus on large language models (LLMs), which emerged at the end of the 2010s, gained popularity with the arrival of Chat GPT, and will probably stay with us.<br>
<br>
We will create the structure of the seminar together on the spot. As I don't know what topics will interest you and what tools will be released during the semester, the latent space of possible syllabi is too wide to describe here.<br>
We'll probably start by understanding how language models work in general and how transformers (the architecture they are based on) function, how the base models are further improved, what is finetuning, RLHF, and so on.<br>
<br>
But then we'll plunge into the whirlwind of practical demonstrations of how to work effectively with models.<br>
How to find the right simulacrum and create the right environment for it.<br>
How to make the simulacrum work consistently across a wide range of different tasks.<br>
Anthropomorphization and demonomorphization.<br>
Useful memes.<br>
Gaslighting and other manipulative techniques in "prompt ingeneering".<br>
Prompt injecting, jailbreaking, the Waluigi effect.<br>
Ethics and notkilleveryoneism.<br>
Plugins for ChatGPT.<br>
<br>
Perhaps we will remember this seminar with a slight irony, similar to people who attended seminars like "How to use Google correctly" in 2003, but I kind of hope that we will remember it.<br>
Course completion requirements -
Last update: PhDr. Jiří Milička, Ph.D. (12.08.2023)

Active participation in class, completion of homework (usually this will involve thinking in advance about things we will work on in the next class), and at the end of the semester, the development of a prompt that addresses something interesting and non-trivial.

Literature -
Last update: PhDr. Jiří Milička, Ph.D. (12.08.2023)

Wolfram, S. (2023). What Is ChatGPT Doing … and Why Does It Work? https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

arXive papers:

Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, & Soheil Feizi. (2023). Can AI-Generated Text be Reliably Detected?.

Jennifer Haase, & Paul H. P. Hanel. (2023). Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity.

Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, & Thien Huu Nguyen. (2023). ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, & Yi Zhang. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4.

Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, & Tushar Khot. (2023). Complexity-Based Prompting for Multi-Step Reasoning.

Twitter profiles
@goodside, @gwern, @repligate @anthrupad, @tszzl, @nearcyan, @yacineMTB, @sama, @ilyasut, @ykilcher, @SchmidhuberAI, @fchollet, @ylecun, @MParakhin @geoffreyhinton,@robertskmiles, @DanHendrycks

Entry requirements -
Last update: PhDr. Jiří Milička, Ph.D. (12.08.2023)

The course is intended for students of linguistics and philological disciplines with an interest in linguistics. No specific prerequisites are required.

 
Charles University | Information system of Charles University | http://www.cuni.cz/UKEN-329.html