PV021 Neural Networks: Exam Manual

This manual specifies the knowledge demanded at the PV021 exam. Please keep in mind that the knowledge described below is mandatory even for the E grade. Missing a single part automatically means F.

You are supposed to know everything from the course. However, in some parts you have to know all the details of all formulae, all proofs, etc. Here they are:

- Slides 17 - 54: You need to know all definitions formally, using the mathematical notation. You need to be able to explain and demonstrate the geometric interpretation of a neuron. Keep in mind that I may demand complete proofs (e.g. slide 43), or at least the basic idea (slide 45). You do not need to memorize the exact numbers from slide 52, but you should know the claims about computability.
- Slides 76 - 99: Everything in great detail, including the material presented only on the greenboard (plus the additional material in IS). You need to be able to provide all mathematical details, all proofs, and an understanding of fundamental notions such as maximum likelihood.
- Slides 101 - 113: Everything in great detail, including all details of all proofs (even those only on the greenboard). You also need to be able to explain what the formulae mean.
- Slides 115 - 162: All details of all observations and methods, except:
  - Slide 126: Just know roughly what the theorem says.
  - Slide 134: You do not have to memorize all the schedules; just understand that there is scheduling and be able to demonstrate one schedule.
  - Slide 143: You do not need to memorize 1.7159, but you have to know why the number is what it is.
  - Slide 149: You do not have to memorize all the ReLU variants; just know one.
  - Slides 150, 151: No need to memorize; it is fine just to know them.
  Let me stress that, apart from the above exceptions, you need to have detailed knowledge, including the mathematical formulae (e.g. for momentum, AdaGrad, etc.) and an intuitive understanding.
- Slides 203 - 215: Everything in great detail, including all the material on the greenboard.
  You may be asked to derive the backpropagation algorithm for CNN even though it did not explicitly appear in the lecture (it is similar to the derivation for MLP). It also helps to know the intuition for CNNs from the applications (slides 176 - 201).
- Slides 217 - 231: All details, including all the material on the greenboard. You may be asked to derive the backpropagation algorithm for RNN even though it did not explicitly appear in the lecture (it is similar to the derivation for MLP). You have to know how LSTM is defined (formally and intuitively).
- Slides 242 - 265: Everything, including all mathematical details, except:
  - You do not have to memorize slide 244 (Equilibrium). However, you have to know and understand the contents of slide 245.
  - You do not have to memorize slides 258 - 263.
- Slides 278 - 301: Everything, with all mathematical details.

A video of last year's lecture has been uploaded to the study materials.
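For orientation, the momentum and AdaGrad update rules mentioned above are commonly written as follows. This is the generic textbook notation, which may differ from the notation used on the slides:

```latex
% Momentum: add a fraction alpha of the previous update to the current step
\Delta w^{(t)} = -\varepsilon \,\nabla E\big(w^{(t)}\big) + \alpha \,\Delta w^{(t-1)},
\qquad w^{(t+1)} = w^{(t)} + \Delta w^{(t)}

% AdaGrad: accumulate squared gradients per weight and scale the step accordingly
r_i^{(t)} = r_i^{(t-1)} + \left(\frac{\partial E}{\partial w_i}\big(w^{(t)}\big)\right)^{\!2},
\qquad w_i^{(t+1)} = w_i^{(t)} - \frac{\eta}{\sqrt{r_i^{(t)}} + \delta}\,
\frac{\partial E}{\partial w_i}\big(w^{(t)}\big)
```

Here $\varepsilon, \eta$ are learning rates, $\alpha$ is the momentum coefficient, and $\delta$ is a small constant preventing division by zero.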
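As a reminder of the formal LSTM definition asked for above, one standard formulation is the following ($\odot$ is the elementwise product; the slides may use different symbols):

```latex
f_t = \sigma\big(W_f x_t + U_f h_{t-1} + b_f\big) \quad \text{(forget gate)} \\
i_t = \sigma\big(W_i x_t + U_i h_{t-1} + b_i\big) \quad \text{(input gate)} \\
o_t = \sigma\big(W_o x_t + U_o h_{t-1} + b_o\big) \quad \text{(output gate)} \\
c_t = f_t \odot c_{t-1} + i_t \odot \tanh\big(W_c x_t + U_c h_{t-1} + b_c\big)
      \quad \text{(cell state)} \\
h_t = o_t \odot \tanh(c_t) \quad \text{(hidden state / output)}
```

Intuitively, the forget gate decides what to erase from the cell state, the input gate decides what new information to write, and the output gate decides how much of the cell state is exposed as $h_t$.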
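Since the CNN and RNN derivations are said to parallel the MLP case, here is a minimal sketch of MLP backpropagation for a single hidden layer, assuming sigmoid units and squared error. All names (W1, W2, lr) are illustrative, not taken from the slides:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, W2):
    h = sigmoid(W1 @ x)   # hidden-layer activations
    y = sigmoid(W2 @ h)   # output activation
    return h, y

def backprop_step(x, t, W1, W2, lr=0.1):
    """One gradient-descent step on E = 0.5 * ||y - t||^2."""
    h, y = forward(x, W1, W2)
    # Output delta: dE/dy times the sigmoid derivative y * (1 - y)
    delta2 = (y - t) * y * (1 - y)
    # Hidden delta: propagate delta2 back through W2, times h * (1 - h)
    delta1 = (W2.T @ delta2) * h * (1 - h)
    # Weight updates (outer product of delta and the layer's input)
    W2 = W2 - lr * np.outer(delta2, h)
    W1 = W1 - lr * np.outer(delta1, x)
    return W1, W2
```

The same chain-rule pattern (compute a delta per unit, propagate it backwards through the weights, multiply by the activation derivative) carries over to the convolutional and recurrent cases, with weight sharing inducing sums over the positions / time steps where a weight is used.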