This is the website for the Deep Learning course of Master 1 AI.
You can contact me at caio.corro@upsud.fr, either in French or in English, with a subject line starting with [OPT11]. Please do not worry about typos or about not being overly formal (just treat your instructors and colleagues with the same respect you would like to receive).
There should be 2 or 3 students per group. Deadlines are hard and you cannot change groups.
Lab exercises comprise two parts:
So the question is: what do you need to write in the report? There are no specific instructions! Think of the report as an essay: its objective is to convince us that you understand the theoretical foundations of the model and how to implement it in practice. Use your own words and notations; try to digest the course and the lab exercise and explain them to us. Do not write handwavy explanations. You should probably use LaTeX for this.
Content:
Length: 3–6 pages if double column, a little longer otherwise, but these are not hard limits: you can do less, you can do a little more. Just don’t write too much; go to the essential. Use formal notations with the minimum writing needed to be understandable. Just convince me that you know what you are talking about. :)
Scoring: as long as you do the work seriously, comment the code you write and submit a nice report, we will give you a good grade. Do not worry if you did not manage to do everything or if you did not understand something. Explain in the report what you did not manage to do, so that we can see you made an effort.
Frequently asked questions
You only need to submit the code of the second lab exercise, but to complete the second one you must first complete the first one…
Hard deadline: April 7
Send me the notebook and the report (PDF) by email: one email per group, with the names written explicitly in the email, the notebook and the report (-3 points if you don’t do that), and with the title “Deep learning - lab exercise 2” (-2 points if you don’t do that): caio.corro [at] upsud.fr
This lab is not graded and must not be submitted. However, you need to work through it carefully if you want to succeed in the project.
Hard deadline: April 29
You need to submit one report (PDF format, 3–6 pages) and one notebook. No zip/rar or other compressed files. The grade will be mainly based on the quality of the report. Expectations are:
The VAE that we will develop is based on the following generative story:
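Written out (using f to denote the decoder network, a notation chosen here for illustration):

```latex
z \sim \mathcal{N}(0, I_n), \qquad
x \mid z \sim \mathcal{N}\!\left(f(z; \theta),\; \sigma^2 I\right)
```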
where the latent representations z take values in R^n. The prior distribution p(z) is a multivariate Gaussian where each coordinate is independent. We fix the mean and variance of each coordinate to 0 and 1, respectively. The conditional distribution p(x | z ; \theta) is parameterized by a neural network: it is the decoder! The generated pixels x are independent Gaussians with a fixed variance.
Note: this kind of VAE will be quite bad at generating MNIST pictures. Therefore, when you do your experiments, you should both generate pictures and display the mean parameters of the output distributions. This is a well-known problem of VAEs; you can try to play with the network architecture and the parameters to improve generation.
Although the decoder is similar to the autoencoder decoder, the encoder is different: it must return two tensors, the tensor of means and the tensor of variances. As the variance of a Gaussian distribution is constrained to be strictly positive, it is usual to instead return the log-variance (i.e. the log of the squared standard deviation), which is unconstrained. If you exponentiate the log-variance, you get the variance, which will be strictly positive as the exponential function only returns positive values.
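A minimal sketch of such a two-headed encoder in PyTorch (layer sizes and names are assumptions, not part of the lab statement):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Sketch of a VAE encoder: one shared hidden layer, then two output
    # heads, one for the means and one for the (unconstrained) log-variances.
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mean_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = self.hidden(x)
        # exp(logvar) is always strictly positive, so the variance
        # constraint is satisfied by construction.
        return self.mean_head(h), self.logvar_head(h)
```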
Similarly to the autoencoder, there are several hyperparameters you can try to tune. However, for the VAE I strongly advise you to:


To compute the training loss, you must compute two terms:
For the reconstruction loss, you can use the mean square error loss.
To sample values, you can use the reparameterization trick as follows:
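A sketch of the reparameterization trick in PyTorch (the function name is an assumption): instead of sampling z directly, sample a standard Gaussian noise and shift/scale it, so the operation stays differentiable with respect to the mean and log-variance.

```python
import torch

def reparameterize(mean, logvar):
    # z = mean + sigma * eps, with eps ~ N(0, I).
    std = torch.exp(0.5 * logvar)  # sigma = exp(logvar / 2)
    eps = torch.randn_like(std)    # standard Gaussian noise
    return mean + std * eps
```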


The formula of the KL divergence with the prior is as follows:
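For a diagonal Gaussian q(z | x) with means mu_i and variances sigma_i^2, the standard closed form against the N(0, I) prior is:

```latex
\mathrm{KL}\big(q(z \mid x) \,\|\, p(z)\big)
  = -\frac{1}{2} \sum_{i=1}^{n} \left(1 + \log \sigma_i^2 - \mu_i^2 - \sigma_i^2\right)
```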


WARNING: you must carefully check yourself that you average over the elements of your batches correctly, so that both loss functions have the correct “proportion”. You may need to do something like torch.sum(…, dim=1) before calling torch.mean(…) for the KL divergence: think about it and explain it in your report.
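As an illustration of that summing/averaging pattern, here is one possible sketch of the KL term (whether this is the right scaling for your setup is exactly what you should work out and justify in the report):

```python
import torch

def kl_loss(mean, logvar):
    # Sum the closed-form KL over the latent dimensions of each example
    # (dim=1), then average over the batch, so the result is a
    # per-example quantity, comparable in scale to a per-example
    # reconstruction loss.
    kl_per_example = -0.5 * torch.sum(1 + logvar - mean.pow(2) - logvar.exp(), dim=1)
    return torch.mean(kl_per_example)
```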


It is quite useful to visualize the latent space, both for the autoencoder and the variational autoencoder. You can visualize it either for the training data or for the dev data. Note that if you want to visualize a latent space whose dimension is greater than two (useful for the first part!), you can project it into 2 dimensions using PCA (it’s already implemented in scikit-learn!)
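A sketch of that projection with scikit-learn (here `latent` is a random stand-in for the encoder outputs you would actually collect):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for encoder outputs: one row per example, one column per
# latent dimension. In the lab, build this array from your encoder.
latent = np.random.randn(100, 16)

# Project to 2 dimensions with PCA.
proj = PCA(n_components=2).fit_transform(latent)

# proj[:, 0] and proj[:, 1] can now be scatter-plotted (e.g. with
# matplotlib), coloring each point by its digit label.
```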

