{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Students:\n",
"\n",
"- ...\n",
"- ...\n",
"- ...\n",
"\n",
"# Practical classes\n",
"\n",
"\n",
"All exercises will be in Python. It is important that you keep track of your exercises and structure your code correctly (e.g. create functions that you can re-use later).\n",
"\n",
"We will use Jupyter notebooks (formerly known as IPython). You can read the following courses for help:\n",
"* Python and numpy: http://cs231n.github.io/python-numpy-tutorial/\n",
"* Jupyter / IPython : http://cs231n.github.io/ipython-tutorial/\n",
"\n",
"\n",
"# Neural network: first experiments with a linear model\n",
"\n",
"In this first lab exercise we will code a neural network using numpy, without a neural network library.\n",
"Next week, the lab exercise will be to extend this program with hidden layers and activation functions.\n",
"\n",
"The task is digit recognition: the neural network has to predict which digit in $\\{0, \\dots, 9\\}$ is written in the input picture. We will use the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, a standard benchmark in machine learning.\n",
"\n",
"The model is a simple linear classifier $o = \\operatorname{softmax}(Wx + b)$ where:\n",
"* $x$ is an input image that is represented as a column vector, each value being the \"color\" of a pixel\n",
"* $W$ and $b$ are the parameters of the classifier\n",
"* $\\operatorname{softmax}$ transforms the output weights (logits) into probabilities\n",
"* $o$ is a column vector that contains the probability of each category\n",
"\n",
"We will train this model via stochastic gradient descent by minimizing the negative log-likelihood of the data:\n",
"$$\n",
" \\hat{W}, \\hat{b} = \\operatorname{argmin}_{W, b} \\sum_{x, y} - \\log p(y | x)\n",
"$$\n",
"Although this is a linear model, it classifies raw data without any manual feature extraction step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import libs that we will use\n",
"import os\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import math\n",
"\n",
"# To load the data we will use the script of Gaetan Marceau Caron\n",
"# You can download it from the course website and move it to the same directory as this ipynb file\n",
"import dataset_loader\n",
"\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Download mnist dataset \n",
"if(\"mnist.pkl.gz\" not in os.listdir(\".\")):\n",
"    # this link does not work any more:\n",
"    # search online for the file \"mnist.pkl.gz\"\n",
"    # and download it manually\n",
" !wget http://deeplearning.net/data/mnist/mnist.pkl.gz\n",
"\n",
"# if you have it somewhere else, you can comment the lines above\n",
"# and overwrite the path below\n",
"mnist_path = \"./mnist.pkl.gz\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load the 3 splits\n",
"train_data, dev_data, test_data = dataset_loader.load_mnist(mnist_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each dataset is a list with two elements:\n",
"* data[0] contains images\n",
"* data[1] contains labels\n",
"\n",
"Data is stored as numpy.ndarray objects. You can use data[0][i] to retrieve the i-th image and data[1][i] to retrieve its label."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(type(train_data))\n",
"print(type(train_data[0]))\n",
"print(type(train_data[1]))\n",
"print(type(train_data[0][0]))\n",
"print(type(train_data[1][0]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"index = 900\n",
"label = train_data[1][index]\n",
"picture = train_data[0][index]\n",
"\n",
"print(\"label: %i\" % label)\n",
"plt.imshow(picture.reshape(28,28), cmap='Greys')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question:** What are the characteristics of training data? (number of samples, dimension of input, number of labels)\n",
"\n",
"The documentation of the ndarray class is available here: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def getDimDataset(data):\n",
" n_training = data[0].shape[0]\n",
" n_feature = data[0].shape[1]\n",
"    n_label = len(np.unique(data[1]))\n",
" return n_training, n_feature, n_label"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"getDimDataset(train_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Building functions\n",
"\n",
"We now need to build functions that are required for the neural network.\n",
"$$\n",
" o = \\operatorname{softmax}(Wx + b) \\\\\n",
" L(x, y) = -\\log p(y | x) = -\\log o[y]\n",
"$$\n",
"\n",
"Note that in numpy, the @ operator performs matrix multiplication, while * performs element-wise multiplication.\n",
"The documentation for linear algebra in numpy is available here: https://docs.scipy.org/doc/numpy/reference/routines.linalg.html\n",
"\n",
"The first operation is the affine transformation $v = Wx + b$.\n",
"To compute the gradient, it is often convenient to write the forward pass as $v[i] = b[i] + \\sum_j W[i, j] x[j]$."
]
},
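{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of the note above, the following toy example (made-up values, unrelated to the exercise data) contrasts the two operators:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"A = np.asarray([[1., 2.], [3., 4.]])\n",
"v = np.asarray([1., 1.])\n",
"print(A @ v)  # matrix-vector product: [3. 7.]\n",
"print(A * A)  # element-wise product of A with itself"
]
},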
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Input:\n",
"# - W: projection matrix\n",
"# - b: bias\n",
"# - x: input features\n",
"# Output:\n",
"# - vector\n",
"def affine_transform(W, b, x):\n",
" v = # TODO\n",
" return v\n",
"\n",
"# Input:\n",
"# - W: projection matrix\n",
"# - b: bias\n",
"# - x: input features\n",
"# - g: incoming gradient\n",
"# Output:\n",
"# - g_W: gradient wrt W\n",
"# - g_b: gradient wrt b\n",
"def backward_affine_transform(W, b, x, g):\n",
" g_W = # TODO\n",
" g_b = # TODO\n",
" return g_W, g_b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell is a (too simple) test of affine_transform and backward_affine_transform.\n",
"It should run without error if your implementation is correct."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"W = np.asarray([[ 0.63024213, 0.53679375, -0.92079597],\n",
" [-0.1155045, 0.62780356, -0.67961305],\n",
" [ 0.08465286, -0.06561815, -0.39778322],\n",
" [ 0.8242268, 0.58907262, -0.52208052],\n",
" [-0.43894227, -0.56993247, 0.09520727]])\n",
"b = np.asarray([ 0.42706842, 0.69636598, -0.85611933, -0.08682553, 0.83160079])\n",
"x = np.asarray([-0.32809223, -0.54751413, 0.81949319])\n",
"\n",
"o_gold = np.asarray([-0.82819732, -0.16640748, -1.17394705, -1.10761496, 1.36568213])\n",
"g = np.asarray([-0.08938868, 0.44083873, -0.2260743, -0.96196726, -0.53428805])\n",
"g_W_gold = np.asarray([[ 0.02932773, 0.04894156, -0.07325341],\n",
" [-0.14463576, -0.24136543, 0.36126434],\n",
" [ 0.07417322, 0.12377887, -0.18526635],\n",
" [ 0.31561399, 0.52669067, -0.78832562],\n",
" [ 0.17529576, 0.29253025, -0.43784542]])\n",
"g_b_gold = np.asarray([-0.08938868, 0.44083873, -0.2260743, -0.96196726, -0.53428805])\n",
"\n",
"\n",
"# quick test of the forward pass\n",
"o = affine_transform(W, b, x)\n",
"if o.shape != o_gold.shape:\n",
" raise RuntimeError(\"Unexpected output dimension: got %s, expected %s\" % (str(o.shape), str(o_gold.shape)))\n",
"if not np.allclose(o, o_gold):\n",
" raise RuntimeError(\"Output of the affine_transform function is incorrect\")\n",
" \n",
"# quick test of the backward pass\n",
"g_W, g_b = backward_affine_transform(W, b, x, g)\n",
"if g_W.shape != g_W_gold.shape:\n",
" raise RuntimeError(\"Unexpected gradient dimension for W: got %s, expected %s\" % (str(g_W.shape), str(g_W_gold.shape)))\n",
"if g_b.shape != g_b_gold.shape:\n",
" raise RuntimeError(\"Unexpected gradient dimension for b: got %s, expected %s\" % (str(g_b.shape), str(g_b_gold.shape)))\n",
"if not np.allclose(g_W, g_W_gold):\n",
" raise RuntimeError(\"Gradient of W is incorrect\")\n",
"if not np.allclose(g_b, g_b_gold):\n",
" raise RuntimeError(\"Gradient of b is incorrect\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The softmax function:\n",
"$$\n",
" o = \\operatorname{softmax}(w)\n",
"$$\n",
"where $w$ is a vector of logits in $\\mathbb R$ and $o$ a vector of probabilities such that:\n",
"$$\n",
" o[i] = \\frac{\\exp(w[i])}{\\sum_j \\exp(w[j])}\n",
"$$\n",
"We do not need to implement the backward pass for this experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Input:\n",
"# - x: vector of logits\n",
"# Output\n",
"# - vector of probabilities\n",
"def softmax(x):\n",
" # TODO"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**WARNING:** is your implementation numerically stable?\n",
"\n",
"The $\\exp$ function can easily overflow, i.e. produce numbers that are too large to be represented as floating point values.\n",
"Therefore, it is standard to rely on the exp-normalize trick to improve numerical stability: https://timvieira.github.io/blog/post/2014/02/11/exp-normalize-trick/"
]
},
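{
"cell_type": "markdown",
"metadata": {},
"source": [
"A toy sketch of the exp-normalize trick (illustrative values only): subtracting the maximum logit before exponentiating keeps every exponent non-positive, so nothing overflows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"w = np.asarray([1000., 2.])\n",
"# naive np.exp(w) overflows on the first entry, so normalization yields nan\n",
"shifted = np.exp(w - np.max(w))  # every exponent is <= 0, no overflow\n",
"print(shifted / shifted.sum())  # [1. 0.]"
]
},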
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Example for testing the numerical stability of softmax\n",
"# It should return [1., 0., 0.], not [nan, 0., 0.]\n",
"z = np.asarray([1000000., 1., 100.])\n",
"print(softmax(z))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Question**: from the result of the cell above, what can you say about the softmax output, even when it is stable?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A (too simple) test for the softmax function\n",
"x = np.asarray([0.92424884, -0.92381088, -0.74666024, -0.87705478, -0.54797015])\n",
"y_gold = np.asarray([0.57467369, 0.09053556, 0.10808233, 0.09486917, 0.13183925])\n",
"\n",
"y = softmax(x)\n",
"if not np.allclose(y, y_gold):\n",
" raise RuntimeError(\"Output of the softmax function is incorrect\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we build the loss function and its gradient for training the network.\n",
"\n",
"The loss function is the negative log-likelihood defined as:\n",
"$$\n",
" \\mathcal L(x, gold) = -\\log \\frac{\\exp(x[gold])}{\\sum_j \\exp(x[j])} = -x[gold] + \\log \\sum_j \\exp(x[j])\n",
"$$\n",
"This function is also called the cross-entropy loss (in PyTorch, different names are used depending on whether the inputs are probabilities or raw logits).\n",
"\n",
"As with the softmax, we have to rely on the log-sum-exp trick to stabilize the computation: https://timvieira.github.io/blog/post/2014/02/11/exp-normalize-trick/"
]
},
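{
"cell_type": "markdown",
"metadata": {},
"source": [
"A toy sketch of the log-sum-exp identity $\\log \\sum_j \\exp(w[j]) = m + \\log \\sum_j \\exp(w[j] - m)$ with $m = \\max_j w[j]$ (illustrative values only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"w = np.asarray([1000., 1., 100.])\n",
"# naive np.log(np.sum(np.exp(w))) overflows and returns inf\n",
"m = np.max(w)\n",
"print(m + np.log(np.sum(np.exp(w - m))))  # 1000.0"
]
},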
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Input:\n",
"# - x: vector of logits\n",
"# - gold: index of the gold class\n",
"# Output:\n",
"# - scalar equal to -log(softmax(x)[gold])\n",
"def nll(x, gold):\n",
" # TODO\n",
"\n",
"# Input:\n",
"# - x: vector of logits\n",
"# - gold: index of the gold class\n",
"# - g: incoming gradient (scalar)\n",
"# Output:\n",
"# - gradient wrt x\n",
"def backward_nll(x, gold, g):\n",
" g_x = # TODO\n",
" return g_x"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# test\n",
"x = np.asarray([-0.13590009, -0.83649656, 0.03130881, 0.42559402, 0.08488182])\n",
"y_gold = 1.5695014420179738\n",
"g_gold = np.asarray([ 0.17609875, 0.08739591, -0.79185107, 0.30875221, 0.2196042 ])\n",
"\n",
"y = nll(x, 2)\n",
"g = backward_nll(x, 2, 1.)\n",
"\n",
"if not np.allclose(y, y_gold):\n",
" raise RuntimeError(\"Output is incorrect\")\n",
"\n",
"if g.shape != g_gold.shape:\n",
" raise RuntimeError(\"Unexpected gradient dimension: got %s, expected %s\" % (str(g.shape), str(g_gold.shape)))\n",
"if not np.allclose(g, g_gold):\n",
" raise RuntimeError(\"Gradient is incorrect\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code tests the implementation of the gradients using finite-difference approximation, see: https://timvieira.github.io/blog/post/2017/04/21/how-to-test-gradient-implementations/\n",
"\n",
"Your implementation should pass this test."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# this is a Python re-implementation of the test from the Dynet library\n",
"# https://github.com/clab/dynet/blob/master/dynet/grad-check.cc\n",
"\n",
"def is_almost_equal(grad, computed_grad):\n",
" #print(grad, computed_grad)\n",
" f = abs(grad - computed_grad)\n",
" m = max(abs(grad), abs(computed_grad))\n",
"\n",
" if f > 0.01 and m > 0.:\n",
" f /= m\n",
"\n",
" if f > 0.01 or math.isnan(f):\n",
" return False\n",
" else:\n",
" return True\n",
"\n",
"def check_gradient(function, weights, true_grad, alpha = 1e-3):\n",
"    # because the input can be of any dimension,\n",
"    # we build a flat view of the underlying data with reshape(-1),\n",
"    # so that we can access any element of the tensor as an element\n",
"    # of a single-dimensional array\n",
" weights_view = weights.reshape(-1)\n",
" true_grad_view = true_grad.reshape(-1)\n",
" for i in range(weights_view.shape[0]):\n",
" old = weights_view[i]\n",
"\n",
" weights_view[i] = old - alpha\n",
" value_left = function(weights).reshape(-1)\n",
"\n",
" weights_view[i] = old + alpha\n",
" value_right = function(weights).reshape(-1)\n",
"\n",
" weights_view[i] = old\n",
" grad = (value_right - value_left) / (2. * alpha)\n",
"\n",
" if not is_almost_equal(grad, true_grad_view[i]):\n",
" return False\n",
"\n",
" return True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test the affine transformation\n",
"\n",
"x = np.random.uniform(-1, 1, (5,))\n",
"W = np.random.uniform(-1, 1, (3, 5))\n",
"b = np.random.uniform(-1, 1, (3,))\n",
"\n",
"for i in range(3):\n",
" y = affine_transform(W, b, x)\n",
" g = np.zeros_like(y)\n",
" g[i] = 1.\n",
" g_W, _ = backward_affine_transform(W, b, x, g)\n",
" print(check_gradient(lambda W: affine_transform(W, b, x)[i], W, g_W))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# test the negative log-likelihood loss\n",
"\n",
"x = np.random.uniform(-1, 1, (5,))\n",
"\n",
"for gold in range(5):\n",
" y = nll(x, gold)\n",
" g_y = backward_nll(x, gold, 1.)\n",
"\n",
" print(check_gradient(lambda x: nll(x, gold), x, g_y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 3. Parameter initialization\n",
"\n",
"We are now going to build the function that will be used to initialize the parameters of the neural network before training.\n",
"Note that for parameter initialization you must use **in-place** operations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# create a random ndarray\n",
"a = np.random.uniform(-1, 1, (5,))\n",
"\n",
"# this does not change the data of the ndarray created above!\n",
"# it creates a new ndarray and replaces the reference stored in a\n",
"a = np.zeros((5, ))\n",
"\n",
"# this will change the underlying data of the ndarray that a points to\n",
"a[:] = 0\n",
"\n",
"# similarly, this creates a new array and changes the object that a refers to\n",
"a = a + 1\n",
"\n",
"# while this changes the underlying data of a\n",
"a += 1"
]
},
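{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to check whether an operation is in-place is to compare object identities with id() (a simple sanity check, not required for the exercise):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.zeros((5,))\n",
"obj = id(a)\n",
"a += 1  # in-place: a is still the same ndarray\n",
"print(id(a) == obj)  # True\n",
"a = a + 1  # creates a new ndarray and rebinds a\n",
"print(id(a) == obj)  # False"
]
},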
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For an affine transformation, it is common to:\n",
"* initialize the bias to 0\n",
"* initialize the projection matrix with Glorot initialization (also known as Xavier initialization)\n",
"\n",
"The formula for Glorot initialization can be found in equation 16 (page 5) of the original paper: http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf"
]
},
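{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of one common reading of equation 16 (double-check against the paper before reusing it): Glorot initialization samples from $U(-a, a)$ with $a = \\sqrt{6 / (\\text{fan}_{in} + \\text{fan}_{out})}$. On toy dimensions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fan_out, fan_in = 3, 5  # toy dimensions, not the MNIST ones\n",
"limit = np.sqrt(6. / (fan_in + fan_out))\n",
"M = np.random.uniform(-limit, limit, (fan_out, fan_in))\n",
"print(np.abs(M).max() <= limit)  # True\n",
"# remember that your glorot_init must write into W in-place"
]
},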
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def zero_init(b):\n",
" b[:] = 0.\n",
"\n",
"def glorot_init(W):\n",
" # TODO"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 4. Building and training the neural network\n",
"\n",
"In our simple example, creating the neural network is simply instantiating the parameters $W$ and $b$.\n",
"They must be ndarray objects with the correct dimensions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def create_parameters(dim_input, dim_output):\n",
" W = # TODO\n",
" b = # TODO\n",
" \n",
" return W, b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The recent success of deep learning is (partly) due to the ability to train very big neural networks.\n",
"However, researchers became interested in building small neural networks to improve computational efficiency and memory usage.\n",
"Therefore, we often want to compare neural networks by their number of parameters, i.e. the size of the memory required to store the parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def print_n_parameters(W, b):\n",
" n = # TODO\n",
" print(\"Number of parameters: %i\" % (n))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now create the neural network and print its number of parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dim_input = # TODO\n",
"dim_output = # TODO\n",
"W, b = create_parameters(dim_input, dim_output)\n",
"print_n_parameters(W, b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, the training loop!\n",
"\n",
"The training loop should be structured as follows:\n",
"* we do **epochs** over the data, i.e. one epoch is one loop over the dataset\n",
"* at each epoch, we first loop over the data and update the network parameters with respect to the loss gradient\n",
"* at the end of each epoch, we evaluate the network on the dev dataset\n",
"* after all epochs are done, we evaluate our network on the test dataset and compare its performance with the performance on dev\n",
"\n",
"During training, it is useful to print the following information:\n",
"* the mean loss over the epoch: it should be decreasing!\n",
"* the accuracy on the dev set: it should be increasing!\n",
"* the accuracy on the train set: it should be increasing!\n",
"\n",
"If you observe a decreasing loss (and increasing accuracy on the training data) but decreasing accuracy on the dev data, your network is overfitting!\n",
"\n",
"Once you have built **and tested** this simple training loop, you should introduce the following improvements:\n",
"* instead of evaluating on dev only after each full pass over the training data, you can evaluate on dev n times per epoch\n",
"* shuffle the training data before each epoch\n",
"* instead of keeping only the parameters from the last epoch, keep a copy of the parameters that achieved the best dev score during training and use those for the test evaluation\n",
"* learning rate decay: if you do not observe improvement on dev, you can try to reduce the step size\n",
"\n",
"After you have conducted (hopefully successful) experiments, you should write a report with your results."
]
},
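{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the shuffling improvement, one common sketch is to draw a single random permutation and apply it to both images and labels so the pairs stay aligned (toy data below, not MNIST):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"images = np.arange(12).reshape(6, 2)  # toy \"images\": row i is [2i, 2i+1]\n",
"labels = np.arange(6)                 # toy labels\n",
"perm = np.random.permutation(len(labels))\n",
"images, labels = images[perm], labels[perm]\n",
"print(np.all(images[:, 0] == 2 * labels))  # True: pairs are still aligned"
]
},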
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# before training, we initialize the parameters of the network\n",
"zero_init(b)\n",
"glorot_init(W)\n",
"\n",
"n_epochs = 5 # number of epochs\n",
"step = 0.01 # step size for gradient updates\n",
"\n",
"for epoch in range(n_epochs):\n",
" # TODO\n",
" # ...\n",
" \n",
"# Test evaluation\n",
"# TODO"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}