Tutorial: Configuring a Virtual Instance for Deep Learning with GPUs / by Antonia Antonova

This tutorial will run through setting up an AWS EC2 instance for deep learning from scratch - no AMI or Docker required. The same configuration can be used on Google Cloud's virtual machines as well, but the initialization walkthrough here covers AWS only.

After configuration, your instance will have Anaconda, Python 3.6, Jupyter Notebook, TensorFlow, and Keras running with an NVIDIA graphics driver, CUDA, and cuDNN.

I've run the tutorial on an instance with Ubuntu 16.04, but the configuration should work on other operating systems as well. Take care to use the download links appropriate for your OS.

Scroll down to the bottom if you want to test whether your GPUs are currently being utilized by TensorFlow.


Launch a new EC2 Instance.


Select Ubuntu Server 16.04.


Configure an instance type with GPUs. Currently Amazon's G2, G3, and P2 series offer graphics cards. Check AWS's documentation for how the instance types differ and what they cost. You can also explore AWS's Elastic GPUs service.

Give your instance the full 30 GB of free-tier storage.

Configure a custom security group that opens ports 22, 443, and 8888.


Launch your instance.

Establish access through your terminal window.

We'll start by installing Anaconda and Python 3.6.

Go to Anaconda's Download Page to find the appropriate download link for your system. Here I used the Python 3.6 Linux Installer. Copy the link.

[Type the bolded commands into your instance's terminal window.]

Download the package from the link above.

wget https://repo.continuum.io/archive/Anaconda3-4.4.0-Linux-x86_64.sh 

Install the package.

bash Anaconda3-4.4.0-Linux-x86_64.sh

Type 'yes' and press Return at every prompt - most importantly:

Do you wish the installer to prepend the Anaconda3 install location to PATH in your /home/ubuntu/.bashrc ? [yes|no] [no] >>> yes

Source your .bashrc file so the change takes effect.

source ~/.bashrc

Anaconda and Python 3.6 are installed!

Below, we launch ipython and change Jupyter Notebook's configuration file so that we can easily access it from our browser.

ipython

from IPython.lib import passwd

passwd()

Copy output somewhere safe: 'sha1:98ff0e580111:12798c72623a6eecd54b51c006b1050f0ac1a62d'
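The string passwd() returns has the form sha1:<salt>:<digest> - a SHA-1 hash of your password concatenated with a random salt. A minimal sketch of how such a hash can be generated and verified (the function names here are my own for illustration, not IPython's API):

```python
import hashlib
import random

def make_passwd_hash(password):
    """Build a notebook-style password hash: 'sha1:<salt>:<digest>'."""
    salt = '%012x' % random.getrandbits(48)  # random 12-char hex salt
    digest = hashlib.sha1((password + salt).encode('utf-8')).hexdigest()
    return 'sha1:%s:%s' % (salt, digest)

def check_passwd_hash(password, stored):
    """Verify a password against a stored 'sha1:<salt>:<digest>' string."""
    algo, salt, digest = stored.split(':')
    return hashlib.sha1((password + salt).encode('utf-8')).hexdigest() == digest

hashed = make_passwd_hash('my-notebook-password')
print(check_passwd_hash('my-notebook-password', hashed))  # True
print(check_passwd_hash('wrong-password', hashed))        # False
```

Because only the salted hash goes into the config file, your plaintext password never lands on the instance.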

exit

jupyter notebook --generate-config

mkdir certs

cd certs

sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem

There's no need to actually fill this in - just press Return.

cd ~/.jupyter/

vim jupyter_notebook_config.py

Copy & paste the text below into the config file you just opened. Make sure to replace the 'sha1:...' password below with your own.

c = get_config()
# Kernel config
c.IPKernelApp.pylab = 'inline'  # if you want plotting support always in your notebook
# Notebook config
c.NotebookApp.certfile = u'/home/ubuntu/certs/mycert.pem'  # location of your certificate file
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False  # so that the notebook server does not open a browser by default
c.NotebookApp.password = u'sha1:f6082d64d955:fea94dee291c6c6db74e6a7a4f7c4bf8c834b22f'  # the hashed password we generated above
# It is a good idea to put the server on a known, fixed port
c.NotebookApp.port = 8888

Type :wq and press Return to save and quit the text editor.

Remove the Anaconda installer from your instance.

rm Anaconda3-4.4.0-Linux-x86_64.sh

Congrats! You've configured Jupyter Notebook for easy use. Now when you type 'jupyter notebook' into your instance terminal, it will open at port 8888. You can access it from your browser at https://[instance IP address]:8888/.

Now we're going to install the graphics driver needed to utilize your GPUs.

First check which graphics cards your instance has.

sudo lshw -C display

Look at your GPU information and find the appropriate NVIDIA Driver to download for it here.

Copy the download link.


Download the installer.

wget http://us.download.nvidia.com/XFree86/Linux-x86_64/367.57/NVIDIA-Linux-x86_64-367.57.run

Update apt-get and install a compiler toolchain (gcc and make).

sudo apt-get update
sudo apt-get install gcc make

Install the downloaded package.

sudo sh NVIDIA-Linux-x86_64-367.57.run

Check whether the installation worked. This will show you information on your GPU usage.

nvidia-smi
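If you'd rather check your GPUs programmatically, nvidia-smi can emit machine-readable CSV (e.g. nvidia-smi --query-gpu=name,memory.used --format=csv,noheader). A small sketch that parses one line of that output - the sample line below is illustrative, not real output from this instance:

```python
def parse_gpu_csv(line):
    """Parse one 'name, memory.used' CSV line from nvidia-smi."""
    name, mem = [field.strip() for field in line.split(',')]
    used_mib = int(mem.split()[0])  # e.g. '123 MiB' -> 123
    return name, used_mib

# Illustrative sample line (format assumed from nvidia-smi's CSV mode).
sample = 'GRID K520, 123 MiB'
name, used = parse_gpu_csv(sample)
print(name, used)  # GRID K520 123
```

Wrapping nvidia-smi like this is handy later for scripting alerts when GPU memory runs low during training.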

Now that the NVIDIA driver is installed, we'll download a software package that lets us interface with our GPUs.

First find the appropriate NVIDIA CUDA package link for your OS here.

For my Ubuntu 16.04 instance, I chose to download the deb (local) installer package.


Copy the link into your instance terminal and download it.

wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb

Install the package.

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb

Update apt-get one more time. Now you should be able to install the CUDA software.

sudo apt-get update
sudo apt-get install cuda
sudo apt install nvidia-cuda-toolkit

While CUDA is installing, go to https://developer.nvidia.com/rdp/cudnn-download, make an account, and find cuDNN's download page.

cuDNN is a neural network library that optimizes your GPU usage for deep learning. 

Once you've gotten access to the download page, check the CUDA version you installed to find out which cuDNN package to get.

nvcc --version
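The output of nvcc --version ends with a line like "Cuda compilation tools, release 8.0, V8.0.61". If you want to pull the release number out in a script (say, to match it against a cuDNN package name), a small sketch - the sample text here is illustrative:

```python
import re

def cuda_release(nvcc_output):
    """Extract the CUDA release (e.g. '8.0') from `nvcc --version` output."""
    match = re.search(r'release (\d+\.\d+)', nvcc_output)
    return match.group(1) if match else None

# Illustrative sample of nvcc's version banner.
sample = ('nvcc: NVIDIA (R) Cuda compiler driver\n'
          'Cuda compilation tools, release 8.0, V8.0.61')
print(cuda_release(sample))  # 8.0
```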

Because of the login requirement, you'll have to download the cuDNN package to your personal computer and scp it to your instance.

scp -i ~/.ssh/key.pem /Users/toni/Downloads/cudnn-7.5-linux-x64-v.tgz  ubuntu@ec2-54-214-117-208.us-west-2.compute.amazonaws.com:/home/ubuntu/

tar -xzvf cudnn-7.5-linux-x64-v.tgz

sudo cp cuda/lib64/* /usr/local/cuda/lib64/  

sudo cp cuda/include/cudnn.h /usr/local/cuda/include/  

rm -rf ~/cuda

rm cudnn-7.5-linux-x64-v.tgz

rm cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb

vim ~/.bashrc

Copy & paste the text below into your .bashrc file.

# add cuda tools to command path
export PATH=/usr/local/cuda/bin:${PATH}
export MANPATH=/usr/local/cuda/man:${MANPATH}

# add cuda libraries to library path
if [[ "${LD_LIBRARY_PATH}" != "" ]]
then
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
else
export LD_LIBRARY_PATH=/usr/local/cuda/lib64
fi

Type :wq to save & quit the text editor.

source ~/.bashrc
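The if/else block in the .bashrc snippet above just prepends the CUDA library directory to LD_LIBRARY_PATH, taking care not to leave a dangling colon when the variable starts out empty. The same logic, sketched in Python:

```python
def prepend_ld_path(new_dir, current=''):
    """Prepend new_dir to a colon-separated path list, with no stray
    colon when the current value is empty (mirrors the .bashrc if/else)."""
    return new_dir if not current else new_dir + ':' + current

print(prepend_ld_path('/usr/local/cuda/lib64'))
# /usr/local/cuda/lib64
print(prepend_ld_path('/usr/local/cuda/lib64', '/usr/lib'))
# /usr/local/cuda/lib64:/usr/lib
```

A trailing empty entry in LD_LIBRARY_PATH is treated as the current directory by the loader, which is why the empty-variable case matters.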

You’re done installing the NVIDIA Graphics Card Driver, CUDA & cuDNN.

Now we can install TensorFlow and Keras!

pip install tensorflow-gpu

Let's check whether TensorFlow is utilizing our GPUs.

python
import tensorflow as tf
# Creates a graph.
with tf.device('/cpu:0'):
 a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
 b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

Check the device placement log printed when the session runs - if your GPUs are working, the MatMul op will be assigned to a gpu device.
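With log_device_placement enabled, TensorFlow prints one placement line per op, e.g. "MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0". If you capture that log, a small sketch can check whether any op landed on a GPU - the sample lines below illustrate the log format and are not captured output:

```python
def ops_on_gpu(log_lines):
    """Return the names of ops whose placement line mentions a GPU device."""
    return [line.split(':')[0] for line in log_lines if '/gpu:' in line.lower()]

# Illustrative placement lines (format assumed from TensorFlow's docs).
sample_log = [
    'MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0',
    'a: (Const): /job:localhost/replica:0/task:0/cpu:0',
]
print(ops_on_gpu(sample_log))  # ['MatMul']
```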

exit()

Install Keras.

pip install keras

Congrats! Your EC2 instance is configured and ready to use for deep learning!