TensorFlow is open-source software from Google for numerical computation using a graph representation:

  • Vertices (nodes) represent mathematical operations
  • Edges represent N-dimensional data arrays (tensors), as illustrated by the small example below
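
A minimal sketch of this model using the TensorFlow 1.x API: the constants and the addition are operation nodes, and the edges between them carry tensors.

  import tensorflow as tf

  # Nodes (operations) of the graph
  a = tf.constant(2.0)
  b = tf.constant(3.0)
  c = tf.add(a, b)  # the edges feeding this node carry the tensors a and b

  # The graph is only executed inside a session
  with tf.Session() as sess:
      print(sess.run(c))  # prints 5.0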

TensorFlow runs on CPU or GPU (using CUDA®). The architecture is flexible and highly scalable. It can be deployed on smartphones, desktops/servers, or even server clusters.

Installation

CPU Only

This is the simplest way to install TensorFlow since it doesn’t require Nvidia CUDA. It is the preferred option for a quick installation.

GPU support

TensorFlow, like any framework built on matrix computations, runs faster on a GPU. Some prerequisites are needed:

  • GPU card with CUDA Compute Capability 3.0 or higher
  • Nvidia® drivers
  • CUDA® Toolkit 8.0
  • cuDNN 6

Installation methods

TensorFlow can be installed through multiple mechanisms:

  • From sources (Linux, Mac OS X, Windows)
  • pip (Linux, Mac OS X, Windows)
  • Virtualenv (Linux, Mac OS X)
  • Anaconda (Linux, Mac OS X, Windows)
  • docker (Linux, Mac OS X)

Pip installs the TensorFlow library into your Python environment. Virtualenv and Anaconda let you install TensorFlow in a dedicated Python distribution, hence without touching your “system” Python environment. Docker uses container technology for isolation.
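
For reference, a CPU-only pip install is a single command (run inside the Python environment of your choice):

  pip install tensorflow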

We will use a docker-based, CPU-only TensorFlow. Please refer to the TensorFlow Installation Guide if you choose another method.

You only need one command to bootstrap TensorFlow with docker. To start the container and get inside:
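
A minimal sketch, assuming the official CPU-only tensorflow/tensorflow image from Docker Hub:

  # pull the image if needed, start a container and open a shell inside it
  docker run -it tensorflow/tensorflow bash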

It is also possible to execute a script with:
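
For instance, to run a script from your current directory inside the container (my_script.py is only a placeholder name):

  # mount the current directory on /tmp and run the script with the container's python
  docker run -it -v $PWD:/tmp tensorflow/tensorflow python /tmp/my_script.py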

To start the Jupyter Notebook alongside TensorFlow, run:
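
For example, assuming the image's default command starts the notebook server on port 8888 inside the container:

  # expose the notebook port and give the container a reusable name
  docker run -it -p 8888:8888 --name ${some_container_name} tensorflow/tensorflow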

It will output the link to the Jupyter Notebook:
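
The exact output depends on the Jupyter version, but it typically contains a tokenized URL of the following form (the token is a placeholder generated at startup):

  http://localhost:8888/?token=<generated_token>

Copy this link into your browser to open the notebook.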

Here is a description of some of these parameters, as well as additional ones which may be useful:

  • -p ${host_port}:${container_port}: maps a port on your computer to the port used by Jupyter Notebook inside the container.

  • -v $PWD:/tmp: mounts your current working directory into the container's /tmp directory, so files can be shared between the host and the container.

  • --name ${some_container_name}: assigns a name to the created container, so you can stop/start it by name. This is handy if you plan to re-use this TensorFlow container.

  • --rm: automatically removes the container when it stops (i.e. when you kill it).

That’s it! You have a working, ready-to-go TensorFlow node.
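
As a quick sanity check, a minimal sketch using the TensorFlow 1.x session API (matching the CUDA 8 / cuDNN 6 generation mentioned above) can be run inside the container:

  import tensorflow as tf

  # build and run a trivial graph to verify the installation
  hello = tf.constant('Hello, TensorFlow!')
  with tf.Session() as sess:
      print(sess.run(hello))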