TensorFlow is an open source library for numerical computation, specializing in machine learning applications. In this codelab, you will learn how to install and run TensorFlow on a single machine, and will train a simple classifier to classify images of flowers.

What are we going to be building?

In this lab, we will be using transfer learning, which means we are starting with a model that has been already trained on another problem. We will then be retraining it on a similar problem. Deep learning from scratch can take days, but transfer learning can be done in short order.

We are going to use the Inception v3 network. Inception v3 is trained for the ImageNet Large Scale Visual Recognition Challenge using the data from 2012, and it can differentiate between 1,000 different classes, like Dalmatian or dishwasher. We will use this same network, but retrain it to tell apart a small number of classes based on our own examples.

What you will learn

What you need

Optional: Manual install

Setup Docker

Install Docker

If you don't have Docker installed already, you can download the installer here.

Test your Docker installation

To test your Docker installation, try running the following command in the terminal:

docker run hello-world

This should output some text starting with:

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

Run and test the TensorFlow image

Now that you've confirmed that Docker is working, test out the TensorFlow image:

docker run -it tensorflow/tensorflow:1.1.0 bash

After downloading, your prompt should change to root@xxxxxxx:/notebooks#.

Next check to confirm that your TensorFlow installation works by invoking Python from the container's command line:

# Your prompt should be "root@xxxxxxx:/notebooks" 
python

Once you have a python prompt, >>>, run the following code:

# python

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session() # It will print some warnings here.
print(sess.run(hello))

This should print Hello, TensorFlow! (and a couple of warnings after the tf.Session line).

Exit Docker

Now press Ctrl-D, on a blank line, once to exit Python, and a second time to exit the Docker container.

Now create the working directory in your home directory:

mkdir tf_files

Then relaunch Docker with that directory shared as your working directory, and port number 6006 published for TensorBoard:

docker run -it \
  --publish 6006:6006 \
  --volume ${HOME}/tf_files:/tf_files \
  --workdir /tf_files \
  tensorflow/tensorflow:1.1.0 bash

Your prompt will change to root@xxxxxxxxx:/tf_files#.

Before you start any training, you'll need a set of images to teach the network about the new classes you want to recognize. We've created an archive of Creative Commons-licensed flower photos to use initially.

Download the sample images:

curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
tar xzf flower_photos.tgz

After the 218MB download completes, you should have a copy of the flower photos in your working directory.

Now check the contents of the folder; you should see five sub-folders of flower images (daisy, dandelion, roses, sunflowers, tulips) and a LICENSE.txt file:

ls flower_photos

Optional: But I'm in a hurry!

Let's reduce the number of images per category, to speed up the training.

Since all the image file names start with digits, deleting every file whose name starts with a digit from 3 through 9 removes roughly 70% of the images:

# Count the rose images before deleting
ls flower_photos/roses | wc -l

# Delete roughly 70% of the images in every category
rm flower_photos/*/[3-9]*

# Count again to confirm the reduction
ls flower_photos/roses | wc -l

The retrain script is part of the TensorFlow repository, but it is not installed as part of the pip package, so you need to download it manually to the current directory:

curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/r1.1/tensorflow/examples/image_retraining/retrain.py

At this point, we have a trainer, we have data, so let's train! We will train the Inception v3 network.

As noted in the introduction, Inception is a huge image classification model with millions of parameters that can differentiate a large number of kinds of images. We're only training the final layer of that network, so training will end in a reasonable amount of time.

Before starting the training, launch tensorboard in the background so you can monitor the training progress.

tensorboard --logdir training_summaries &

Start your image retraining with one big command (note the --summaries_dir option, which sends training progress reports to the directory that tensorboard is monitoring):

python retrain.py \
  --bottleneck_dir=bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=inception \
  --summaries_dir=training_summaries/basic \
  --output_graph=retrained_graph.pb \
  --output_labels=retrained_labels.txt \
  --image_dir=flower_photos

This script downloads the pre-trained Inception v3 model, adds a new final layer, and trains that layer on the flower photos you've downloaded.

The Inception network was not originally trained on any of these flower species. However, the kinds of information that make it possible to differentiate among ImageNet's 1,000 classes are also useful for distinguishing other objects. By using this pre-trained network, we are using that information as input to the final classification layer that distinguishes our flower classes.

Optional: But I'm NOT in a hurry!

The above example iterates only 500 times. If you skipped the step where we deleted most of the training data and are training on the full dataset, you can very likely get improved results (i.e. higher accuracy) by training for longer. To get this improvement, remove the --how_many_training_steps parameter and the script will use its default of 4,000 iterations.

# In Docker
python retrain.py \
  --bottleneck_dir=bottlenecks \
  --model_dir=inception \
  --summaries_dir=training_summaries/long \
  --output_graph=retrained_graph.pb \
  --output_labels=retrained_labels.txt \
  --image_dir=flower_photos

While You're Waiting: Bottlenecks

This section and the next are to be enjoyed while your classifier is training.

The first phase analyzes all the images on disk and calculates the bottleneck values for each of them. What's a bottleneck?

The Inception v3 model is made up of many layers stacked on top of each other; a simplified picture from TensorBoard is shown below (all the details are available in this paper, with a complete picture on page 6). These layers are pre-trained and are already very good at finding and summarizing information that will help classify most images. For this codelab, you are training only the last layer (final_training_ops in the figure below); all the previous layers retain their already-trained state.

In the figure, the node labeled "softmax", on the left side, is the output layer of the original model, while the nodes on the right side were added by the retraining script. The extra nodes are automatically deleted by the retraining script once it finishes.

A "bottleneck" is an informal term for the layer just before the final output layer that actually does the classification; near the output, the representation is much more compact than in the main body of the network.

Every image is reused multiple times during training. Calculating the layers behind the bottleneck for each image takes a significant amount of time. Since these lower layers of the network are not being modified, their outputs can be cached and reused.

So the script is running the constant part of the network, everything below the node labeled Bottlene... above, and caching the results.

The command you ran saves these files to the bottlenecks/ directory. If you rerun the script, they'll be reused, so you don't have to wait for this part again.
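The caching pattern can be sketched in a few lines of Python. This is a toy illustration, not the actual retrain.py code; expensive_features is a hypothetical stand-in for running the frozen Inception layers on one image.

```python
# Toy sketch of bottleneck caching: the fixed feature extractor runs once per
# image, and later epochs reuse the cached vector instead of recomputing it.
cache = {}

def expensive_features(image_id):
    # Hypothetical stand-in for running the frozen Inception layers.
    return [image_id * 0.1, image_id * 0.2]

def bottleneck(image_id):
    if image_id not in cache:          # compute only on the first request...
        cache[image_id] = expensive_features(image_id)
    return cache[image_id]             # ...then reuse on every later epoch

for epoch in range(3):                 # images are revisited each epoch
    for image_id in (1, 2, 3):
        _ = bottleneck(image_id)

print(len(cache))  # 3 cached entries, despite 9 lookups
```

retrain.py does the same thing, except the cache lives on disk in bottlenecks/ so it also survives across runs.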

While You're Waiting: Training

Once the script finishes generating all the bottleneck files, the actual training of the final layer of the network begins.

The training operates efficiently by feeding the cached value for each image into the Bottleneck layer. The true label for each image is also fed into the node labeled GroundTruth. Just these two inputs are enough to calculate the classification probabilities, training updates, and the various performance metrics.

As it trains you'll see a series of step outputs, each one showing training accuracy, validation accuracy, and the cross entropy:

The figures below show an example of the progress of the model's accuracy and cross entropy as it trains. If your model has finished generating the bottleneck files, you can check its progress by opening TensorBoard in your browser (at localhost:6006, the port we published earlier) and clicking on the figures' names to show them. TensorBoard may print warnings to your command line; these can safely be ignored.

A true measure of the performance of the network is to measure its performance on a data set that is not in the training data. This performance is measured using the validation accuracy. If the training accuracy is high but the validation accuracy remains low, that means the network is overfitting, and the network is memorizing particular features in the training images that don't help it classify images more generally.

The training's objective is to make the cross entropy as small as possible, so you can tell if the learning is working by keeping an eye on whether the loss keeps trending downwards, ignoring the short-term noise.

By default, this script runs 4,000 training steps. Each step chooses 10 images at random from the training set, finds their bottlenecks from the cache, and feeds them into the final layer to get predictions. Those predictions are then compared against the actual labels to update the final layer's weights through a back-propagation process.
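The loop described above can be sketched with NumPy. This is a toy version under assumed sizes (2048-dimensional bottlenecks, 5 classes, batches of 10, plain gradient descent on random data); retrain.py itself implements this with TensorFlow ops.

```python
import numpy as np

# Toy sketch of training only the final layer on cached bottleneck vectors.
rng = np.random.default_rng(0)
bottlenecks = rng.normal(size=(100, 2048))   # stands in for the cached files
labels = rng.integers(0, 5, size=100)        # ground-truth class ids
W = np.zeros((2048, 5))                      # the only weights being trained

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for step in range(4000):
    batch = rng.choice(100, size=10, replace=False)   # 10 random images
    x, y = bottlenecks[batch], labels[batch]
    grad = softmax(x @ W)                    # predicted class probabilities
    grad[np.arange(10), y] -= 1.0            # gradient of cross entropy wrt logits
    W -= 0.01 * x.T @ grad / 10              # update only the final layer
```

Because only W is updated, each step is a small matrix multiply rather than a full pass through the deep network, which is why retraining finishes in minutes.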

As the process continues, you should see the reported accuracy improve. After all the training steps are complete, the script runs a final test accuracy evaluation on a set of images that are kept separate from the training and validation pictures. This test evaluation provides the best estimate of how the trained model will perform on the classification task.

You should see an accuracy value of between 85% and 99%, though the exact value will vary from run to run since there's randomness in the training process. (If you are only training on two classes, you should expect higher accuracy.) This value is the percentage of images in the test set that are given the correct label after the model is fully trained.

The retraining script will write out a version of the Inception v3 network with a final layer retrained to your categories to tf_files/retrained_graph.pb and a text file containing the labels to tf_files/retrained_labels.txt.

These files are both in a format that the C++ and Python image classification examples can use, so you can start using your new model immediately.

Classifying an image

Here is a Python script that loads your new graph file and predicts with it.

label_image.py

import os, sys

import tensorflow as tf

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# change this as you see fit
image_path = sys.argv[1]

# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line 
                   in tf.gfile.GFile("retrained_labels.txt")]

# Unpersists graph from file
with tf.gfile.FastGFile("retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Feed the image_data as input to the graph and get the prediction
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    
    predictions = sess.run(softmax_tensor, \
             {'DecodeJpeg/contents:0': image_data})
    
    # Sort the predicted labels in order of confidence, highest first
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
    
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))

This is a little clumsy to cut-and-paste, so we've made a gist for you.

We'll create it as a file called label_image.py in the working directory. You can use curl to do this for you.

curl -L https://gdgdocs.org/r/3lTKZs > label_image.py

Now, run the Python file you created, first on a daisy:

flower_photos/daisy/21652746_cc379e0eea_m.jpg

Image by Retinafunk

python label_image.py flower_photos/daisy/21652746_cc379e0eea_m.jpg

And then on a rose:

flower_photos/roses/2414954629_3708a1a04d.jpg

Image by Lori Branham

python label_image.py flower_photos/roses/2414954629_3708a1a04d.jpg 

You will then see a list of flower labels, in most cases with the right flower on top (though each retrained model may be slightly different).

You might get results like this for a daisy photo:

daisy (score = 0.99071)
sunflowers (score = 0.00595)
dandelion (score = 0.00252)
roses (score = 0.00049)
tulips (score = 0.00032)

This indicates a high confidence it is a daisy, and low confidence for any other label.

You can use label_image.py to choose any image file to classify, either from your downloaded collection, or new ones.

The retraining script has several other command line options, parameters you can try adjusting to see if they improve your results.

You can read about these options in the help for the retrain script:

python retrain.py -h

Try adjusting some of these options to see if you can increase the final validation accuracy.

For example, the --learning_rate parameter controls the magnitude of the updates to the final layer during training. So far we have left it out, so the default value of 0.01 is used. A smaller rate, like 0.005, means learning takes longer, but it can help the overall accuracy. Higher values, like 1.0, train faster, but may produce worse final results, or even become unstable.
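For example, to rerun the training with a smaller learning rate, you could invoke retrain.py like this (the summaries directory name LR_0.005 is our choice; using a separate one lets TensorBoard compare the runs):

```shell
python retrain.py \
  --learning_rate=0.005 \
  --bottleneck_dir=bottlenecks \
  --model_dir=inception \
  --summaries_dir=training_summaries/LR_0.005 \
  --output_graph=retrained_graph.pb \
  --output_labels=retrained_labels.txt \
  --image_dir=flower_photos
```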

You need to experiment carefully to see what works for your case.

After you see the script working on the flower example images, you can start looking at teaching the network to recognize categories you care about instead.

In theory, all you need to do is run the tool, specifying a particular set of sub-folders. Each sub-folder is named after one of your categories and contains only images from that category.

If you complete this step and pass the root folder of the subdirectories as the argument for the --image_dir parameter, the script should train on the images you've provided, just like it did for the flowers.

The classification script uses the folder names as label names, and the images inside each folder should be pictures that correspond to that label, as you can see in the flower archive:
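For example, a hypothetical set of your own categories would be laid out like this (the folder names — here cats and dogs — become the labels; the file names themselves don't matter):

```text
my_images/
    cats/
        photo001.jpg
        photo002.jpg
        ...
    dogs/
        photo003.jpg
        ...
```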

Collect as many pictures of each label as you can and try it out!

Congratulations, you've taken your first steps into a larger world of deep learning!

You can see more about using TensorFlow at the TensorFlow website or the TensorFlow Github project. There are lots of other resources available for TensorFlow, including a discussion group and whitepaper.

If you make a trained model that you want to run in production, you should also check out TensorFlow Serving, an open source project that makes it easier to manage TensorFlow projects.

If you're interested in running TensorFlow on mobile devices, try the second part of this tutorial, which will show you how to optimize your model to run on Android.

This codelab is based on Pete Warden's TensorFlow for Poets blog post and this retraining tutorial.