Loading image data

    Note: this is the R version of the corresponding tutorial on the official TensorFlow website.

    This tutorial provides a simple example of how to load an image dataset using tfdatasets.

    The dataset used in this example is distributed as directories of images, with one class of image per directory.

    Setup

    library(keras)
    library(tfdatasets)

    Retrieve the images

    Before you start any training, you will need a set of images to teach the network the new classes you want it to recognize. You can use an archive of Creative Commons-licensed flower photos from Google.

    Note: all images are licensed CC-BY, creators are listed in the LICENSE.txt file.
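The `data_dir` path used below can be obtained with `get_file()`, which downloads and caches the archive. A minimal sketch (the URL is the standard TensorFlow example-images location; files are cached under ~/.keras/datasets/):

```r
library(keras)

# Download and untar the flower photos archive (~218 MB);
# get_file() returns the path to the extracted directory.
data_dir <- get_file(
  fname = "flower_photos",
  origin = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz",
  untar = TRUE
)
```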

    After downloading (218 MB), you should now have a copy of the flower photos available.

    The directory contains 5 sub-directories, one per class:

    images <- list.files(data_dir, pattern = ".jpg", recursive = TRUE)
    length(images)
    ## [1] 3670
    classes <- list.dirs(data_dir, full.names = FALSE, recursive = FALSE)
    classes
    ## [1] "daisy"      "dandelion"  "roses"      "sunflowers" "tulips"

    Load using tfdatasets

    To load the files as a TensorFlow Dataset, first create a dataset of the file paths:

    list_ds <- file_list_dataset(file_pattern = paste0(data_dir, "/*/*"))
    ## tf.Tensor(b'/Users/dfalbel/.keras/datasets/flower_photos/dandelion/5909154147_9da14d1730_n.jpg', shape=(), dtype=string)

    Write a short, pure-TensorFlow function that converts a file path to an (image_data, label) pair:
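One possible implementation, as a sketch: `preprocess_path` is the name used in the pipeline below, but the helper names `get_label` and `decode_img` and the 224x224 target size are assumptions (the size matches the output shape shown later). The class is taken from the directory name, i.e. the second-to-last path component, using python-style negative indexing on the tensor:

```r
img_height <- 224L  # assumed target size
img_width  <- 224L

get_label <- function(file_path) {
  # Split the path and compare the directory name (second-to-last
  # component, selected python-style with a negative index) against
  # the class names to build a one-hot label.
  parts <- tf$strings$split(file_path, "/")
  tf$cast(parts[-2] == classes, dtype = tf$float32)
}

decode_img <- function(file_path) {
  file_path %>%
    tf$io$read_file() %>%
    tf$image$decode_jpeg(channels = 3L) %>%
    tf$image$convert_image_dtype(dtype = tf$float32) %>%  # scale to [0, 1]
    tf$image$resize(size = c(img_height, img_width))
}

preprocess_path <- function(file_path) {
  list(decode_img(file_path), get_label(file_path))
}
```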

    Use dataset_map to create a dataset of image, label pairs:

    # num_parallel_calls will be autotuned
    labeled_ds <- list_ds %>% 
      dataset_map(preprocess_path, num_parallel_calls = tf$data$experimental$AUTOTUNE)
    ## Warning: Negative numbers are interpreted python-style when subsetting tensorflow tensors.(they select items by counting from the back). For more details, see: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#basic-slicing-and-indexing
    ## To turn off this warning, set 'options(tensorflow.extract.warn_negatives_pythonic = FALSE)'

    Let’s see what the output looks like:
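One way to pull a single (image, label) pair from the dataset is with reticulate's iterator helpers (a sketch; any method of taking one element would do):

```r
library(reticulate)

# Wrap the dataset in an iterator and print its first element.
it <- as_iterator(labeled_ds)
iter_next(it)
```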

    ## [[1]]
    ## tf.Tensor(
    ## [[[8.6834738e-03 2.6610646e-02 0.0000000e+00]
    ##   [9.2436988e-03 2.1008406e-02 0.0000000e+00]
    ##   [8.4033636e-03 2.0168070e-02 0.0000000e+00]
    ##   ...
    ##   [1.2549020e-01 1.6862746e-01 0.0000000e+00]
    ##   [1.2408963e-01 1.6722688e-01 0.0000000e+00]
    ##   [1.1540600e-01 1.5854326e-01 0.0000000e+00]]
    ## 
    ##  [[5.8292076e-03 2.3756379e-02 0.0000000e+00]
    ##   [9.2436988e-03 2.1008406e-02 0.0000000e+00]
    ##   [8.1438841e-03 1.9908590e-02 0.0000000e+00]
    ##   ...
    ##   [1.2886350e-01 1.7200075e-01 3.3732879e-03]
    ##   [1.2772234e-01 1.7085959e-01 3.6327033e-03]
    ##   [1.2189305e-01 1.6503030e-01 7.7835715e-04]]
    ## 
    ##  [[9.0423673e-03 2.6969539e-02 0.0000000e+00]
    ##   [1.2683825e-02 2.4448531e-02 0.0000000e+00]
    ##   [1.1317654e-02 2.3082361e-02 0.0000000e+00]
    ##   ...
    ##   [1.2569161e-01 1.6882886e-01 4.4706254e-04]
    ##   [1.2579970e-01 1.6893695e-01 4.8144278e-04]
    ##   [1.2240889e-01 1.6554613e-01 1.0315585e-04]]
    ## 
    ##  ...
    ## 
    ##  [[6.2745102e-02 7.4509807e-02 0.0000000e+00]
    ##   [6.3054614e-02 7.4819319e-02 0.0000000e+00]
    ##   [6.6946782e-02 7.8957208e-02 0.0000000e+00]
    ##   ...
    ##   [7.8396991e-02 1.0192641e-01 0.0000000e+00]
    ##   [7.6219887e-02 9.9749304e-02 0.0000000e+00]
    ##   [8.1512690e-02 1.0504211e-01 3.0813182e-03]]
    ## 
    ##  [[6.3033998e-02 7.5025700e-02 0.0000000e+00]
    ##   [6.5369286e-02 7.7133991e-02 0.0000000e+00]
    ##   [6.6946782e-02 7.8711487e-02 0.0000000e+00]
    ##   ...
    ##   [7.8151330e-02 1.0168075e-01 0.0000000e+00]
    ##   [7.8431375e-02 1.0196079e-01 0.0000000e+00]
    ##   [8.1285693e-02 1.0481511e-01 2.8543193e-03]]
    ## 
    ##  [[6.6666670e-02 8.1512608e-02 0.0000000e+00]
    ##   [6.6666670e-02 7.8431375e-02 0.0000000e+00]
    ##   [6.6946782e-02 7.8711487e-02 0.0000000e+00]
    ##   ...
    ##   [7.8151330e-02 1.0168075e-01 0.0000000e+00]
    ##   [7.8431375e-02 1.0196079e-01 0.0000000e+00]
    ##   [7.8431375e-02 1.0196079e-01 0.0000000e+00]]], shape=(224, 224, 3), dtype=float32)
    ## 
    ## [[2]]
    ## tf.Tensor([0. 1. 0. 0. 0.], shape=(5,), dtype=float32)

    Training a model

    To train a model with this dataset you will want the data:

    • To be well shuffled.
    • To be batched.
    • Batches to be available as soon as possible.

    These features can be easily added using tfdatasets.

    First, let’s define a function that prepares a dataset for feeding to a Keras model.
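A sketch of such a function (the name `prepare` and the default buffer and batch sizes are assumptions), applying the three requirements listed above:

```r
prepare <- function(ds, batch_size = 32, shuffle_buffer_size = 1000) {
  ds %>%
    dataset_shuffle(buffer_size = shuffle_buffer_size) %>%  # well shuffled
    dataset_batch(batch_size) %>%                           # batched
    # let tf.data overlap preprocessing and training so batches
    # are available as soon as possible
    dataset_prefetch(buffer_size = tf$data$experimental$AUTOTUNE)
}
```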

    Now let’s define a Keras model to classify the images:

    model <- keras_model_sequential() %>% 
      layer_flatten() %>% 
      layer_dense(units = 128, activation = "relu") %>% 
      layer_dense(units = 128, activation = "relu") %>% 
      layer_dense(units = 5, activation = "softmax")
    
    model %>% 
      compile(
        loss = "categorical_crossentropy",
        optimizer = "adam",
        metrics = "accuracy"
      )

    We can then fit the model, feeding it the dataset we just created:

    Note: we are fitting this model only as an example of how to use a pipeline built with tfdatasets together with Keras. In real use cases you should always use a validation dataset to verify your model’s performance.
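A sketch of the fit call, with the shuffle/batch/prefetch steps inlined (buffer and batch sizes are assumptions; with a batch size of 32, the 3670 images give the 115 steps per epoch seen in the log below):

```r
model %>% fit(
  labeled_ds %>%
    dataset_shuffle(buffer_size = 1000) %>%
    dataset_batch(32) %>%
    dataset_prefetch(buffer_size = tf$data$experimental$AUTOTUNE),
  epochs = 5,
  verbose = 2
)
```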

    ## Epoch 1/5
    ## 115/115 - 12s - loss: 8.1625 - accuracy: 0.3302
    ## Epoch 2/5
    ## 115/115 - 13s - loss: 2.2350 - accuracy: 0.4060
    ## Epoch 3/5
    ## 115/115 - 14s - loss: 2.2305 - accuracy: 0.4128
    ## Epoch 4/5
    ## 115/115 - 13s - loss: 1.4776 - accuracy: 0.4689
    ## Epoch 5/5
    ## 115/115 - 13s - loss: 1.3211 - accuracy: 0.4831