R Interface to Core TensorFlow API


Overview

The core TensorFlow API is composed of a set of Python modules that enable constructing and executing TensorFlow graphs. The tensorflow package provides access to the complete TensorFlow API from within R.

This set of articles describes the use of the core low-level TensorFlow API. There are additionally two higher-level interfaces available (both of which are also documented on this website):

  1. Keras — High-level neural networks API developed with a focus on enabling fast experimentation. Keras features a user-friendly API that makes it easy to quickly prototype deep learning models. It also has built-in support for convolutional networks (for computer vision), recurrent networks (for sequence processing), and any combination of both.

  2. TF Estimators — High-level API that provides canned implementations of many model types, including linear models, support vector machines, deep neural networks, state-saving recurrent neural networks, and dynamic recurrent neural networks.

Depending on your application it may be more appropriate to use one of these higher level APIs rather than the lower-level core TensorFlow API described here.
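For a sense of what the higher-level interfaces look like, here is a minimal sketch of defining a model with the Keras interface. This assumes the keras R package is installed and configured; the layer sizes and loss shown are illustrative choices, not part of any example above — see the Keras articles on this website for complete, authoritative examples.

```r
# A minimal Keras sketch (assumes the keras R package is installed).
# The 784-unit input and 10-class output are illustrative, as for MNIST.
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)
```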

Installation

To get started, install the tensorflow R package from CRAN as follows:

install.packages("tensorflow")

Then, use the install_tensorflow() function to install TensorFlow:

library(tensorflow)
install_tensorflow()

You can confirm that the installation succeeded with:

sess <- tf$Session()
hello <- tf$constant('Hello, TensorFlow!')
sess$run(hello)

This will provide you with a default installation of TensorFlow suitable for getting started with the tensorflow R package. See the article on installation to learn about more advanced options, including installing a version of TensorFlow that takes advantage of Nvidia GPUs if you have the correct CUDA libraries installed.
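As one hedged example of those advanced options: at the time of writing, install_tensorflow() accepts a version argument that can select the GPU build. Check the installation article for the options supported by your version of the package.

```r
library(tensorflow)

# Install the GPU build of TensorFlow. This requires compatible NVIDIA
# CUDA and cuDNN libraries to already be installed on the system;
# see the installation article for supported configurations.
install_tensorflow(version = "gpu")
```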

Example

Here’s a simple example of making up some data in two dimensions and then fitting a line to it:

library(tensorflow)

# Create 100 phony x, y data points, y = x * 0.1 + 0.3
x_data <- runif(100, min=0, max=1)
y_data <- x_data * 0.1 + 0.3

# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but TensorFlow will
# figure that out for us.)
W <- tf$Variable(tf$random_uniform(shape(1L), -1.0, 1.0))
b <- tf$Variable(tf$zeros(shape(1L)))
y <- W * x_data + b

# Minimize the mean squared errors.
loss <- tf$reduce_mean((y - y_data) ^ 2)
optimizer <- tf$train$GradientDescentOptimizer(0.5)
train <- optimizer$minimize(loss)

# Launch the graph and initialize the variables.
sess <- tf$Session()
sess$run(tf$global_variables_initializer())

# Fit the line (Learns best fit is W: 0.1, b: 0.3)
for (step in 1:201) {
  sess$run(train)
  if (step %% 20 == 0)
    cat(step, "-", sess$run(W), sess$run(b), "\n")
}

The first part of this code builds the data flow graph. TensorFlow does not actually run any computation until the session is created and the run function is called.
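This deferred-execution model can be seen with an even smaller sketch: creating tensors and combining them only adds nodes to the graph, and no value is produced until the session evaluates a node.

```r
library(tensorflow)

a <- tf$constant(2)
b <- tf$constant(3)
total <- a + b    # adds a node to the graph; nothing is computed yet

sess <- tf$Session()
sess$run(total)   # only now is the addition actually executed
```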

MNIST Tutorials

To whet your appetite further, we suggest you check out what a classical machine learning problem looks like in TensorFlow. In the land of neural networks the most “classic” classical problem is the MNIST handwritten digit classification. We offer two introductions here, one for machine learning newbies, and one for pros. If you’ve already trained dozens of MNIST models in other software packages, please take the red pill. If you’ve never even heard of MNIST, definitely take the blue pill. If you’re somewhere in between, we suggest skimming blue, then red.

Images licensed CC BY-SA 4.0; original by W. Carter

If you’re already sure you want to learn and install TensorFlow you can skip these and charge ahead. Don’t worry, you’ll still get to see MNIST – we’ll also use MNIST as an example in our technical tutorial where we elaborate on TensorFlow features.