Keras Backend
Overview
Keras is a model-level library, providing high-level building blocks for developing deep learning models. It does not itself handle low-level operations such as tensor products, convolutions, and so on. Instead, it relies on a specialized, well-optimized tensor manipulation library to do so, serving as the "backend engine" of Keras.
The R interface to Keras uses TensorFlow™ as its default tensor backend engine; however, it's possible to use other backends if desired. At this time, Keras has three backend implementations available:
TensorFlow is an open-source symbolic tensor manipulation framework developed by Google.
Theano is an open-source symbolic tensor manipulation framework developed by LISA Lab at Université de Montréal.
CNTK is an open-source toolkit for deep learning developed by Microsoft.
Selecting a Backend
Keras uses the TensorFlow backend by default. If you want to switch to Theano or CNTK, call the use_backend() function just after your call to library(keras). For example:
library(keras)
use_backend("theano")
If you want to use the CNTK backend then you should follow the installation instructions for CNTK and then specify "cntk" in your call to use_backend():
library(keras)
use_backend("cntk")
Selecting an Implementation
Keras specifies an API that can be implemented by multiple providers. By default, the Keras R package uses the implementation provided by the Keras Python package ("keras"). TensorFlow also provides an integrated implementation of Keras which you can use by specifying "tensorflow" in a call to the use_implementation() function. For example:
library(keras)
use_implementation("tensorflow")
You would typically specify the “tensorflow” implementation when using Keras with the tfestimators package, as this implementation allows you to use Keras models seamlessly as TensorFlow Estimators.
Keras Configuration File
If you have run Keras at least once, you will find the Keras configuration file at:
~/.keras/keras.json
If it isn’t there, you can create it.
The default configuration file looks like this:
{
"image_data_format": "channels_last",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "tensorflow"
}
You can change these settings by editing $HOME/.keras/keras.json.

image_data_format: String, either "channels_last" or "channels_first". It specifies which data format convention Keras will follow. (backend()$image_data_format() returns it.) For 2D data (e.g. an image), "channels_last" assumes (rows, cols, channels) while "channels_first" assumes (channels, rows, cols). For 3D data, "channels_last" assumes (conv_dim1, conv_dim2, conv_dim3, channels) while "channels_first" assumes (channels, conv_dim1, conv_dim2, conv_dim3).

epsilon: Float, a numeric fuzzing constant used to avoid dividing by zero in some operations.

floatx: String, "float16", "float32", or "float64". Default float precision.

backend: String, "tensorflow", "theano", or "cntk".
Using the Backend
If you want the Keras modules you write to be compatible with all available backends, you have to write them via the abstract Keras backend API. Backend API functions have a k_ prefix (e.g. k_placeholder, k_constant, k_dot, etc.).
For example, the code below instantiates an input placeholder. It's equivalent to tf$placeholder():
library(keras)
inputs <- k_placeholder(shape = c(2, 4, 5))
# also works:
inputs <- k_placeholder(shape = list(NULL, 4, 5))
# also works:
inputs <- k_placeholder(ndim = 3)
The code below instantiates a variable. It's equivalent to tf$Variable():
val <- array(runif(60), dim = c(3L, 4L, 5L))
var <- k_variable(value = val)
# all-zeros variable:
var <- k_zeros(shape = c(3, 4, 5))
# all-ones:
var <- k_ones(shape = c(3, 4, 5))
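Tensors created through the backend API can be combined with other k_ functions and evaluated back to ordinary R arrays. A small sketch, assuming a working TensorFlow installation:

```r
library(keras)

x <- k_variable(matrix(1, nrow = 2, ncol = 3))
y <- k_variable(matrix(2, nrow = 3, ncol = 4))

z <- k_dot(x, y)   # symbolic 2 x 4 tensor
k_eval(z)          # evaluates the tensor, returning an R matrix
```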
Backend Functions
Element-wise absolute value. 

Bitwise reduction (logical AND). 

Bitwise reduction (logical OR). 

Creates a 1D tensor containing a sequence of integers. 

Returns the index of the maximum value along an axis. 

Returns the index of the minimum value along an axis. 

Active Keras backend 

Batch-wise dot product. 

Turn an n-D tensor into a 2D tensor with the same 1st dimension. 

Returns the value of more than one tensor variable. 

Applies batch normalization on x given mean, var, beta and gamma. 

Sets the values of many tensor variables at once. 

Adds a bias vector to a tensor. 

Binary crossentropy between an output tensor and a target tensor. 

Cast an array to the default Keras float type. 

Casts a tensor to a different dtype and returns it. 

Categorical crossentropy between an output tensor and a target tensor. 

Destroys the current TF graph and creates a new one. 

Element-wise value clipping. 

Concatenates a list of tensors alongside the specified axis. 

Creates a constant tensor. 

1D convolution. 

2D deconvolution (i.e. transposed convolution). 

2D convolution. 

3D deconvolution (i.e. transposed convolution). 

3D convolution. 

Computes cos of x element-wise. 

Returns the static number of elements in a Keras variable or tensor. 

Runs CTC loss algorithm on each batch element. 

Decodes the output of a softmax. 

Converts CTC labels from dense to sparse. 

Cumulative product of the values in a tensor, alongside the specified axis. 

Cumulative sum of the values in a tensor, alongside the specified axis. 

2D convolution with separable filters. 

Multiplies 2 tensors (and/or variables) and returns a tensor. 

Sets entries in x to zero at random, while scaling the entire tensor. 

Returns the dtype of a Keras tensor or variable, as a string. 

Exponential linear unit. 

Fuzz factor used in numeric expressions. 

Element-wise equality between two tensors. 

Evaluates the value of a variable. 

Element-wise exponential. 

Adds a 1-sized dimension at index "axis". 

Instantiates an identity matrix and returns it. 

Flatten a tensor. 

Default float type 

Reduce elems using fn to combine them from left to right. 

Reduce elems using fn to combine them from right to left. 

Instantiates a Keras function 

Retrieves the elements at the given indices in a tensor. 

TF session to be used by the backend. 

Get the uid for the default graph. 

Returns the value of a variable. 

Returns the shape of a variable. 

Returns the gradients of variables w.r.t. a loss. 

Element-wise truth value of (x >= y). 

Element-wise truth value of (x > y). 

Segment-wise linear approximation of sigmoid. 

Returns a tensor with the same content as the input tensor. 

Default image data format convention (‘channels_first’ or ‘channels_last’). 

Selects x in test phase, and alt otherwise. 

Returns whether the targets are in the top k predictions. 

Selects x in train phase, and alt otherwise. 

Returns the shape of tensor or variable as a list of int or NULL entries. 

Returns whether x is a Keras tensor. 

Returns whether x is a placeholder. 

Returns whether x is a symbolic tensor. 

Returns whether a tensor is a sparse tensor. 

Normalizes a tensor w.r.t. the L2 norm alongside the specified axis. 

Returns the learning phase flag. 

Element-wise truth value of (x <= y). 

Element-wise truth value of (x < y). 

Apply 1D conv with unshared weights. 

Apply 2D conv with unshared weights. 

Element-wise log. 

Computes log(sum(exp(elements across dimensions of a tensor))). 

Sets the manual variable initialization flag. 

Map the function fn over the elements elems and return the outputs. 

Maximum value in a tensor. 

Element-wise maximum of two tensors. 

Mean of a tensor, alongside the specified axis. 

Minimum value in a tensor. 

Element-wise minimum of two tensors. 

Compute the moving average of a variable. 

Returns the number of axes in a tensor, as an integer. 

Computes mean and std for batch then apply batch_normalization on batch. 

Element-wise inequality between two tensors. 

Computes the one-hot representation of an integer tensor. 

Instantiates an all-ones variable of the same shape as another tensor. 

Instantiates an all-ones tensor variable and returns it. 

Permutes axes in a tensor. 

Instantiates a placeholder tensor and returns it. 

2D Pooling. 

3D Pooling. 

Element-wise exponentiation. 

Prints message and the tensor value when evaluated. 

Multiplies the values in a tensor, alongside the specified axis. 

Returns a tensor with random binomial distribution of values. 

Instantiates a variable with values drawn from a normal distribution. 

Returns a tensor with normal distribution of values. 

Instantiates a variable with values drawn from a uniform distribution. 

Returns a tensor with uniform distribution of values. 

Rectified linear unit. 

Repeats the elements of a tensor along an axis. 

Repeats a 2D tensor. 

Reset graph identifiers. 

Reshapes a tensor to the specified shape. 

Resizes the images contained in a 4D tensor. 

Resizes the volume contained in a 5D tensor. 

Reverse a tensor along the specified axes. 

Iterates over the time dimension of a tensor 

Element-wise rounding to the closest integer. 

2D convolution with separable filters. 

Sets the learning phase to a fixed value. 

Sets the value of a variable, from an R array. 

Returns the symbolic shape of a tensor or variable. 

Element-wise sigmoid. 

Element-wise sign. 

Computes sin of x element-wise. 

Softmax of a tensor. 

Softplus of a tensor. 

Softsign of a tensor. 

Categorical crossentropy with integer targets. 

Pads the 2nd and 3rd dimensions of a 4D tensor. 

Pads 5D tensor with zeros along the depth, height, width dimensions. 

Element-wise square root. 

Element-wise square. 

Removes a 1-dimension from the tensor at index "axis". 

Stacks a list of rank R tensors into a rank R+1 tensor. 

Standard deviation of a tensor, alongside the specified axis. 

Returns variables but with zero gradient w.r.t. every other variable. 

Sum of the values in a tensor, alongside the specified axis. 

Switches between two operations depending on a scalar value. 

Elementwise tanh. 

Pads the middle dimension of a 3D tensor. 

Creates a tensor by tiling x by n. 

Converts a sparse tensor into a dense tensor and returns it. 

Transposes a tensor and returns it. 

Returns a tensor with truncated random normal distribution of values. 

Update the value of x to new_x. 

Update the value of x by adding increment. 

Update the value of x by subtracting decrement. 

Variance of a tensor, alongside the specified axis. 

Instantiates a variable and returns it. 

Instantiates an all-zeros variable of the same shape as another tensor. 

Instantiates an all-zeros variable and returns it. 
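As an illustration of how these backend functions are typically combined, here is a sketch of a custom metric written purely in terms of k_ operations (the metric name here is hypothetical, not part of the package):

```r
library(keras)

# hypothetical custom metric: mean predicted value per batch
metric_mean_pred <- function(y_true, y_pred) {
  k_mean(y_pred)
}

# it can then be passed to compile(), e.g.:
# model %>% compile(optimizer = "sgd", loss = "mse",
#                   metrics = list(metric_mean_pred))
```

Because the metric is expressed only in backend operations, it should work unchanged on any of the available backends.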