The R interface to TensorFlow encompasses several packages, each of which provides a different interface to the core TensorFlow engine. Several tools are available that can be used with any of these interfaces.

It’s highly recommended, although not strictly necessary, that you run deep-learning code on a modern NVIDIA GPU. Some applications – in particular, image processing with convolutional networks and sequence processing with recurrent neural networks – will be excruciatingly slow on CPU, even a fast multicore CPU. This section describes the various options for using GPUs.
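Before choosing among the options below, it can help to confirm whether TensorFlow can actually see a GPU from R. A minimal check, assuming the tensorflow package and a working TensorFlow installation (the `tf$config` API shown here is available in TensorFlow 2.x):

```r
library(tensorflow)

# List the physical GPU devices visible to the TensorFlow runtime
gpus <- tf$config$list_physical_devices("GPU")

if (length(gpus) > 0) {
  cat("GPU available:", gpus[[1]]$name, "\n")
} else {
  cat("No GPU found; computations will run on CPU\n")
}
```

If no GPU is found, the options below (a local NVIDIA GPU with CUDA installed, or a cloud instance with GPU support) are worth considering.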
The cloudml package provides an R interface to Google Cloud Machine Learning Engine, a managed service that offers on-demand access to training on GPUs, hyperparameter tuning to optimize key attributes of model architectures, and deployment of trained models to the Google global prediction platform, which can support thousands of users and terabytes of data.
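Submitting a training job from R is a matter of pointing cloudml at a training script. A minimal sketch, assuming the cloudml package is installed, a Google Cloud project is configured, and `"train.R"` is your own (hypothetical) training script:

```r
library(cloudml)

# Submit the script as a training job on a GPU instance
cloudml_train("train.R", master_type = "standard_gpu")

# Once the job completes, collect its results into a local run directory
job_collect()
```

Training runs remotely on Cloud ML Engine, so a local GPU is not required.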
Training Flags
Tuning a model often requires exploring the impact of changes to many hyperparameters. The best way to approach this is generally not to progressively change your source code, but rather to define external flags for key parameters which you may want to vary. The flags() function provides a flexible mechanism for defining flags and varying them across training runs.
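A minimal sketch of defining flags with tfruns (the flag names and defaults here are illustrative):

```r
library(tfruns)

# Declare hyperparameters as external flags with default values
FLAGS <- flags(
  flag_numeric("learning_rate", 0.01),
  flag_integer("hidden_units", 64),
  flag_numeric("dropout", 0.2)
)

# Reference flag values when building the model, e.g.:
#   layer_dense(units = FLAGS$hidden_units)
```

Flag values can then be varied per run without editing the script, for example `training_run("train.R", flags = list(learning_rate = 0.05))`.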
Training Runs
The tfruns package provides a suite of tools for tracking and managing TensorFlow training runs and experiments from R. It records the hyperparameters, metrics, output, and source code of every training run, and lets you visualize the results of individual runs and compare runs against each other.
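The basic workflow can be sketched as follows (assuming `"train.R"` is a hypothetical training script in the working directory):

```r
library(tfruns)

# Execute the script as a tracked training run; hyperparameters,
# metrics, output, and source code are recorded automatically
training_run("train.R")

# List all recorded runs as a data frame
ls_runs()

# View a report for the most recent run
view_run()

# Compare the two most recent runs side by side
compare_runs()
```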
TensorBoard
The computations you’ll use TensorFlow for - like training a massive deep neural network - can be complex and confusing. To make it easier to understand, debug, and optimize TensorFlow programs, a suite of visualization tools called TensorBoard is available. You can use TensorBoard to visualize your TensorFlow graph, plot quantitative metrics about the execution of your graph, and show additional data, such as images, that pass through it.
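With keras models, the usual pattern is to log metrics via the TensorBoard callback during training and then launch TensorBoard on the log directory. A sketch, assuming a compiled keras `model` and training data (`x_train`, `y_train`) already exist; the log path is illustrative:

```r
library(keras)
library(tensorflow)

# Record metrics and graph information to a log directory during training
model %>% fit(
  x_train, y_train,
  epochs = 10,
  callbacks = callback_tensorboard(log_dir = "logs/run_1")
)

# Launch TensorBoard in the browser, pointed at that log directory
tensorboard(log_dir = "logs/run_1")
```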
Datasets API
The TensorFlow Dataset API provides various facilities for creating scalable input pipelines for TensorFlow. Input from text-delimited, fixed-length, and TFRecord files is supported. Reading and transforming data are TensorFlow graph operations, so they are executed in C++ and in parallel with model training.
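A typical pipeline chains dataset transformations with the pipe operator. A minimal sketch using the tfdatasets package, with a hypothetical `"data.csv"` file:

```r
library(tfdatasets)

# Build an input pipeline from a text-delimited file: read lines,
# skip the header, shuffle, batch, and repeat across epochs.
# Each step is a TensorFlow graph operation executed in C++.
dataset <- text_line_dataset("data.csv") %>%
  dataset_skip(1) %>%                      # skip the header row
  dataset_shuffle(buffer_size = 1000) %>%  # randomize record order
  dataset_batch(128) %>%                   # emit batches of 128 records
  dataset_repeat()                         # cycle indefinitely over the data
```

Because batching and shuffling run in parallel with training, the input pipeline is less likely to starve the GPU.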
Deployment
While TensorFlow models are typically defined and trained using R or Python code, it is possible to deploy TensorFlow models in a wide variety of environments without any runtime dependency on R or Python. The tfdeploy package includes a variety of tools designed to make exporting and serving TensorFlow models straightforward.
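The export-and-serve workflow can be sketched as follows, assuming a trained keras/TensorFlow `model` already exists (the export path is illustrative):

```r
library(tfdeploy)

# Export the trained model in the TensorFlow SavedModel format,
# which has no runtime dependency on R or Python
export_savedmodel(model, "savedmodel")

# Serve the exported model locally over HTTP for testing
serve_savedmodel("savedmodel")
```

The same SavedModel directory can then be deployed to environments such as TensorFlow Serving or CloudML without R installed.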