Overview

It’s highly recommended, although not strictly necessary, that you run deep-learning code on a modern NVIDIA GPU. Some applications – in particular, image processing with convolutional networks and sequence processing with recurrent neural networks – will be excruciatingly slow on CPU, even a fast multicore CPU. And even for applications that can realistically be run on CPU, you’ll generally see a speed increase by a factor of 5 or 10 by using a modern GPU.

If your local workstation doesn’t already have a recent, high-end NVIDIA GPU suitable for deep learning, then running deep learning experiments in the cloud is a simple, low-cost way to get started without having to buy any additional hardware. See the documentation below for details on using both local and cloud GPUs.

Local GPU

For systems that have a recent, high-end NVIDIA® GPU, TensorFlow is available in a GPU version that takes advantage of the CUDA and cuDNN libraries to accelerate training. Arm-based Macs can similarly take advantage of GPU acceleration, via Apple’s Metal plugin for TensorFlow.
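As a quick check that a local GPU is visible to TensorFlow, something like the following should work from R (a minimal sketch, assuming the tensorflow R package and a TensorFlow 2.x runtime; an empty list means no GPU was detected):

```r
# Install the R package, then TensorFlow itself
install.packages("tensorflow")
library(tensorflow)

# On a machine with the CUDA and cuDNN libraries available,
# this can set up a GPU-enabled TensorFlow installation
install_tensorflow()

# List the GPUs visible to TensorFlow
tf$config$list_physical_devices("GPU")
```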
CloudML

Google CloudML is a managed service that provides on-demand access to training on GPUs, including NVIDIA Tesla P100 GPUs. CloudML also provides hyperparameter tuning to optimize key attributes of model architectures in order to maximize predictive accuracy.
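The cloudml R package provides an interface to this service from R. A sketch of submitting a training job, assuming the cloudml package is installed and configured (`train.R` and `tuning.yml` are hypothetical file names standing in for your own training script and tuning configuration):

```r
library(cloudml)

# Submit a training script to CloudML on a GPU instance
# ("standard_p100" requests a machine with a Tesla P100 GPU)
cloudml_train("train.R", master_type = "standard_p100")

# Hyperparameter tuning: pass a CloudML tuning configuration file
cloudml_train("train.R", config = "tuning.yml")
```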
Cloud Server

Cloud server instances with GPUs are available from services like Amazon EC2 and Google Compute Engine. You can use RStudio Server on these instances, making the development experience nearly identical to working locally.
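Once connected to such an instance (for example, via RStudio Server), one way to confirm that computations are actually being placed on the instance’s GPU is TensorFlow’s device-placement logging (a sketch, again assuming TensorFlow 2.x via the tensorflow R package):

```r
library(tensorflow)

# Log the device each operation runs on
tf$debugging$set_log_device_placement(TRUE)

# A simple matrix multiply; the log should show it placed on GPU:0
a <- tf$constant(matrix(runif(6), nrow = 2))
b <- tf$constant(matrix(runif(6), nrow = 3))
tf$matmul(a, b)
```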