On-demand access to training on GPUs, including the new Tesla P100 GPUs from NVIDIA®.
Hyperparameter tuning to optimize key attributes of model architectures in order to maximize predictive accuracy.
Deployment of trained models to the Google global prediction platform that can support thousands of users and TBs of data.
Training with CloudML
Once you’ve configured your system to publish to CloudML, training a model is as straightforward as calling the cloudml_train() function with a training script.
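For example, a minimal sketch (assuming a training script named train.R in the current working directory):

```r
library(cloudml)

# Submit the training script to CloudML; by default the job runs
# on a standard CPU-based machine
cloudml_train("train.R")
```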
To train using a Tesla P100 GPU you would specify the master_type of the machine used for training.
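A sketch of a GPU-backed training run (train.R is a hypothetical script name):

```r
library(cloudml)

# Request a machine equipped with a Tesla P100 GPU for this job
cloudml_train("train.R", master_type = "standard_p100")
```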
When training completes, the job is collected and a training run report is displayed.
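If you don't wait on the job interactively, you can check on it and collect its results later; a sketch using the package's job-management functions:

```r
library(cloudml)

# Check the status of the most recently submitted training job
job_status()

# Download the job's outputs and display the training run report
job_collect()
```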
Check out the cloudml package documentation to get started with training and deploying models on CloudML.
You can also find out more about the various capabilities of CloudML in these articles:
Training with CloudML goes into additional depth on managing training jobs and their output.
Hyperparameter Tuning explores how you can improve the performance of your models by running many trials with distinct hyperparameters (e.g. number and size of layers) to determine their optimal values.
Google Cloud Storage provides information on copying data between your local machine and Google Cloud Storage, and also describes how to use data stored there during training.
Deploying Models describes how to deploy trained models and generate predictions from them.
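The deployment workflow covered in the last article can be sketched roughly as follows (the SavedModel directory, model name, and prediction instance here are hypothetical, and the instance shape must match your model's input signature):

```r
library(cloudml)

# Deploy an exported SavedModel directory to CloudML
cloudml_deploy("savedmodel", name = "keras_mnist")

# Request an online prediction from the deployed model
# (a single 784-element input, e.g. a flattened 28x28 image)
instance <- list(rep(0, 784))
cloudml_predict(instance, name = "keras_mnist")
```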