There are multiple ways to deploy TensorFlow models. This section describes some of the most common ones.
Plumber API: Create a REST API using Plumber to deploy your TensorFlow model. With Plumber you still depend on having an R runtime, which can be useful when you want to do the data pre-processing in R.
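As a minimal sketch of this approach (the file name `plumber.R`, the model file `model.h5`, and the input format are assumptions, not part of this guide), a Plumber API wrapping a saved Keras model might look like:

```r
# plumber.R -- a minimal sketch; model path and input shape are assumptions
library(keras)

# Load the model once at startup, not on every request
model <- load_model_hdf5("model.h5")

#* Predict from a request body containing a numeric vector `x`
#* @post /predict
function(x) {
  # Pre-processing happens here in R before the model sees the data
  x <- matrix(as.numeric(x), nrow = 1)
  as.numeric(predict(model, x))
}
```

The API could then be started locally with `plumber::plumb("plumber.R")$run(port = 8000)` and queried over HTTP.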
Shiny: Create a Shiny app that uses a TensorFlow model to generate outputs.
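A hypothetical single-file Shiny app following this pattern (again, the model file and input shape are assumptions) could be sketched as:

```r
# app.R -- a minimal sketch; model path and input shape are assumptions
library(shiny)
library(keras)

# Load the model once when the app starts
model <- load_model_hdf5("model.h5")

ui <- fluidPage(
  numericInput("x", "Input value:", value = 0),
  textOutput("pred")
)

server <- function(input, output) {
  output$pred <- renderText({
    # Run the model on the user-supplied input and show the prediction
    as.numeric(predict(model, matrix(input$x, nrow = 1)))
  })
}

shinyApp(ui, server)
```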
TensorFlow Serving: This is the most performant way of deploying TensorFlow models since it's based only on the TensorFlow Serving C++ server. With TensorFlow Serving you don't depend on an R runtime, so all pre-processing must be done in the TensorFlow graph.
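One common way to run TensorFlow Serving is via its official Docker image, exporting the model's REST API on port 8501. In this sketch the model name, directory, and input shape are assumptions, and a local Docker installation is required:

```shell
# Serve a SavedModel from /models/my_model over REST on port 8501.
# Model name, path, and input are assumptions; requires Docker.
docker run -p 8501:8501 \
  --mount type=bind,source=/models/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving

# Query the default signature of the latest model version.
curl -d '{"instances": [[1.0, 2.0, 3.0, 4.0]]}' \
  -X POST http://localhost:8501/v1/models/my_model:predict
```

Because no R process is involved at serving time, any feature transformations must already be baked into the exported graph.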
RStudio Connect: RStudio Connect makes it easy to deploy TensorFlow models and uses TensorFlow Serving in the backend.
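Assuming an exported SavedModel directory and an already-configured Connect account (the directory name here is an assumption), deployment from R can be as short as:

```r
# Deploy a TensorFlow SavedModel directory to RStudio Connect.
# "my_saved_model/" and the configured account are assumptions.
library(rsconnect)
deployTFModel("my_saved_model/")
```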
There are many other options to deploy TensorFlow models built with R that are not covered in this section. For example:
- Deploy it using a Python runtime.
- Deploy to a mobile phone app using TensorFlow Lite.
- Deploy to an iOS app using Apple's Core ML tool.
- Use Plumber and Docker to deploy your TensorFlow model (by T-Mobile).