The basic components of the TensorFlow Estimators API include:
Canned estimators (pre-built implementations of various models).
Custom estimators (custom model implementations).
Feature columns (definitions of how features should be transformed during modeling).
Input functions (sources of data for training, evaluation, and prediction).
In addition, there are APIs that cover more advanced usages:
Experiments (wrappers around estimators that handle concerns like distributed training, hyperparameter tuning, etc.)
Run hooks (callbacks for recording context and interacting with the model training process)
SavedModel export utilities (export trained models for deployment to services like CloudML)
Please read our white paper if you are interested in the detailed design of the above components.
Below we enumerate some of the core functions in each of these components to provide a high level overview of what’s available. See the linked articles for more details on using all of the components together.
The tfestimators package includes a wide variety of canned estimators for common machine learning tasks. Currently, the following canned estimators are available:
| Estimator | Description |
|---|---|
| `linear_regressor()` | Linear regressor model. |
| `linear_classifier()` | Linear classifier model. |
| `dnn_linear_combined_regressor()` | DNN Linear Combined regression. |
| `dnn_linear_combined_classifier()` | DNN Linear Combined classification. |
Before you can use an estimator, you need to provide an input function and define a set of feature columns. The following two sections cover how to do this.
Input functions are used to provide data to estimators during training, evaluation and prediction. The R interface provides several high-level input function implementations for various common R data sources, including:
- Data Frames
- Lists of vectors
For example, here’s how we might construct an input function that uses the mtcars data frame as a data source, treating the drat, mpg, and am variables as feature columns and vs as the response.
input <- input_fn(mtcars, features = c("drat", "mpg", "am"), response = "vs", batch_size = 128, epochs = 3)
The formula interface is a bit more succinct in this case, and should feel familiar to R users who have experience fitting models with R’s built-in modeling functions:
input <- input_fn(vs ~ drat + mpg + am, data = mtcars, batch_size = 128, epochs = 3)
You can also write fully custom input functions that draw data from arbitrary data sources. See the input functions article for additional details.
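Lists of vectors can be passed to input_fn() in much the same way as data frames. As a sketch (the variable names below simply mirror the mtcars example above):

```r
library(tfestimators)

# build the same input as before, but from a named list of vectors
# rather than a data frame
data <- list(
  drat = mtcars$drat,
  mpg  = mtcars$mpg,
  am   = mtcars$am,
  vs   = mtcars$vs
)

input <- input_fn(
  data,
  features = c("drat", "mpg", "am"),
  response = "vs",
  batch_size = 128,
  epochs = 3
)
```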
In TensorFlow, feature columns are used to specify the ‘shapes’, or ‘types’, of inputs that can be expected by a particular model. For example, in the following code, we define two simple feature columns: a numeric column called "drat", and an indicator column called "am" with a one-hot representation.
cols <- feature_columns(
  column_numeric("drat"),
  column_indicator("am")
)
There are a wide variety of feature column functions available:
| Feature Column | Description |
|---|---|
| `column_indicator()` | Represents multi-hot representation of given categorical column. |
| `column_numeric()` | Represents real-valued or numerical features. |
| `column_embedding()` | Creates a dense column that converts from sparse, categorical input. |
| `column_bucketized()` | Represents discretized dense input. |
| `column_categorical_weighted()` | Applies weight values to a categorical column. |
| `column_categorical_with_vocabulary_list()` | Creates a categorical column with an in-memory vocabulary. |
| `column_categorical_with_vocabulary_file()` | Creates a categorical column with a vocabulary file. |
| `column_categorical_with_identity()` | Creates a categorical column that returns identity values. |
| `column_categorical_with_hash_bucket()` | Represents sparse features where ids are set by hashing. |
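These functions are often composed: categorical columns are typically wrapped in an indicator or embedding column before being passed to a model, and numeric columns can be discretized. As a sketch (the column names and bucket boundaries below are illustrative):

```r
library(tfestimators)

cols <- feature_columns(
  # discretize a numeric feature into ranges at the given boundaries
  column_bucketized(column_numeric("mpg"), boundaries = c(15, 20, 25)),
  # one-hot encode a small categorical feature with a known vocabulary
  column_indicator(
    column_categorical_with_vocabulary_list(
      "cyl", vocabulary_list = c("4", "6", "8")
    )
  )
)
```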
See the article on feature columns for additional details.
Creating an Estimator
Here’s an example of creating a DNN Linear Combined canned Estimator. When creating the estimator, we pass the feature columns along with other parameters that specify the layers and architecture of the model. Note that this particular estimator takes two sets of feature columns: one used for constructing the linear layer, and one used for the fully connected deep layer.
# construct feature columns
linear_feature_columns <- feature_columns(column_numeric("mpg"))
dnn_feature_columns <- feature_columns(column_numeric("drat"))

# generate classifier
classifier <- dnn_linear_combined_classifier(
  linear_feature_columns = linear_feature_columns,
  dnn_feature_columns = dnn_feature_columns,
  dnn_hidden_units = c(3, 3),
  dnn_optimizer = "Adagrad"
)
Training and Prediction
Users can then call
train() to train the initialized Estimator for a number of steps:
classifier %>% train(input_fn = input, steps = 2)
Once a model is trained, users can call
predict() to make predictions from an input function that represents the inference data source:
predictions <- predict(classifier, input_fn = input)
Users can also pass a key to the predict_keys argument of predict() to generate different types of predictions, such as probabilities:

predictions <- predict(
  classifier,
  input_fn = input,
  predict_keys = "probabilities"
)

or logistic predictions:

predictions <- predict(
  classifier,
  input_fn = input,
  predict_keys = "logistic"
)
You can find all of the available keys by printing
prediction_keys(). Note, however, that not every key is available for every type of estimator. For example, regressors cannot use
"probabilities" as one of the keys, since probability output only makes sense for classification models.
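As a quick sketch, the full set of keys can be listed at the console:

```r
library(tfestimators)

# print the standard prediction keys available to canned estimators
prediction_keys()
```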
Models created via
tfestimators are persisted on disk. To obtain the location where the model artifacts are stored, call
saved_model_dir <- model_dir(classifier)
You can subsequently load the saved model (in a new session) by passing the directory to the
model_dir argument of the model constructor, and then use it for prediction or to continue training:
library(tfestimators)

linear_feature_columns <- feature_columns(column_numeric("mpg"))
dnn_feature_columns <- feature_columns(column_numeric("drat"))

loaded_model <- dnn_linear_combined_classifier(
  linear_feature_columns = linear_feature_columns,
  dnn_feature_columns = dnn_feature_columns,
  dnn_hidden_units = c(3, 3),
  dnn_optimizer = "Adagrad",
  model_dir = saved_model_dir
)
loaded_model
There are a number of estimator methods which can be used generically with any canned or custom estimator:
| Method | Description |
|---|---|
| `train()` | Trains a model given training data input_fn. |
| `predict()` | Returns predictions for given features. |
| `evaluate()` | Evaluates the model given evaluation data input_fn. |
| `train_and_evaluate()` | Trains and evaluates a model for both local and distributed configurations. |
| `export_savedmodel()` | Exports inference graph as a SavedModel into a given directory. |
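To illustrate two of the methods not shown above, here is a sketch of evaluating a trained model and exporting it as a SavedModel, reusing the classifier and input function from the earlier examples (the export path is illustrative):

```r
# evaluate the trained model; returns a named list of metrics
# (e.g. loss, and accuracy for classifiers)
metrics <- classifier %>% evaluate(input_fn = input)
metrics

# export the inference graph as a SavedModel for deployment
classifier %>% export_savedmodel("savedmodel")
```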