In this article, we’ll develop a custom estimator to be used with the Abalone dataset. This dataset provides information on the physical characteristics of a number of abalones (a type of sea snail); we’ll use these characteristics to predict the number of rings in the shell. As described in the dataset’s documentation:

Predicting the age of abalone from physical measurements. The age of abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings through a microscope – a boring and time-consuming task. Other measurements, which are easier to obtain, are used to predict the age. Further information, such as weather patterns and location (hence food availability) may be required to solve the problem.

We’ll start by defining a function that will download and save the various abalone datasets we’ll use here. These datasets are hosted freely on the TensorFlow website.
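A minimal sketch of such a download helper is shown below. The file names and URLs are assumptions based on the datasets hosted at download.tensorflow.org; adjust them as needed.

```r
# A sketch of the download helper. The exact URLs are assumptions based on
# the abalone files hosted on the TensorFlow website.
maybe_download_abalone <- function(filename, url) {
  if (!file.exists(filename))
    download.file(url, filename)
  filename
}

train_file   <- maybe_download_abalone("abalone_train.csv",
                                       "http://download.tensorflow.org/data/abalone_train.csv")
test_file    <- maybe_download_abalone("abalone_test.csv",
                                       "http://download.tensorflow.org/data/abalone_test.csv")
predict_file <- maybe_download_abalone("abalone_predict.csv",
                                       "http://download.tensorflow.org/data/abalone_predict.csv")
```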

Because the raw datasets are not supplied with column names, we define them explicitly here (in the order they appear in the dataset), and apply them when the datasets are downloaded.
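For example, the column names can be declared once and applied as the files are read. This sketch assumes `train_file`, `test_file`, and `predict_file` hold the local paths of the downloaded CSV files, which ship without a header row.

```r
# Column names for the abalone data, in the order the columns appear
# in the raw CSV files.
abalone_cols <- c(
  "length", "diameter", "height",
  "whole_weight", "shucked_weight", "viscera_weight", "shell_weight",
  "num_rings"
)

train_data   <- read.csv(train_file,   header = FALSE, col.names = abalone_cols)
test_data    <- read.csv(test_file,    header = FALSE, col.names = abalone_cols)
predict_data <- read.csv(predict_file, header = FALSE, col.names = abalone_cols)
```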

With the abalone datasets now available locally, we begin by defining an input function for our soon-to-be-defined estimator. Here, we define an input function generator – a function that accepts a dataset and returns an input function that pulls data from the associated dataset. Using this, we can easily generate input functions for each of our datasets.

Note that we are attempting to predict the num_rings variable, and accept all other variables contained within the dataset as potentially associated features.

abalone_input_fn <- function(dataset) {
  input_fn(dataset, features = -num_rings, response = num_rings)
}

Next, we define our custom model function. Canned estimators provided by TensorFlow / the tfestimators package come with pre-packaged model functions; when you wish to define your own custom estimator, you must provide your own model function. This is the function responsible for constructing the actual neural network to be used in your model, and should be created by composing TensorFlow’s primitives for layers together.

We’ll construct a network with two fully-connected hidden layers, and a final output layer. After you’ve constructed your network and defined the optimizer and loss function you wish to use, you can call the estimator_spec() function to construct your estimator.

The model function should accept the following parameters:

  • features: The feature columns (normally supplied by an input function);

  • labels: The true labels, to be used for computing the loss;

  • mode: A key that specifies whether training, evaluation, or prediction is being performed;

  • params: A set of custom parameters; typically supplied by the user of your custom estimator when instances of this estimator are created. (For example, we’ll see later that the learning_rate is supplied through here.)

  • config: Runtime configuration values; typically unneeded by custom estimators, but can be useful if you need to introspect the state of the associated TensorFlow session.
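A model function along these lines might look as follows. This is a sketch, not the only possible implementation: the hidden layer sizes (10 units each) and the use of mean squared error with gradient descent are assumptions.

```r
library(tensorflow)
library(tfestimators)

model_fn <- function(features, labels, mode, params, config) {

  # Two fully-connected hidden layers, then a single-unit output layer
  # producing the predicted number of rings.
  hidden1 <- tf$layers$dense(features, 10L, activation = tf$nn$relu)
  hidden2 <- tf$layers$dense(hidden1, 10L, activation = tf$nn$relu)
  output  <- tf$layers$dense(hidden2, 1L)

  predictions <- list(rings = output)

  # In prediction mode, no labels are available, so we return early
  # without defining a loss or training operation.
  if (mode == mode_keys()$PREDICT)
    return(estimator_spec(mode = mode, predictions = predictions))

  # Mean squared error between the predicted and true ring counts
  # (an assumed choice of loss for this regression problem).
  loss <- tf$losses$mean_squared_error(
    tf$cast(labels, output$dtype),
    output
  )

  # Gradient descent, with the learning rate supplied through 'params'.
  optimizer <- tf$train$GradientDescentOptimizer(
    learning_rate = params$learning_rate
  )
  train_op <- optimizer$minimize(loss, global_step = tf$train$get_global_step())

  estimator_spec(
    mode = mode,
    predictions = predictions,
    loss = loss,
    train_op = train_op
  )
}
```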

Having defined our model function, we can now use the estimator() function to create an instance of our custom estimator from it.

model <- estimator(model_fn, params = list(learning_rate = 0.001))

Now, we can train, evaluate, and predict using our estimator.

train(model, input_fn = abalone_input_fn(train_data))
evaluate(model, input_fn = abalone_input_fn(test_data))
predict(model, input_fn = abalone_input_fn(predict_data))