
TF Learn

TF Learn is a simplified interface for TensorFlow, designed to get people started with predictive analytics and data mining. The library covers a variety of needs: from linear models to Deep Learning applications like text and image understanding.

Why TensorFlow?

  • TensorFlow provides a good backbone for building different shapes of machine learning applications.
  • It will continue to evolve both in the distributed direction and as general pipelining machinery.

Why TensorFlow Learn?

  • To smooth the transition from the scikit-learn world of one-liner machine learning into the more open world of building different shapes of ML models. You can start by using fit/predict and slide into TensorFlow APIs as you become comfortable.
  • To provide a set of reference models that will be easy to integrate with existing code.

Installation

Install TensorFlow, and then import learn with from tensorflow.contrib import learn, or use tf.contrib.learn directly.

Optionally you can install scikit-learn and pandas for additional functionality.
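A minimal install might look like the following (package names as published on PyPI; pin versions as appropriate for your environment):

```shell
# Core library; tf.contrib.learn ships inside the main TensorFlow package.
pip install tensorflow

# Optional extras used by some of the examples below.
pip install scikit-learn pandas
```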

Tutorials

Community

Usage

Below are a few simple examples of the API. For more examples, please see examples.

General tips:

  • It's useful to rescale a dataset to 0 mean and unit standard deviation before passing it to an Estimator. Stochastic Gradient Descent doesn't always do the right thing when variables are at very different scales.

  • Categorical variables should be encoded (for example, as one-hot vectors or embedding columns) before the input is passed to the estimator.
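Both tips can be sketched in plain NumPy (shown here instead of the library's feature-column utilities so the transforms are explicit; the arrays and category ids are illustrative):

```python
import numpy as np

# Standardize continuous features to zero mean and unit standard deviation.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

# One-hot encode an integer categorical variable before feeding it to an
# estimator: row i of the identity matrix is the one-hot vector for class i.
categories = np.array([0, 2, 1, 2])   # integer category ids in {0, 1, 2}
one_hot = np.eye(3)[categories]       # shape (4, 3)
```

In practice the same effect is achieved with scikit-learn's StandardScaler (as in the regression example below) or with feature columns, but the arithmetic is exactly this.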

Linear Classifier

Simple linear classification:

import tensorflow.contrib.learn.python.learn as learn
from sklearn import datasets, metrics

iris = datasets.load_iris()
feature_columns = learn.infer_real_valued_columns_from_input(iris.data)
classifier = learn.LinearClassifier(n_classes=3, feature_columns=feature_columns)
classifier.fit(iris.data, iris.target, steps=200, batch_size=32)
iris_predictions = list(classifier.predict(iris.data, as_iterable=True))
score = metrics.accuracy_score(iris.target, iris_predictions)
print("Accuracy: %f" % score)

Linear Regressor

Simple linear regression:

import tensorflow.contrib.learn.python.learn as learn
from sklearn import datasets, metrics, preprocessing

boston = datasets.load_boston()
x = preprocessing.StandardScaler().fit_transform(boston.data)
feature_columns = learn.infer_real_valued_columns_from_input(x)
regressor = learn.LinearRegressor(feature_columns=feature_columns)
regressor.fit(x, boston.target, steps=200, batch_size=32)
boston_predictions = list(regressor.predict(x, as_iterable=True))
score = metrics.mean_squared_error(boston.target, boston_predictions)
print("MSE: %f" % score)

Deep Neural Network

Example of a 3-layer network with 10, 20 and 10 hidden units respectively:

import tensorflow.contrib.learn.python.learn as learn
from sklearn import datasets, metrics

iris = datasets.load_iris()
feature_columns = learn.infer_real_valued_columns_from_input(iris.data)
classifier = learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=3, feature_columns=feature_columns)
classifier.fit(iris.data, iris.target, steps=200, batch_size=32)
iris_predictions = list(classifier.predict(iris.data, as_iterable=True))
score = metrics.accuracy_score(iris.target, iris_predictions)
print("Accuracy: %f" % score)

Custom model

Example of how to pass a custom model to the Estimator:

from sklearn import datasets
from sklearn import metrics
import tensorflow as tf
import tensorflow.contrib.layers.python.layers as layers
import tensorflow.contrib.learn.python.learn as learn

iris = datasets.load_iris()

def my_model(features, labels):
  """DNN with three hidden layers."""
  # Convert the labels to a one-hot tensor of shape (length of features, 3),
  # with an on-value of 1 for each one-hot vector of length 3.
  labels = tf.one_hot(labels, 3, 1, 0)

  # Create three fully connected layers respectively of size 10, 20, and 10.
  features = layers.stack(features, layers.fully_connected, [10, 20, 10])

  # Create two tensors respectively for prediction and loss.
  prediction, loss = (
      tf.contrib.learn.models.logistic_regression(features, labels)
  )

  # Create a tensor for training op.
  train_op = tf.contrib.layers.optimize_loss(
      loss, tf.contrib.framework.get_global_step(), optimizer='Adagrad',
      learning_rate=0.1)

  return {'class': tf.argmax(prediction, 1), 'prob': prediction}, loss, train_op

classifier = learn.Estimator(model_fn=my_model)
classifier.fit(iris.data, iris.target, steps=1000)

y_predicted = [
  p['class'] for p in classifier.predict(iris.data, as_iterable=True)]
score = metrics.accuracy_score(iris.target, y_predicted)
print('Accuracy: {0:f}'.format(score))

Saving / Restoring models

Each estimator supports a model_dir argument, which takes a folder path where all model information will be saved:

classifier = learn.DNNClassifier(..., model_dir="/tmp/my_model")

If you run multiple fit operations on the same Estimator, training will resume where the last operation left off, e.g.:

classifier = learn.DNNClassifier(..., model_dir="/tmp/my_model")
classifier.fit(..., steps=300)
INFO:tensorflow:Create CheckpointSaverHook
INFO:tensorflow:loss = 2.40115, step = 1
INFO:tensorflow:Saving checkpoints for 1 into /tmp/my_model/model.ckpt.
INFO:tensorflow:loss = 0.338706, step = 101
INFO:tensorflow:loss = 0.159414, step = 201
INFO:tensorflow:Saving checkpoints for 300 into /tmp/my_model/model.ckpt.
INFO:tensorflow:Loss for final step: 0.0953846.

classifier.fit(..., steps=300)
INFO:tensorflow:Create CheckpointSaverHook
INFO:tensorflow:loss = 0.113173, step = 301
INFO:tensorflow:Saving checkpoints for 301 into /tmp/my_model/model.ckpt.
INFO:tensorflow:loss = 0.175782, step = 401
INFO:tensorflow:loss = 0.119735, step = 501
INFO:tensorflow:Saving checkpoints for 600 into /tmp/my_model/model.ckpt.
INFO:tensorflow:Loss for final step: 0.0518137.

To restore checkpoints to a new Estimator, just pass it the same model_dir argument, e.g.:

classifier = learn.DNNClassifier(..., model_dir="/tmp/my_model")
classifier.fit(..., steps=300)
INFO:tensorflow:Create CheckpointSaverHook
INFO:tensorflow:loss = 1.16335, step = 1
INFO:tensorflow:Saving checkpoints for 1 into /tmp/my_model/model.ckpt.
INFO:tensorflow:loss = 0.176995, step = 101
INFO:tensorflow:loss = 0.184573, step = 201
INFO:tensorflow:Saving checkpoints for 300 into /tmp/my_model/model.ckpt.
INFO:tensorflow:Loss for final step: 0.0512496.

classifier2 = learn.DNNClassifier(..., model_dir="/tmp/my_model")
classifier2.fit(..., steps=300)
INFO:tensorflow:Create CheckpointSaverHook
INFO:tensorflow:loss = 0.0543797, step = 301
INFO:tensorflow:Saving checkpoints for 301 into /tmp/my_model/model.ckpt.
INFO:tensorflow:loss = 0.101036, step = 401
INFO:tensorflow:loss = 0.137956, step = 501
INFO:tensorflow:Saving checkpoints for 600 into /tmp/my_model/model.ckpt.
INFO:tensorflow:Loss for final step: 0.0162506.

Summaries

If you supply a model_dir argument to your Estimators, TensorFlow will write summaries for the loss and histograms for variables into that directory. (You can also add custom summaries in your custom model function by calling summary operations such as tf.summary.scalar.)

To view the summaries in TensorBoard, run the following command, where logdir is the model_dir for your Estimator:

tensorboard --logdir=/tmp/tf_examples/my_model_1

and then load the reported URL.

Graph visualization

(Image: graph of the text classification RNN, as rendered in TensorBoard.)

Loss visualization

(Image: loss curve of the text classification RNN, as rendered in TensorBoard.)

More examples

See the examples folder for additional end-to-end examples.