Keras 2.2.0

@fchollet fchollet released this Jun 6, 2018 · 21 commits to master since this release

Areas of improvement

  • New model definition API: Model subclassing.
  • New input mode: ability to call models on TensorFlow tensors directly (TensorFlow backend only).
  • Improved feature coverage of Keras with the Theano and CNTK backends.
  • Bug fixes and performance improvements.
  • Large refactors improving code structure, code health, and reducing test time. In particular:
    • The Keras engine now follows a much more modular structure.
    • The Sequential model is now a plain subclass of Model.
    • The modules applications and preprocessing are now externalized to their own repositories (keras-applications and keras-preprocessing).

API changes

  • Add Model subclassing API (details below).
  • Allow symbolic tensors to be fed to models, with TensorFlow backend (details below).
  • Enable CNTK and Theano support for layers SeparableConv1D, SeparableConv2D, as well as backend methods separable_conv1d and separable_conv2d (previously only available for TensorFlow).
  • Enable CNTK and Theano support for applications Xception and MobileNet (previously only available for TensorFlow).
  • Add MobileNetV2 application (available for all backends).
  • Enable loading external (non built-in) backends by changing your ~/.keras.json configuration file (e.g. PlaidML backend).
  • Add sample_weight in ImageDataGenerator.
  • Add preprocessing.image.save_img utility to write images to disk.
  • Default Flatten layer's data_format argument to None (which defaults to global Keras config).
  • Sequential is now a plain subclass of Model. The attribute sequential.model is deprecated.
  • Add baseline argument in EarlyStopping (stop training if a given baseline isn't reached).
  • Add data_format argument to Conv1D.
  • Make the model returned by multi_gpu_model serializable.
  • Support input masking in TimeDistributed layer.
  • Add an advanced_activation layer ReLU, making the ReLU activation easier to configure while retaining easy serialization capabilities.
  • Add axis=-1 argument in backend crossentropy functions specifying the class prediction axis in the input tensor.
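
As a sketch of the external-backend change above: loading a non-built-in backend only requires pointing the "backend" key of your ~/.keras.json file at the backend's module path. The PlaidML module path below is taken from that project's documentation; the remaining keys are the standard Keras configuration defaults.

```json
{
    "backend": "plaidml.keras.backend",
    "floatx": "float32",
    "epsilon": 1e-07,
    "image_data_format": "channels_last"
}
```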

New model definition API: Model subclassing

In addition to the Sequential API and the functional Model API, you may now define models by subclassing the Model class and writing your own forward pass in the call method:

import keras

class SimpleMLP(keras.Model):

    def __init__(self, use_bn=False, use_dp=False, num_classes=10):
        super(SimpleMLP, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes

        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
        if self.use_dp:
            self.dp = keras.layers.Dropout(0.5)
        if self.use_bn:
            self.bn = keras.layers.BatchNormalization(axis=-1)

    def call(self, inputs):
        x = self.dense1(inputs)
        if self.use_dp:
            x = self.dp(x)
        if self.use_bn:
            x = self.bn(x)
        return self.dense2(x)

model = SimpleMLP()

Layers are defined in __init__(self, ...), and the forward pass is specified in call(self, inputs). In call, you may specify custom losses by calling self.add_loss(loss_tensor) (like you would in a custom layer).
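
As a hedged sketch of the add_loss pattern (the layer sizes, class name, and the 0.01 penalty weight are illustrative, not from this release), a subclassed model might add an activity regularization penalty from inside call:

```python
import keras
from keras import backend as K

class RegularizedMLP(keras.Model):

    def __init__(self, num_classes=10):
        super(RegularizedMLP, self).__init__(name='reg_mlp')
        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')

    def call(self, inputs):
        x = self.dense1(inputs)
        # Custom loss added during the forward pass, as you would
        # in a custom layer: penalize large hidden activations.
        self.add_loss(0.01 * K.sum(K.square(x)))
        return self.dense2(x)

model = RegularizedMLP()
```

The penalty tensor registered via self.add_loss is collected by the model and added to the compiled loss during training.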

New input mode: symbolic TensorFlow tensors

With Keras 2.2.0 and TensorFlow 1.8 or higher, you may fit, evaluate and predict using symbolic TensorFlow tensors (that are expected to yield data indefinitely). The API is similar to the one in use in fit_generator and other generator methods:

iterator = training_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.fit(x, y, steps_per_epoch=100, epochs=10)

iterator = validation_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.evaluate(x, y, steps=50)

This is achieved by dynamically rewiring the TensorFlow graph to feed the input tensors to the existing model placeholders. There is no performance loss compared to building your model on top of the input tensors in the first place.

Breaking changes

  • Remove legacy Merge layers and associated functionality (remnant of Keras 0), which were deprecated in May 2016, with full removal initially scheduled for August 2017. Models from the Keras 0 API using these layers cannot be loaded with Keras 2.2.0 and above.
  • The truncated_normal base initializer now returns values that are scaled by ~0.9 (resulting in correct variance value after truncation). This has a small chance of affecting initial convergence behavior on some models.
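
For reference, the ~0.9 factor follows from the variance of a standard normal truncated at two standard deviations (which is what truncated_normal samples from). A quick standard-library check of that number:

```python
import math

# Standard normal pdf and cdf
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Variance of N(0, 1) truncated to [-a, a], with a = 2 since
# truncated_normal discards samples beyond two standard deviations.
a = 2.0
var = 1 - 2 * a * phi(a) / (2 * Phi(a) - 1)
std = math.sqrt(var)
print(std)  # ~0.8796, the "scaled by ~0.9" correction factor
```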


Thanks to our 46 contributors whose commits are featured in this release:

@ASvyatkovskiy, @AmirAlavi, @Anirudh-Swaminathan, @DavidAriel, @Dref360, @JonathanCMitchell, @KuzMenachem, @PeterChe1990, @Saharkakavand, @StefanoCappellini, @ageron, @askskro, @bileschi, @bonlime, @bottydim, @brge17, @briannemsick, @bzamecnik, @christian-lanius, @clemens-tolboom, @dschwertfeger, @dynamicwebpaige, @farizrahman4u, @fchollet, @fuzzythecat, @ghostplant, @giuscri, @huyu398, @jnphilipp, @masstomato, @morenoh149, @mrTsjolder, @nittanycolonial, @r-kellerm, @reidjohnson, @roatienza, @sbebo, @stevemurr, @taehoonlee, @tiferet, @tkoivisto, @tzerrell, @vkk800, @wangkechn, @wouterdobbels, @zwang36wang