General stateful metrics fixes #9446
This implements a variety of stateful metrics improvements I noted in #9253.
* Require stateful metric layers to be actually stateful.
* Prevent stateful metrics from leaking `np.float`s to the `History` object.
* Progbar: format stateful metric values as floats, like other metrics.
* `test_stateful_metrics`: also test validation-set evaluation. This makes sure the metric is reset before evaluating the validation set.
* Add support for stateful metrics in `fit_generator()` and `evaluate_generator()`.
* Document stateful metrics. It would be even better to have full-fledged stateful-layer documentation, but I lack the knowledge and experience to explain that well.
* `evaluate_generator()`: do not leak `np.float` to `History` here either.
* Revert the stateful metrics documentation until the API stabilizes.
* Progbar: explain stateful metrics handling.
* `Model.evaluate_generator()`: more consistent stateful metrics handling. Use `metrics_names` rather than `metrics` plus index juggling to skip the loss. Make loss-only output handling consistent with other `Model` methods. Rename `all_outs` to `outs_per_batch` to avoid confusion, since `all_outs` has swapped dimensions in `predict_generator()`.
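To illustrate the behaviours these commits target (a metric that accumulates state across batches, gets reset before an evaluation pass, and whose final value is reported as a plain float), here is a framework-free sketch. `RunningTruePositives` and the `evaluate` loop are hypothetical illustrations of the pattern, not Keras API:

```python
class RunningTruePositives:
    """Hypothetical stateful metric: accumulates true positives across batches."""
    stateful = True  # the PR requires stateful metric layers to actually be stateful

    def __init__(self):
        self.true_positives = 0

    def reset_states(self):
        # Called before each evaluation pass, so validation results
        # do not include state left over from training batches.
        self.true_positives = 0

    def __call__(self, y_true, y_pred):
        self.true_positives += sum(
            1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1
        )
        # A stateful metric returns the running total, not a per-batch value.
        return self.true_positives


def evaluate(batches, metric):
    """Sketch of evaluate_generator()-style handling of a stateful metric:
    its last returned value is the result (averaging per-batch values,
    as is done for ordinary metrics, would be wrong here)."""
    metric.reset_states()
    result = 0
    for y_true, y_pred in batches:
        result = metric(y_true, y_pred)
    return float(result)  # report a plain float, keeping History free of np.float
```

For example, `evaluate([([1, 1, 0], [1, 0, 0]), ([1, 0], [1, 1])], RunningTruePositives())` accumulates one true positive from each batch and returns `2.0`, while a second call on fresh batches starts again from zero thanks to `reset_states()`.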