From 01cb5b1195b8999a70efcea9d2266445809b7e5c Mon Sep 17 00:00:00 2001
From: dpressel
Date: Thu, 29 Nov 2018 22:50:23 -0500
Subject: [PATCH] update EP

---
 docs/v1.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/v1.md b/docs/v1.md
index 7f16ca7f6..16d514843 100644
--- a/docs/v1.md
+++ b/docs/v1.md
@@ -36,7 +36,7 @@ The underlying changes have simplified mead considerably, making it easier to de
  - The `Examples` and `DataFeed` objects are largely reused between tasks except when this is not possible
  - The batching operation on the examples is now completely generalized whih makes adding custom features simple
  - **Easier Extension Points**: We have removed the complexity of `addon` registration, preferring instead simple decorators to the previous method of convention-based plugins. Documentation can be found [here](https://github.com/dpressel/baseline/blob/feature/v1/docs/addons.md)
- - **Training Simplifications**: A design goal was that a user should easily be able to train a model without using `mead`. It should be easier use the Baseline API to [train directly](https://github.com/dpressel/baseline/blob/feature/v1/api-examples/tf-train-classifier-from-scratch.py) or to [use external software to train a Baseline model](https://github.com/dpressel/baseline/blob/feature/v1/api-examples/tf-estimator.py)
+ - **Training Simplifications**: A design goal was that a user should easily be able to train a model without using `mead`. It should be easier use the Baseline API to [train directly](https://github.com/dpressel/baseline/blob/feature/v1/api-examples/tf-train-from-scratch.py) or to [use external software to train a Baseline model](https://github.com/dpressel/baseline/blob/feature/v1/api-examples/tf-estimator.py)
  - Multi-GPU support is consistent, defaults to all `CUDA_VISIBLE_DEVICES`
  - **More Documentation**: There is more code documentation, as well as API examples that show how to use the **Baseline** API directly. These are also used to self-verify that the API is as simple to use as possible. There is forthcoming documentation on the way that `addons` work under the hood, as this has been a point of confusion for some users
  - **Standardized Abstractions**: We have attempted to unify a set of patterns for each model/task and to try and ensure that the routines making up execution share a common naming convention and flow across each framework
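
Note on the `addon` change referenced in the hunk above: the snippet below is a minimal, self-contained sketch of the decorator-based registration pattern the doc describes, not the actual Baseline API. The `MODEL_REGISTRY` dictionary, the `register_model` decorator, and its `task`/`name` arguments are illustrative assumptions; the real extension points are documented in the docs/addons.md file linked in the patch.

```python
# Hypothetical sketch of decorator-based addon registration (not the Baseline API).
from collections import defaultdict

# Assumed registry keyed first by task (e.g. 'classify'), then by model name
MODEL_REGISTRY = defaultdict(dict)

def register_model(task, name):
    """Return a decorator that records a model class under (task, name)."""
    def wrapper(cls):
        MODEL_REGISTRY[task][name] = cls
        return cls
    return wrapper

@register_model(task='classify', name='my-custom')
class MyCustomClassifier:
    """A user-defined model that can now be looked up by name."""
    pass

# At training time the class is found by name rather than by file-naming convention
model_cls = MODEL_REGISTRY['classify']['my-custom']
print(model_cls.__name__)
```

Compared to the previous convention-based plugins, a decorator of this shape makes the mapping from a configured model name to a user class explicit, which is the simplification the patch's documentation is pointing at.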