From e4fa70d844f04d6d57af2aa06235e83e684bbe0a Mon Sep 17 00:00:00 2001
From: superjom
Date: Wed, 21 Mar 2018 12:20:41 +0800
Subject: [PATCH] add evaluation records

---
 doc/howto.md | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/doc/howto.md b/doc/howto.md
index 934aaaf..70734cb 100644
--- a/doc/howto.md
+++ b/doc/howto.md
@@ -1,21 +1,21 @@
-# How Tos
+# How-Tos
 
 ## Concepts
-- `baseline`, one of the best KPI records in the history, the baseline should keep being updated.
+- `baseline`, one of the best KPI records in history; the baseline should keep being updated.
-- `Factor`, alias for KPI to make code more readable.
+- `Factor`, an alias for KPI to make the code more readable.
 
 ## Philosophy
-`MacroCE` is a highly customizable framework which 
+`MacroCE` is a highly customizable framework which
-- triggers the execution of every evaluation and compare the KPIs with baseline,
+- triggers the execution of every evaluation and compares the KPIs with the baseline,
 - helps to maintain the latest baseline,
-- displays the evaluation status or human-readable reports with a web page, and give an alert by sending emails if the evaluation of some version fails.
+- displays the evaluation status or human-readable reports with a web page, and gives an alert by sending emails if the evaluation of some version fails.
- 
+
 It doesn't
-- care about the details about the evalution, in other words, one can embed any logics into a evaluation.
+- care about the details of the evaluation; in other words, one can embed any logic into an evaluation.
 
 ## Adding new KPIs
@@ -28,17 +28,17 @@
 The baseline repo is at [baseline repo](https://github.com/Superjomn/paddle-modelci-baseline).
 
-Let's take the existing evaluation [resnet30](https://github.com/Superjomn/paddle-modelci-baseline/tree/master/resnet30) 
+Let's take the existing evaluation [resnet30](https://github.com/Superjomn/paddle-modelci-baseline/tree/master/resnet30)
 for example, the files required by `MacroCE` are
 
-- `train.xsh`, it defines how to run this evaluation and generate the KPI records,
+- `train.xsh`, which defines how to run this evaluation and generates the KPI records,
-- `continuous_evaluation.py`, it tells `MacroCE` the KPI Factors this evaluation uses.
+- `continuous_evaluation.py`, which tells `MacroCE` the KPI Factors this evaluation uses,
-- `history/`, this is a directory and includes the baseline records.
+- `history/`, a directory that includes the baseline records.
 
-The `train.xsh` is just a script and nothing to do with MacroCE, so one can embed any logics in it,
+The `train.xsh` is just a script and has nothing to do with MacroCE, so one can embed any logic in it,
-for example, any logics including data preparation, or even request a remote service.
+for example, anything from data preparation to requesting a remote service.
 
-The `continuous_evaluation.py` of `resnet30` looks like this
+The `continuous_evaluation.py` of `resnet30` looks like this:
 
 ```python
 import os
@@ -56,13 +56,13 @@ tracking_factors = [
 ```
-it creates two instances of KPI Factors `train_cost_factor` and `train_duration_factor`,
+It creates two instances of KPI Factors, `train_cost_factor` and `train_duration_factor`;
-both are defined in `MacroCE/core.py`. 
+both are defined in `MacroCE/core.py`.
 
-The list `tracking_factors` is required by `MacroCE` which tells the KPIs this evaluation want to evaluate.
+The list `tracking_factors` is required by `MacroCE`; it tells which KPIs this evaluation wants to track.
 
-Into the details of `train.xsh`, the `resnet30/model.py` defines some details of running the model (MacroCE do not care those logics, but the `train.xsh` need to add records to the KPI Factors in some way).
+As for the details of `train.xsh`: `resnet30/model.py` defines how the model runs (MacroCE does not care about those details, but `train.xsh` needs to add records to the KPI Factors in some way).
 
-Some code snippets related to adding KPI records are as follows
+Some code snippets related to adding KPI records are as follows:
 
 ```python
 # import KPI Factor instances
 from continuous_evaluation import (train_cost_factor, train_duration_factor,
@@ -70,16 +70,16 @@ tracking_factors)
 
 # ...
- 
+
 for batch_id, data in enumerate(train_reader()):
     # ... train the model
     # add factor record
     train_cost_factor.add_record(np.array(avg_cost_, dtype='float32'))
     train_duration_factor.add_record(batch_end - batch_start)
- 
-# when the execution ends, persist all the factors to to files.
+
+# when the execution ends, persist all the factors to files.
 for factor in tracking_factors:
     factor.persist()
 ```
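
To make the Factor API above concrete, here is a minimal, self-contained sketch of what such a KPI Factor could look like. The real classes live in `MacroCE/core.py`, which this patch does not show, so the class name, constructor signature, and storage format below are assumptions for illustration, not the actual implementation:

```python
import os
import pickle


class Factor(object):
    """Hypothetical KPI tracker (the real one lives in MacroCE/core.py)."""

    def __init__(self, name, out_dir='.'):
        self.name = name
        self.out_dir = out_dir
        self.records = []

    def add_record(self, record):
        # Called from the training loop, e.g. once per batch.
        self.records.append(record)

    def persist(self):
        # Write every collected record to a file so that MacroCE can
        # compare this run against the baseline kept under history/.
        path = os.path.join(self.out_dir, '%s_factor.pkl' % self.name)
        with open(path, 'wb') as f:
            pickle.dump(self.records, f)


# Mirroring continuous_evaluation.py: instantiate the factors and expose
# the tracking_factors list that MacroCE reads.
train_cost_factor = Factor('train_cost')
train_duration_factor = Factor('train_duration')
tracking_factors = [train_cost_factor, train_duration_factor]
```

Under this reading, `train.xsh` only has to import the instances, call `add_record` during training, and loop over `tracking_factors` calling `persist` at the end, which is exactly the pattern the snippets above show.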