Evaluate multiple checkpoints. #2643
Conversation
Can one of the admins verify this patch?
unnecessary variable assignment in loop
* Adding logging utils
* restore utils
* delete old file
* update inputs and docstrings
* make /official a python module
* remove /utils directory
* Update readme for python path setting
* Change readme texts
Add DeepLab to the README.md file under models/research
In some Python files under models/research, comments mistakenly used "Rather then" instead of "Rather than".
We can get `batch_size` directly via `images.get_shape()[0]` in the `inference` method, since the inputs may not always be built with the `cifar.inputs` method.
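For context, here is a minimal sketch of what that change could look like in a cifar10-style `inference()`; the surrounding layers are elided and the names are illustrative, not the repo's exact code:

```python
import tensorflow as tf

def inference(images):
    # ... conv/pool layers elided in this sketch ...
    pool2 = images  # stand-in for the last pooling output

    # Before: the flatten step assumed a global flag:
    #   reshape = tf.reshape(pool2, [FLAGS.batch_size, -1])
    # After: derive the batch size from the input tensor itself, so the
    # graph works even when inputs don't come from cifar.inputs().
    batch_size = images.get_shape().as_list()[0]
    reshape = tf.reshape(pool2, [batch_size, -1])
    return reshape
```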
Fix comment typos under models/research
Replace `FLAGS.batch_size` with `images.get_shape()[0]`
…flow#3576)
* Remove cd and checkout
* Updating comment
* Adding logging utils
* restore utils
* delete old file
* update inputs and docstrings
* Update import and fix typos
* Fix formatting and comments
* Update tests
* Internal change.
PiperOrigin-RevId: 187042423
* Internal change.
PiperOrigin-RevId: 187072380
* Opensource float and eight-bit fixed-point mobilenet_v1 training and eval scripts.
PiperOrigin-RevId: 187106140
* Initial check-in for Mobilenet V2
PiperOrigin-RevId: 187213595
* Allow configuring batch normalization decay and epsilon in MobileNet v1
PiperOrigin-RevId: 187425294
* Allow overriding NASNet model HParams.
This is a change to the API that will allow users to pass in their own configs
to the building functions, which should make these APIs much more customizable
for end-user cases.
This change removes the use_aux_head argument from the model construction
functions, which is no longer necessary given that the use_aux_head option is
configurable in the model config. For example, for the mobile ImageNet model,
the auxiliary head can be disabled using:
```python
config = nasnet.mobile_imagenet_config()
config.set_hparam('use_aux_head', 0)
logits, endpoints = nasnet.build_nasnet_mobile(
    inputs, num_classes, config=config)
```
PiperOrigin-RevId: 188617685
* Automated g4 rollback of changelist 188617685
PiperOrigin-RevId: 188619139
* Removes spurious comment
The current schema contains the entity information about the model and training data metadata, as well as the machine config. A future change will add the benchmark metrics. The JSON schema can be used to create a BigQuery table; a sample table can be found at https://bigquery.cloud.google.com/table/tf-benchmark-dashboard:test_benchmark.benchmark_run.
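Purely as an illustration of the kind of record such a schema might describe (the field names below are hypothetical, not the actual schema; the real one lives in the repo and the linked BigQuery table):

```python
# Hypothetical benchmark_run record matching the description above.
benchmark_run = {
    "model_name": "resnet50",                             # model entity info
    "dataset": {"name": "imagenet", "version": "2012"},   # train data metadata
    "machine_config": {                                   # machine config
        "cpu_info": "Intel Xeon, 8 cores",
        "gpu_info": "1 x NVIDIA V100",
    },
    # Benchmark metrics are described as a future change.
}
```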
Create groups of arg parsers and convert the official resnet model to the new arg parsers.
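A minimal sketch of the "groups of arg parsers" idea, assuming standard argparse parent parsers; the group names and flags here are illustrative, not the repo's actual modules:

```python
import argparse

def base_parser():
    # Flags shared by every model script.
    p = argparse.ArgumentParser(add_help=False)
    p.add_argument('--data_dir', type=str, default='/tmp/data')
    p.add_argument('--model_dir', type=str, default='/tmp/model')
    return p

def performance_parser():
    # Flags related to input-pipeline and training performance.
    p = argparse.ArgumentParser(add_help=False)
    p.add_argument('--batch_size', type=int, default=32)
    return p

# A model such as ResNet composes only the groups it needs,
# then adds its own model-specific flags.
parser = argparse.ArgumentParser(parents=[base_parser(), performance_parser()])
parser.add_argument('--version', type=int, choices=[1, 2], default=2)
flags = parser.parse_args()
```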
restore --version flag in resnet parser
1. Added TensorFlow version information.
2. Added environment variables.
3. Fixed a typo for hyperparameters.
4. Added cloud-related information.
* Removing dependency on tf.test
* New lines
Add data schema for the benchmark run in Bigquery.
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here (e.g. "I signed it!") and we'll verify it.

What to do if you already signed the CLA:
- Individual signers
- Corporate signers
Hi, thanks for taking the time to make a PR. But we can't submit it like this. It looks like you replayed master on top of your PR, instead of the other way around (?). Probably the easiest thing to do now to get a submittable PR is to copy your branch (
Sorry, this was unintended.
It may not always be possible to evaluate the various models from start (first checkpoint) to finish (last checkpoint) while simultaneously training the model (limited resources). I've added a new flag, `eval_from_ckpt`, which, when set, evaluates all checkpoints from and including the specified one.
Example usage:
```
python C:/.../object_detection/eval.py \
    --logtostderr \
    --pipeline_config_path=C:/.../models/name.config \
    --eval_dir=C:/.../testing \
    --checkpoint_dir=C:/.../training \
    --eval_from_ckpt=840
```

In this case, all checkpoints from checkpoint 840 to `latest_checkpoint` will be evaluated.
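For reference, a rough sketch of how such a flag could enumerate the checkpoints to evaluate (TF1-style checkpoint files assumed; everything except the flag's meaning is illustrative, not the PR's actual code):

```python
import os
import re
import tensorflow as tf

def checkpoints_from(checkpoint_dir, first_step):
    """Yield checkpoint paths whose global step is >= first_step."""
    state = tf.train.get_checkpoint_state(checkpoint_dir)
    for path in state.all_model_checkpoint_paths:
        # Checkpoints are conventionally named like 'model.ckpt-840'.
        match = re.search(r'ckpt-(\d+)$', os.path.basename(path))
        if match and int(match.group(1)) >= first_step:
            yield path

# e.g. evaluate checkpoint 840 through the latest one:
# for ckpt_path in checkpoints_from('.../training', 840):
#     run_evaluation(ckpt_path)  # hypothetical evaluation entry point
```

Note that `tf.train.get_checkpoint_state` only lists checkpoints still referenced in the `checkpoint` file, so earlier ones must not have been garbage-collected (e.g. by a small `max_to_keep`).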