
Conversation


@Adrrei Adrrei commented Oct 30, 2017

With limited resources, it is not always possible to evaluate a model on every checkpoint from the first to the last while simultaneously training it. I've added a new flag, 'eval_from_ckpt', which, when set, evaluates all checkpoints from and including the specified one.

Example usage:
python C:/.../object_detection/eval.py \
    --logtostderr \
    --pipeline_config_path=C:/.../models/name.config \
    --eval_dir=C:/.../testing \
    --checkpoint_dir=C:/.../training \
    --eval_from_ckpt=840

In this case, all checkpoints from checkpoint 840 to 'latest_checkpoint' will be evaluated.
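For reference, here is a minimal sketch (under TF 1.x) of the kind of checkpoint-selection logic such a flag implies; the flag definition and the checkpoints_to_evaluate helper are illustrative assumptions, not necessarily the exact code in this PR:

import tensorflow as tf

flags = tf.app.flags
flags.DEFINE_integer('eval_from_ckpt', 0,
                     'Evaluate all checkpoints from and including this step.')
FLAGS = flags.FLAGS


def checkpoints_to_evaluate(checkpoint_dir, eval_from_ckpt):
  """Returns paths of checkpoints whose global step is >= eval_from_ckpt."""
  ckpt_state = tf.train.get_checkpoint_state(checkpoint_dir)
  selected = []
  for path in ckpt_state.all_model_checkpoint_paths:
    # Checkpoint paths look like '.../model.ckpt-840'; the suffix is the step.
    step = int(path.split('-')[-1])
    if step >= eval_from_ckpt:
      selected.append(path)
  return selected

Note that all_model_checkpoint_paths only lists checkpoints still recorded in the checkpoint state file, so checkpoints older than the saver's max_to_keep limit may already have been discarded.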

@tensorflow-jenkins
Collaborator

Can one of the admins verify this patch?

MarkDaoust and others added 25 commits March 11, 2018 20:39
* Adding logging utils

* restore utils

* delete old file

* update inputs and docstrings

* make /official a python module

* remove /utils directory

* Update readme for python path setting

* Change readme texts
Add deeplab to the readme.md file under models/research.
Some Python files under models/research use "Rather then" instead of "Rather than" in comments; fix these comment typos.
In the `inference` method we can get `batch_size` directly from `images.get_shape()[0]`, since the input may not be built with the `cifar.inputs` method; replace `FLAGS.batch_size` with `images.get_shape()[0]` (a sketch of this change follows the commit list below).
* Adding logging utils

* restore utils

* delete old file

* update inputs and docstrings

* Update import and fix typos

* Fix formatting and comments

* Update tests
* Internal change.

PiperOrigin-RevId: 187042423

* Internal change.

PiperOrigin-RevId: 187072380

* Opensource float and eight-bit fixed-point mobilenet_v1 training and eval scripts.

PiperOrigin-RevId: 187106140

* Initial check-in for Mobilenet V2

PiperOrigin-RevId: 187213595

* Allow configuring batch normalization decay and epsilon in MobileNet v1

PiperOrigin-RevId: 187425294

* Allow overriding NASNet model HParams.

This is a change to the API that will allow users to pass in their own configs
to the building functions, which should make these APIs much more customizable
for end-user cases.

This change removes the use_aux_head argument from the model construction
functions, which is no longer necessary given that the use_aux_head option is
configurable in the model config. For example, for the mobile ImageNet model,
the auxiliary head can be disabled using:

config = nasnet.mobile_imagenet_config()
config.set_hparam('use_aux_head', 0)
logits, endpoints = nasnet.build_nasnet_mobile(
    inputs, num_classes, config=config)
PiperOrigin-RevId: 188617685

* Automated g4 rollback of changelist 188617685

PiperOrigin-RevId: 188619139

* Removes spurious comment
The current schema contains the entity information about the model and the training data metadata, as well as the machine config. A future change will add the benchmark metrics.

The JSON schema can be used to create a BigQuery table. A sample table can be found at
https://bigquery.cloud.google.com/table/tf-benchmark-dashboard:test_benchmark.benchmark_run.
Create groups of arg parsers and convert the official resnet model to the new arg parsers.
Restore the --version flag in the resnet parser.
1. Added TensorFlow version information.
2. Added environment variables.
3. Fixed a typo in the hyperparameters.
4. Added cloud-related information.
* Removing dependency on tf.test

* New lines
Add data schema for the benchmark run in BigQuery.
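As an illustration of the `FLAGS.batch_size` replacement mentioned in the "Change readme texts" commit above, here is a minimal sketch of how an `inference` function can derive the batch size from its input tensor. The surrounding layers are omitted and the function body is a simplified assumption, not the actual cifar10 model code:

import tensorflow as tf


def inference(images):
  """Sketch of the relevant part of a CIFAR-style inference() function."""
  # ... convolutional and pooling layers producing `pool2` are omitted ...
  pool2 = images  # stand-in for the last pooling output in this sketch

  # Old: reshape = tf.reshape(pool2, [FLAGS.batch_size, -1])
  # New: take the batch size from the input tensor itself, so inference()
  # does not assume the input was built with cifar10.inputs().
  batch_size = images.get_shape().as_list()[0]  # integer value of images.get_shape()[0]
  reshape = tf.reshape(pool2, [batch_size, -1])
  return reshape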
@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here (e.g. I signed it!) and we'll verify it.


What to do if you already signed the CLA

Individual signers
Corporate signers

@MarkDaoust
Member

Hi, thanks for taking the time to make a PR.

Unfortunately we can't submit it in this state. It looks like you replayed master on top of your PR, instead of the other way around.

The easiest way to get a submittable PR now is probably to copy your branch (git checkout -b rebase-test), run git rebase master to replay your changes on top of master, and open a new PR from that branch.

@MarkDaoust MarkDaoust closed this Dec 16, 2018
@Adrrei
Author

Adrrei commented Dec 16, 2018

Sorry, this was unintended.
Please disregard the latest pull request.
