Achieving 100% accuracy with Hypertune #6
Comments
Thanks for the report. I need a little more info here. Do you have (1) summaries from during the hptuning job (TensorBoard screenshots are fine, but you can also add me as a reader to your GCS bucket too and I'll load them up: @google.com), and (2) your job config?
@Threynaud Can you please try out the Census sample and see if you are running into the same issues?
Closing this as inactive.
nat-henderson pushed a commit to nat-henderson/cloudml-demo that referenced this issue on Mar 20, 2018: …-readme Fix gsutil acl - needs the actual access
nat-henderson pushed a commit to nat-henderson/cloudml-demo that referenced this issue on Mar 20, 2018: …ense Remove extra XML stuff from POM.
davidcavazos pushed a commit to davidcavazos/cloudml-samples that referenced this issue on May 1, 2018: The URL was being constructed with a *storage.BucketHandle instead of the bucket's name. Fixes GoogleCloudPlatform#6. Change-Id: I3e2710c2a33a6aadb2920d5f0e1a0c2bce25cd6d
gogasca pushed a commit that referenced this issue on Apr 22, 2019: This is the combined PR for PR3 and PR5
I started using Hypertune on a dataset of mine, using almost the same code as in the samples from the documentation.
The distributed version works very well for me, but with Hypertune I manage to get an objective value (set to accuracy, like in the MNIST sample) of 1.0 = 100%, which is quite surprising. Note that I use the same config file as in the example, and I get such high accuracy even for large learning rates, close to 0.5.
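For reference, this is the shape of config I mean. It is a minimal sketch modeled on the sample's hptuning_config.yaml, not a copy of it: the metric tag, parameter name, and bounds below are assumptions for illustration.

```yaml
# Hypothetical hptuning config, in the style of the Cloud ML Engine samples.
# hyperparameterMetricTag must match the tag of the accuracy summary that
# the trainer writes; the learning-rate bounds here are illustrative only.
trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: accuracy
    maxTrials: 10
    maxParallelTrials: 2
    params:
      - parameterName: learning-rate
        type: DOUBLE
        minValue: 0.0001
        maxValue: 0.5
        scaleType: UNIT_LOG_SCALE
```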
I thought the error was on my side, but it turns out the MNIST example has the same problem. In the docs, here, 100% accuracy is also achieved with a very simple network, which is very unlikely to happen.
Also, it might not be related at all, but I noticed a big discrepancy between the metrics on the training and eval sets. You can find the corresponding Stack Overflow question here.
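In case it helps with debugging: as I understand it, the tuner reads the latest scalar summary whose tag matches hyperparameterMetricTag in the config, so if that summary is written from training-set accuracy (or under the wrong tag) rather than eval-set accuracy, the reported objective can be inflated. A minimal sketch of writing that summary, using the TF 1.x API of the samples' era; the helper name and arguments are made up for illustration:

```python
import tensorflow as tf  # TF 1.x-era API, as used by the samples of that time

def report_accuracy(output_dir, accuracy, global_step):
    """Write the scalar summary that the hyperparameter tuner reads.

    Hypothetical helper: the tag below must match hyperparameterMetricTag
    in the job config, and `accuracy` should be computed on the eval set,
    not the training set, or the tuner optimizes the wrong number.
    """
    summary = tf.Summary(value=[
        tf.Summary.Value(tag='accuracy', simple_value=accuracy)
    ])
    writer = tf.summary.FileWriter(output_dir)
    writer.add_summary(summary, global_step)
    writer.flush()
    writer.close()
```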
Thanks
EDIT: The Stack Overflow question is actually not related, I think.