
Protobuf failure #52

Open
latkins opened this issue Dec 2, 2017 · 10 comments

Comments

@latkins

latkins commented Dec 2, 2017

I'm occasionally getting an error of the following form:

 [libprotobuf FATAL google/protobuf/wire_format.cc:830] CHECK failed: (output->ByteCount()) == (expected_endpoint): : Protocol message serialized to a size different from what was originally expected.  Perhaps it was modified by another thread during serialization?
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  CHECK failed: (output->ByteCount()) == (expected_endpoint): : Protocol message serialized to a size different from what was originally expected.  Perhaps it was modified by another thread during serialization?

Is this some underlying tensorboard issue, or is it due to tensorboard-pytorch?

@lanpa
Owner

lanpa commented Dec 4, 2017

I can't tell from that alone. Which program emitted that message? Did it stop your training?

@latkins
Author

latkins commented Dec 6, 2017

This occurs when performing a hyperparameter search, where a new writer is created (and closed) for each set of parameters. It doesn't seem to occur at any obvious point - e.g. it isn't the second time a writer is created. It does stop training, yes. I can try and make a minimal example if that would help!
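The rough shape of the search loop is the following (simplified sketch; param_grid, build_model, and train_one_epoch stand in for my actual code):

```python
from tensorboardX import SummaryWriter

for i, params in enumerate(param_grid):            # placeholder search loop
    writer = SummaryWriter('runs/search_{}'.format(i))
    model = build_model(params)                    # placeholder
    for epoch in range(num_epochs):
        loss = train_one_epoch(model, params)      # placeholder
        writer.add_scalar('loss', loss, epoch)
    writer.close()                                 # closed before the next run starts
```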

@lanpa
Owner

lanpa commented Dec 6, 2017

A reproducible example would be a great help, thanks!
There is a writer.close() method. Did you close the old writer before opening a new one?

@latkins
Author

latkins commented Dec 6, 2017

Ok, will write something! Yes, I used writer.close().

@lanpa
Owner

lanpa commented Dec 18, 2017

Hi, what is your protobuf version?

@lanpa lanpa closed this as completed Mar 28, 2018
@TengdaHan

Same here. I occasionally get the same error, but it disappears if I do not use tensorboard.

@lanpa lanpa reopened this Apr 5, 2018
@lanpa
Owner

lanpa commented Apr 5, 2018

@TengdaHan Can you provide more info?

@jendrikjoe

jendrikjoe commented Jul 27, 2018

Hey there @lanpa,

I think I tracked at least one possible root cause down.
I get this exception whenever my events file explodes in size (around 827 MB). At the same time tensorboard itself crashes as well.
For me, the origin of this huge size was parameters which I stored as histograms using numpy and writer.add_histogram(name, param.data.cpu().numpy(), epoch, bins="auto"). This seems to cause problems when the distribution is really sharp around 0 (too strong weight decay).
Changing it to bins="doane" solves it for me.
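In context, the call looks roughly like this (simplified; model, writer, and epoch come from my training loop):

```python
# log a histogram of every parameter tensor once per epoch;
# bins="doane" avoids the huge event files I saw with bins="auto"
for name, param in model.named_parameters():
    writer.add_histogram(name, param.data.cpu().numpy(), epoch, bins="doane")
```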
I hope that helps some people to track down their problems as well :)

Cheers,

Jendrik

@lanpa
Owner

lanpa commented Jul 30, 2018

@jendrikjoe Thanks for the investigation, nice spot. I am curious whether using the default tensorflow binning solves your problem too. If you would like to help test it, please install from master with pip install git+https://github.com/lanpa/tensorboardX. BTW, 827 MB of histograms is pretty large; did you log the histograms very often?
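That would mean changing the call to something like this (assuming the master install above, where the tensorflow-style binning is selected via the bins argument):

```python
# same call as before, but use the default tensorflow-style exponential bins
writer.add_histogram(name, param.data.cpu().numpy(), epoch, bins='tensorflow')
```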

@jendrikjoe

Hey @lanpa,
sorry for the long silence.
The tensorflow binning seems to solve the issue as well.
827 MB is indeed a lot, but it was coming from the binning method. I don't know why, though - maybe one of the methods called by "auto" has an issue with binning lots of numbers close to zero.
Not sure. Using the tensorflow binning, the histograms are around 1 MB :)
Cheers,

Jendrik
