Conversation

mattdangerw (Member)

We check compilation elsewhere, adding this option significantly speeds up saved model testing. Roughly 3x for the tf format.

@mattdangerw (Member, Author)

/gcbrun

@mattdangerw force-pushed the no-trace-saving branch 2 times, most recently from bbafd36 to 7f754c9 on March 30, 2023 at 22:00
@mattdangerw (Member, Author)

/gcbrun

@chenmoneygithub (Contributor) left a comment

Thanks Matt! Should we still keep the large annotation for the saving tests?

  # Check that output matches.
  restored_output = restored_model.predict(self.raw_batch)
- self.assertAllClose(model_output, restored_output)
+ self.assertAllClose(model_output, restored_output, atol=0.01, rtol=0.01)
Contributor


Curious - why would it fluctuate? Without more context, 0.01 is not a common scale for atol.

Member Author

I'm not really sure; also, this is totally unrelated, so I can split it into a separate PR. I was seeing enough fluctuation on my NVIDIA GPU that these tests would sometimes fail. This is true on master too.

It looks like it is just a precision issue; with a lax enough tolerance, the floats compare as equal.
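For reference, `assertAllClose` applies the same elementwise rule as NumPy's `np.allclose`: a comparison passes when `|actual - expected| <= atol + rtol * |expected|`. A minimal sketch with made-up values (not taken from the actual test) showing how a small GPU-induced float drift fails the default tolerances but passes the loosened ones:

```python
import numpy as np

# np.allclose(actual, expected) checks, elementwise:
#   |actual - expected| <= atol + rtol * |expected|
expected = np.array([1.000, 100.000])  # hypothetical reference model output
actual = np.array([1.005, 100.500])    # restored output with small float drift

# Default tolerances (rtol=1e-5, atol=1e-8) reject the drift:
print(np.allclose(actual, expected))                        # False

# The loosened tolerances from the test change accept it:
print(np.allclose(actual, expected, atol=0.01, rtol=0.01))  # True
```

Note that `atol` dominates for values near zero while `rtol` dominates for large values, which is why loosening both is needed when outputs span different magnitudes.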

@mattdangerw (Member, Author)

/gcbrun

@mattdangerw merged commit ba8ddc5 into keras-team:master on Apr 1, 2023
@mattdangerw mentioned this pull request on Apr 4, 2023