
avoid setting random seed at module level #41

Open
de-code opened this issue Jul 5, 2019 · 1 comment

de-code (Contributor) commented Jul 5, 2019

It is generally preferable for module-level code not to have side effects, i.e. just importing a module shouldn't change anything (there may be a few exceptions). It would be better if the seed were set by the main method, for example.

e.g.

>>> import numpy as np
>>> np.random.seed(123)
>>> np.random.get_state()[1][0]
123
>>> import delft.sequenceLabelling.data_generator
Using TensorFlow backend.
>>> np.random.get_state()[1][0]
7

At the end, I would expect the seed state to still be 123.

(It's not a big issue, as there is a simple workaround.)
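
For concreteness, here is a minimal sketch of what "set the seed in the main method" could look like; train() and main() are illustrative names, not delft's actual API, and the value 7 only mirrors the state observed in the session above:

import numpy as np

def train(x):
    # training code that relies on numpy randomness
    return x + np.random.rand()

def main(seed=7):
    # seed once at the entry point instead of at import time,
    # so merely importing the module leaves np.random untouched
    np.random.seed(seed)
    print(train(1.0))

if __name__ == "__main__":
    main()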

lfoppiano self-assigned this Dec 5, 2019
lfoppiano (Collaborator) commented
I wonder if we should fix this. The whole issue is that numpy's seed is global, static state: whenever you set it, you modify the internal random state. If you, for example, instantiate the data_generator, the seed will be modified.
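
For context, a sketch of how a per-instance generator would sidestep the global state entirely; this DataGenerator class is illustrative, not delft's actual data_generator:

import numpy as np

class DataGenerator:
    # each instance keeps its own RandomState, so constructing or using it
    # never touches the global np.random state
    def __init__(self, seed=7):
        self.rng = np.random.RandomState(seed)

    def shuffle_indices(self, n):
        return self.rng.permutation(n)

state_before = np.random.get_state()[1][0]
gen = DataGenerator(seed=7)
gen.shuffle_indices(10)
assert np.random.get_state()[1][0] == state_before  # global state unchanged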
