seed #6162
Conversation
@ryparmar Thanks for the PR! Just a few things: Regarding the implementation of manual_seed, the original torch method returns a generator, whereas ivy.seed returns None. Please make sure that it returns a generator. Regarding the testing of manual_seed, make sure that you are testing the entire range of inputs for the original torch manual_seed. Let me know if that makes sense, if you have any more questions, or if I have misunderstood something. Thanks!
Hi @CerberusLatrans, Thanks for the CR! :) ad 3: I keep getting … Do you have any idea why the numpy seed is called instead of the torch one? EDIT: oh, I did not realize I should run pytest with a specified backend (… EDIT2: Ok, sorry for the confusion. It was my bad. I should run that with …
Hi @CerberusLatrans, I have checked the failing tests, and those were already failing before. Not sure whether I am supposed to create an issue for that (I haven't found an existing one). Please let me know if anything else is needed. :)
Hi @ryparmar, Sorry if my wording was confusing-- I meant to make sure that just the torch frontend manual_seed returns a generator. The functional ivy and backends (including the torch backend) should still return None (this might require undoing some changes). From my understanding, torch.random.manual_seed both sets the seed (which is the functionality that calling whichever backend will take care of) and returns a generator. Therefore, a solution may be for the torch frontend to first call ivy.seed, and then return a generator without relying on the backends. However, it might be more optimal to support superset behavior of torch's manual_seed (in which case all backends would return generators), but don't worry about this for now. I will ask some other engineers about this. Regarding the test and the numpy backend, that makes sense-- the test is fine (though I don't know how it will actually test the function if the backends return None, haha). Let me know if you have any more questions. Thanks!
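The split described above can be sketched in plain Python. This is not the actual Ivy code: `backend_seed`, `frontend_manual_seed`, and the toy `Generator` class are stand-in names for illustration, assuming the backend seeding call returns None while the frontend wraps it and returns a generator.

```python
import random


class Generator:
    """Minimal stand-in for torch.Generator (illustration only)."""

    def __init__(self):
        self.initial_seed = None

    def manual_seed(self, seed):
        # Seed the generator and return it, mirroring torch's pattern.
        self.initial_seed = seed
        return self


def backend_seed(seed):
    """Stand-in for ivy.seed: sets a global seed and returns None."""
    random.seed(seed)
    return None


def frontend_manual_seed(seed):
    """Torch-frontend sketch: delegate seeding, then return a generator."""
    backend_seed(seed)                    # backend call still returns None
    return Generator().manual_seed(seed)  # generator carries the seed


g = frontend_manual_seed(42)
```

Under this sketch, only the frontend function produces a generator, so the backends stay uniform.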
Oh, I see now! Thanks for the clarification! :) Agreed, definitely a relevant question to ask. Haha, yeah, it does not really test the functionality in terms of outputs, though it at least tests whether the function is callable. Tbh, I don't know what Ivy's approach is to testing these functions (which return None) - maybe that would be a good question to ask too. EDIT: I just asked on the Discord forum whether this kind of function needs to be tested. Plus, after undoing my changes, I observed some errors when running the frontend test that seem to be related to some confusion between typing.Generator and torch.Generator. So I am not pushing the changes yet, since it might be best to simply not test this function at all.
I would say to just hold off on making any changes to the test right now, so you can keep them even if they are failing. I have been told that ivy is now using a "functional approach where all the random module functions take in a seed argument" instead of setting global seeds (since jax does not rely on global seeds). Regarding typing.Generator vs torch.Generator, it might be better to go with typing.Generator (which I assume is what you did, since the test is failing) so that it is agnostic to the chosen backend framework. Again-- don't worry about the failing test for this one. When it is more clear, someone can always go back and reimplement or remove the tests.
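The "functional" seeding style mentioned above (jax-like, no global seed) can be illustrated with a minimal sketch. The function name `random_uniform` and its signature are assumptions for illustration, not Ivy's actual API: each call takes an explicit seed and builds a local RNG, so no global state is mutated.

```python
import random


def random_uniform(low=0.0, high=1.0, *, seed=None):
    # A fresh, local RNG per call; the global seed is never touched.
    rng = random.Random(seed)
    return rng.uniform(low, high)


a = random_uniform(seed=0)
b = random_uniform(seed=0)
# Same explicit seed -> same value, without any global manual_seed call.
```

This is the style jax enforces with its explicit PRNG keys, which is why relying on a global manual_seed does not map cleanly onto all backends.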
Thanks for the info! The test didn't pass with either the torch or the typing generator, so I left the torch one in. EDIT: Not sure about the lint / check-formatting test, though. It failed on a file that was not even changed, so it is probably caused by some change in the GitHub Action. (it should already be fixed, see here)
Don't worry about the lint since that's from someone else's commit. Just one last thing-- from looking at the source code (https://pytorch.org/docs/stable/_modules/torch/random.html#manual_seed) for torch.manual_seed, they return a generator with the generator.manual_seed(seed) method called on it. We may need this so that the returned generator actually uses the seed passed into our function (https://pytorch.org/docs/stable/generated/torch.Generator.html). Let me know what you think. Thanks.
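The `return generator.manual_seed(seed)` pattern from torch's source can be mimicked with a toy class, assuming only the behavior described above (manual_seed both seeds the generator and returns it). The class below is an illustration, not torch's implementation.

```python
import random


class Generator:
    """Toy generator mirroring the manual_seed pattern in torch's source."""

    def __init__(self):
        self._rng = random.Random()

    def manual_seed(self, seed):
        self._rng.seed(seed)
        return self  # return the generator itself, as torch does

    def random(self):
        return self._rng.random()


# Because manual_seed returns the seeded generator, the caller receives an
# object whose state actually reflects the seed that was passed in.
g1 = Generator().manual_seed(123)
g2 = Generator().manual_seed(123)
```

If manual_seed returned a fresh, unseeded generator instead, the two generators above would not agree, which is exactly the bug the comment warns about.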
Great point, thanks for pointing that out! :) |
Great! I will merge it. Thanks for the contribution @ryparmar. There will probably be some changes down the line regarding testing, but for now this will suffice. |
Close #6118