Multi io #30
Conversation
Hi,
Thank you.
in
We constrain it a little bit: no dictionaries. But it is getting a bit too verbose, so I'll remove it. Should I remove it in
I didn't see an elegant way the first time around to make sure everything everywhere has the same batch size, but I'll look into it again.
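One way to make sure everything everywhere has the same batch size is to walk the nested input/output structure once and compare the first dimension of every leaf. This is only a sketch of that idea (the helper name and numpy-based structure are assumptions, not the library's actual code):

```python
import numpy as np

def check_same_length(data):
    """Recursively collect the first-dimension length of every array in a
    (possibly nested) tuple/list structure and verify they all match.
    Hypothetical helper, not the pull request's actual implementation."""
    lengths = set()

    def visit(obj):
        if isinstance(obj, (list, tuple)):
            for item in obj:
                visit(item)
        else:
            lengths.add(len(obj))

    visit(data)
    if len(lengths) > 1:
        raise ValueError(f"Inconsistent batch sizes found: {sorted(lengths)}")
    return lengths.pop()
```

Such a check could run once at the top of fit/evaluate/predict, which would keep the verbosity out of the per-batch code paths.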
Sure thing.
Yeah, I meant
We constrain the types in fit, evaluate and predict but we do not in the *_on_batch and *_generator methods as far as I can tell.
The change you made is fine.
Do we directly constrain it? I may be having a major brain fart, but I don't see where it's done. Those types are assumed, but not directly enforced, are they?
In the fit, evaluate and predict methods, your
Ah, gotcha! OK, yeah, that makes sense.
Sure thing. And damn Vim plugin, a bit too eager with the file header.
Pull request merged! It might be a good idea to try to make a pull request to PyTorch with the extended version of the TensorDataset class. Thanks a lot for your collaboration.
The core modification was to create a new TensorDataset that recursively fetches the value at the proper index, and a function that aggregates batches of output into one big output.
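The two pieces described above can be sketched roughly as follows. The class and function names here are illustrative (the actual code in the pull request may differ), and plain numpy arrays stand in for PyTorch tensors so the sketch stays self-contained:

```python
import numpy as np

class MultiTensorDataset:
    """Sketch of a dataset whose samples may be nested tuples of arrays;
    __getitem__ recursively indexes every leaf so arbitrarily nested
    input/output structures are supported."""

    def __init__(self, *tensors):
        self.tensors = tensors

    def _index(self, obj, i):
        # Recurse into tuples/lists; index array leaves directly.
        if isinstance(obj, (list, tuple)):
            return tuple(self._index(o, i) for o in obj)
        return obj[i]

    def __getitem__(self, i):
        return self._index(self.tensors, i)

    def __len__(self):
        # Descend to the first array leaf and use its length.
        first = self.tensors[0]
        while isinstance(first, (list, tuple)):
            first = first[0]
        return len(first)

def concat_batches(batches):
    """Aggregate a list of per-batch outputs into one big output,
    preserving the nested structure of each batch."""
    first = batches[0]
    if isinstance(first, (list, tuple)):
        return tuple(concat_batches([b[j] for b in batches])
                     for j in range(len(first)))
    return np.concatenate(batches)
```

The recursion mirrors itself on both sides: indexing splits a nested structure into per-sample pieces, and `concat_batches` reassembles per-batch pieces along the same structure.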
I disabled one test, test_disable_batch_size_warning, as I am not sure whether it's still required. I modified _get_batch_size to account for this new input/output structure, but I am not sure what the original intention was.
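A batch-size helper that tolerates the nested structure could look like the sketch below. This is a guess at the intent, not the pull request's actual _get_batch_size; the fallback value and numpy-based leaves are assumptions:

```python
import numpy as np

def get_batch_size(x, y):
    """Hypothetical _get_batch_size-style helper: walk the nested
    input/output structure down to the first array leaf and use its
    first dimension as the batch size."""

    def first_leaf(obj):
        if isinstance(obj, (list, tuple)):
            return first_leaf(obj[0])
        return obj

    for candidate in (x, y):
        leaf = first_leaf(candidate)
        if hasattr(leaf, "shape"):  # array-like leaf found
            return leaf.shape[0]
    return 1  # assumed fallback when no array-like leaf is present
```

Because every leaf is guaranteed (or checked elsewhere) to share the same first dimension, inspecting only the first leaf is sufficient.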