Looping over tensor / using symbolic variable. #23
Right, this functionality currently isn't implemented, unfortunately. I think the right solution to this problem is to have some mechanism for branching/looping that gets built into the "execution graph" -- the final data structure used to perform the computation. The other, temporary solution, which would work, is to make careful use of "masks", which zero out some components of the data or recurrent state and allow you to train on variable-length inputs in batch form.

Edit: using switch would work (though there's overhead); you'd just have to clip the index so you don't get an out-of-bounds error. The following hacky solution almost works; one just needs to implement:

```python
# Guard the step with a symbolic condition, and clip the index so it stays in bounds.
output = (t < X.shape[0]) * make_step(X[cgt.minimum(t, X.shape[0] - 1)], output) + (t >= X.shape[0]) * output
```
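For the mask-based approach, here is a minimal sketch in plain numpy of how a mask lets a padded batch carry variable-length sequences (the recurrence `make_step`, the shapes, and the mask layout are illustrative assumptions, not CGT API):

```python
import numpy as np

# Batch of 3 sequences, padded to length 5; true lengths are 5, 3, and 2.
lengths = np.array([5, 3, 2])
T, B, D = 5, 3, 4                       # max time steps, batch size, feature dim
X = np.random.randn(T, B, D)            # padded input batch
# mask[t, b] = 1.0 while sequence b is still active at time t, else 0.0
mask = (np.arange(T)[:, None] < lengths[None, :]).astype(X.dtype)

def make_step(x_t, h):
    # Stand-in for one step of the recurrence (illustrative only).
    return np.tanh(x_t + h)

h = np.zeros((B, D))
for t in range(T):
    h_new = make_step(X[t], h)
    m = mask[t][:, None]                # broadcast the mask over the feature dim
    h = m * h_new + (1.0 - m) * h       # state is frozen once a sequence ends
```

Because the mask multiplies the state update, every sequence in the batch runs the same fixed-length computation, and finished sequences simply stop changing.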
Thanks. Is it possible to implement a lazy switch, so that only the taken branch gets evaluated?
Not yet, because of how the graph execution works. I'm going to open up an issue for further discussion on this topic -- @hojonathanho and I have discussed it a bit.
Hello, if I'm understanding the code correctly, there is currently no way of doing loops with a symbolic number of iterations (or of looping over the leading dimension of a tensor, as with Theano's ScanOp).
Are there plans to add such functionality to CGT? If not, what would be the recommended way of processing variable-length sequences? Is there a Switch operator, so that one could write something like the following?
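A minimal sketch of the kind of code the question seems to have in mind, assuming a Theano-style `cgt.switch(cond, a, b)` and unrolling to a fixed maximum length (`max_len`, `seq_len`, `step`, `h0`, and `X` are hypothetical names):

```python
# Unroll to the dataset's maximum length; switch() freezes the state
# once t passes the (symbolic) true length seq_len of the sequence.
h = h0
for t in range(max_len):
    h = cgt.switch(t < seq_len, step(X[t], h), h)
```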
But wouldn't it create a huge overhead if the difference between shortest and longest sequences in the dataset is large?
I hope it makes sense to post this question here instead of on the mailing list.