Reorganisation Fixes #120
Conversation
"y_train shape is: (6294, 6, 6)\n",
"x_test shape is: (1058, 6, 48, 9)\n",
"y_test shape is: (1058, 6, 6)\n"
"x_train shape is: (6409, 6, 48, 9)\n",
@pushkalkatara Why did the x_train shape change?
@harsha-simhadri The shapes vary on each run because of the data generation script, e.g.:
Processing data
Extracting features
('subinstanceLen', 48)
('subinstanceStride', 16)
('sourceDir', '/home/pushkalkatara/mr/EdgeML/examples/tf/EMI-RNN/HAR//RAW/')
('outDir', '/home/pushkalkatara/mr/EdgeML/examples/tf/EMI-RNN/HAR//48_16/')
Num train 6339
Num test 2947
Num val 1013
Done
Processing data
Extracting features
('subinstanceLen', 48)
('subinstanceStride', 16)
('sourceDir', '/home/pushkalkatara/mr/EdgeML/examples/tf/EMI-RNN/HAR//RAW/')
('outDir', '/home/pushkalkatara/mr/EdgeML/examples/tf/EMI-RNN/HAR//48_16/')
x_train 6335
Num train 6335
Num test 2947
Num val 1017
Done
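The run-to-run difference (Num train 6339 vs 6335 above) is consistent with an unseeded random train/val split in the data generation script. As a minimal, hypothetical sketch (the function name and fractions are illustrative, not from the EdgeML script), pinning the RNG seed makes the split, and hence x_train's shape, reproducible:

```python
import numpy as np

def split_train_val(x, y, val_fraction=0.15, seed=42):
    """Deterministically split data into train/val sets.

    With a fixed seed, the permutation is identical on every run,
    so the resulting x_train shape never changes between runs.
    """
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return x[train_idx], y[train_idx], x[val_idx], y[val_idx]

# Tiny demonstration: two runs produce the same split.
x = np.arange(100).reshape(50, 2)
y = np.arange(50)
x_tr1, _, _, _ = split_train_val(x, y)
x_tr2, _, _, _ = split_train_val(x, y)
assert np.array_equal(x_tr1, x_tr2)  # same split every run
```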
Technically this should work. However, I have had bad experiences with clone().detach() where gradients did flow through. Can you please check/debug before you commit?
Aditya
…On Mon, Aug 19, 2019, 9:18 AM, Pushkal Katara commented on this pull request:
In edgeml_pytorch/trainer/bonsaiTrainer.py
<#120 (comment)>:
> @@ -126,13 +126,13 @@ def runHardThrsd(self):
__thrsdT).to(self.device)
self.__thrsdW = torch.FloatTensor(
- np.copy(__thrsdW)).to(self.device)
Maybe we can use
self.__thrsdW = torch.FloatTensor(
    __thrsdW.clone().detach()).to(self.device)
I believe this would make a copy of __thrsdW (both the data and its reference in the computational graph), and .detach() would then remove that link and disable differentiation.
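The concern about gradients leaking through clone() can be checked with a small standalone test; a minimal sketch of the behavior being discussed:

```python
import torch

w = torch.ones(3, requires_grad=True)

# clone() alone stays in the autograd graph: gradients flow back to w.
y = (w.clone() ** 2).sum()
y.backward()
assert w.grad is not None           # gradient flowed through clone()

w.grad = None
# clone().detach() copies the data and severs the graph link.
w_copy = w.clone().detach()
assert not w_copy.requires_grad     # no longer tracked by autograd
z = (w_copy ** 2).sum()
# z has no grad_fn, so calling z.backward() here would raise an error.
assert z.grad_fn is None
```

A check like this is exactly the kind of debug step requested above before committing.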
@pushkalkatara Can we add two more num params, one for the biases and one for the scalars: num_biases, num_scalars?
@adityakusupati Where should I add the num params?
edgeml_pytorch/graph/rnn.py
Outdated
@@ -70,16 +70,16 @@ def __init__(self, input_size, hidden_size,
        self._hidden_size = hidden_size
        self._gate_nonlinearity = gate_nonlinearity
        self._update_nonlinearity = update_nonlinearity
        #self._num_weight_matrices = num_weight_matrices
        #self._num_weight_matrices = [1,1]
Is it better to set it to None, as opposed to [1,1]? @adityakusupati
@harsha-simhadri that line has been commented out, but it should be None.
We can also remove it, since self._num_weight_matrices is assigned here: https://github.com/pushkalkatara/EdgeML/blob/8a1189065b09a8c84abebb8e2250a7c3c2bb571d/edgeml_pytorch/graph/rnn.py#L228
@pushkalkatara It should be part of the base RNN class, hence we need to assign None here.
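A hypothetical sketch of the pattern being asked for (class names here are illustrative, not the actual rnn.py classes): the base cell declares the attribute as None, and each concrete cell assigns its real per-cell count in its own __init__:

```python
# Base class owns the attribute with a None default; subclasses override it.
class RNNCellBase:
    def __init__(self, input_size, hidden_size):
        self._input_size = input_size
        self._hidden_size = hidden_size
        self._num_weight_matrices = None  # set by each concrete cell

class FastGRNNCellSketch(RNNCellBase):
    def __init__(self, input_size, hidden_size):
        super().__init__(input_size, hidden_size)
        self._num_weight_matrices = [1, 1]  # [num_W, num_U] for this cell

cell = FastGRNNCellSketch(9, 32)
assert cell._num_weight_matrices == [1, 1]
assert RNNCellBase(9, 32)._num_weight_matrices is None
```

Using None (rather than a hardcoded [1,1]) in the base class makes an unset count fail loudly instead of silently reporting a wrong matrix count.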
Okay, I'll commit the change.
@pushkalkatara I made some suggestions in a new review; please do those changes and I will do a comprehensive review today.
@pushkalkatara
@harsha-simhadri @adityakusupati I am not able to test SRNN, as the script fails. Most probably not enough memory is available on my system to generate the maxlen-sized zeros numpy array.
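One possible workaround for the memory failure (a sketch only; sizes and the file name below are hypothetical, not from the actual script): back the big padded array with a file via numpy's memmap support instead of allocating it in RAM.

```python
import os
import tempfile
import numpy as np

# Illustrative sizes only; the real maxlen comes from the dataset.
maxlen, num_instances, num_feats = 10_000, 100, 9

# open_memmap creates a .npy file on disk and maps it into memory,
# so the zeros array never has to fit in RAM all at once.
path = os.path.join(tempfile.mkdtemp(), "padded.npy")
padded = np.lib.format.open_memmap(
    path, mode="w+", dtype=np.float32,
    shape=(num_instances, maxlen, num_feats))

padded[0, :5, :] = 1.0   # writes go through to the backing file
padded.flush()
```

Using float32 instead of the default float64 also halves the footprint if the array must stay in RAM.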
edgeml_pytorch/graph/rnn.py
Outdated
if uRank is not None:
    self._num_U_matrices += 1
    self._num_weight_matrices[1] = self._num_U_matrices
if uRank and wRank:
    self._num_biases += 1
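The matrix-counting part of this diff can be sketched in isolation (a simplified stand-in, assuming the usual low-rank convention in these cells where setting wRank/uRank factors W ≈ W2 @ W1 into two matrices):

```python
# Each rank constraint replaces one full matrix with two factors,
# so a set wRank/uRank bumps the corresponding count by one.
def count_weight_matrices(wRank=None, uRank=None):
    num_W, num_U = 1, 1
    if wRank is not None:
        num_W += 1
    if uRank is not None:
        num_U += 1
    return [num_W, num_U]

assert count_weight_matrices() == [1, 1]           # full-rank W and U
assert count_weight_matrices(wRank=16) == [2, 1]   # W factored
assert count_weight_matrices(wRank=16, uRank=16) == [2, 2]
```

Note the bias count should not be derived from the ranks at all, which is the point of the review comment below.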
@pushkalkatara num_biases is independent of uRank and wRank; it depends instead on the bias parameters, which look like self.bias_*. Please update this accordingly. FastGRNN has 2 bias terms, FastRNN has 1, UGRNN has 2, GRU has 3, LSTM has 4, and a simple RNN has 1.
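The per-cell counts named in the comment can be written down directly (a sketch of a lookup table, not code from the repo):

```python
# Bias-term counts per cell type, exactly as listed in the review comment.
NUM_BIASES = {
    "RNN": 1,
    "FastRNN": 1,
    "FastGRNN": 2,
    "UGRNN": 2,
    "GRU": 3,
    "LSTM": 4,
}

assert NUM_BIASES["FastGRNN"] == 2
assert NUM_BIASES["LSTM"] == 4
```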
Also please check the getVars() function to see what we are trying to do with num_biases. This counting list is an easy way to access getVars().
Yes, I just noticed it. I'll make the changes accordingly.
Can you please fix the rest of the cells as well. Thanks.
All others have the correct num_biases in the constructor.
Let's hold off on porting EMI-RNN to PyTorch just yet. @metastableB Any clue what might be going wrong here?
@harsha-simhadri @pushkalkatara Yes, let's hold off on porting EMI-RNN for now. That will require a lot of care. I'll fix the
While testing fastcell_example.py I get an unexpected keyword error: gate_non_linearity.
@SachinG007 The fix is in commit 08a3826 in this PR.
@harsha-simhadri This PR looks good to me in the context of Bonsai and FastCells. Please approve. @pushkalkatara thanks for your contributions.
@metastableB Should we wait for your fix or go ahead with the PR?
process_google.py needs to be fixed to work with small memory, but let's do another PR for that. Let's just go ahead now.
Thanks for the merge.
Checked all examples and fixed a few issues.