
Have mvn.rnd support *, 1xD, * calls as if they were *, D, * #16

Merged: 6 commits, Nov 27, 2013

Conversation

@jucor (Contributor) commented Nov 22, 2013

As I was adding tests where mu is 1xD (instead of NxD or D), torch suddenly segfaulted on me -- in both qth and th.
@d11, can you please check whether you see the same problem with your install?
I'm at a loss to understand what's going on.

(edit: fixed, see below)

Calling with 1xD is supposed to make the test fail, but it also segfaults.
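For reference, the behaviour the PR title asks for amounts to squeezing a singleton leading dimension from mu. A minimal Python/numpy sketch of that assumed semantics (canonicalize_mu is a hypothetical name for illustration, not part of mvn.rnd):

```python
import numpy as np

def canonicalize_mu(mu):
    # Sketch of the semantics in the PR title (assumed, not the real
    # mvn.rnd code): a 1xD mean is accepted as if it were a length-D vector.
    mu = np.asarray(mu)
    if mu.ndim == 2 and mu.shape[0] == 1:
        return mu.reshape(-1)  # 1xD -> D
    return mu

print(canonicalize_mu(np.zeros((1, 4))).shape)  # -> (4,)
print(canonicalize_mu(np.zeros(4)).shape)       # -> (4,)
print(canonicalize_mu(np.zeros((3, 4))).shape)  # NxD is left alone: (3, 4)
```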
@jucor (Contributor, Author) commented Nov 22, 2013

OK, I've avoided the segfault by passing v1:clone() to the tests. What triggered it is that the result tensor got resized from NxD to 1xD over the course of the systematic tests. While that resize was indeed a bug, it should not make torch segfault, but it does. At least now that it's fixed, we see the tests failing as they should.
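To illustrate the aliasing issue in Python rather than Lua: if a sampler resizes its result argument in place, every later test sharing that tensor sees the new shape, whereas passing a clone keeps the shared fixture intact (rnd_into below is a hypothetical stand-in, not the real mvn.rnd):

```python
import numpy as np

# Hypothetical stand-in for a sampler that takes a result tensor and
# resizes it in place to match mu (NOT the real mvn.rnd code).
def rnd_into(result, mu):
    result.resize(mu.shape, refcheck=False)
    result[...] = mu

# A fixture tensor shared across systematic tests:
v1 = np.zeros((3, 2))            # NxD result tensor

rnd_into(v1, np.zeros((1, 2)))   # a 1xD call silently shrinks v1...
print(v1.shape)                  # -> (1, 2): later NxD tests see the wrong shape

v1 = np.zeros((3, 2))
rnd_into(v1.copy(), np.zeros((1, 2)))  # passing a clone keeps the fixture intact
print(v1.shape)                  # -> (3, 2)
```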

Hi @d11, I am not sure here: can you explain why these 5 tests were not expected to error? I am almost sure that they should throw an error, since E != D and M != D. In the upcoming new parsing of the dimensions, they do error.

We might have a mismatch in our two expectations of the function's behaviour :)
Can we check that when I come back on Tuesday, please?
The generated systematic tests pass :)
I am not sure whether I should review the case without a result tensor: I haven't touched it.
@jucor (Contributor, Author) commented Nov 22, 2013

Hi @d11
This is a big refactor of the argument parsing in mvn.rnd. Could you please take a look and tell me what you think of it? Especially commit ad510ac, where there are 5 systematic tests whose former behaviour I do not understand: am I misunderstanding something when I think they should error?

@d11 (Contributor) commented Nov 25, 2013

Hmm, yes, I was rather undecided on the behaviour in these cases. It comes from the logic that when it is possible to infer the desired output size from the parameters, we should always resize the given output tensor accordingly. It depends on whether we want to treat the output tensor as just a block of memory to be written to, or whether we want to enforce that the user has given it a sensible size.

Since you found the current behaviour unexpected, I think throwing an error in these cases is reasonable: the potential for catching confusing bugs probably outweighs the slight inconvenience of having to resize the output tensor explicitly.
Cheers
Dan
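The two policies Dan describes can be sketched in Python (parse_result and strict are hypothetical names for illustration; the real parsing lives in mvn.rnd's Lua code):

```python
import numpy as np

def parse_result(result, n, d, strict=True):
    # Sketch of the two policies discussed above (hypothetical names,
    # not the real mvn.rnd parsing code).
    if strict:
        # Enforce that the caller passed a sensibly-sized tensor.
        if result.shape != (n, d):
            raise ValueError(
                "result has shape %s, expected %s" % (result.shape, (n, d)))
    else:
        # Treat result as a block of memory and resize it to NxD.
        result.resize((n, d), refcheck=False)
    return result

out = np.zeros((4, 3))
parse_result(out, 4, 3)               # OK: shapes agree
try:
    parse_result(np.zeros((5, 3)), 4, 3)
except ValueError as e:
    print(e)                          # strict policy rejects the mismatch
```

The strict policy was the one adopted in the PR: erroring on a size mismatch catches confusing bugs at the cost of an occasional explicit resize by the caller.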


@d11 (Contributor) commented Nov 25, 2013

Hmm, I'm trying your unit test without the fix, but I haven't actually been able to reproduce the segfault so far. I'd be interested to try this when you're back! Perhaps it's torch-version-dependent. The changes all seem good to me, anyhow. :)

@jucor (Contributor, Author) commented Nov 26, 2013

Interesting! I'm back today, so we can compare against the torch commit I was using when I hit the segfault. Or we can just move on :)

jucor pushed a commit that referenced this pull request on Nov 27, 2013: "Have mvn.rnd support *, 1xD, * calls as if they were *, D, *"
@jucor merged commit 44ca47a into master on Nov 27, 2013, and deleted the fixGaussianCall branch (November 27, 2013 14:53).