fixed two-sample permutation testing (#323) #347
Conversation
Thanks - can you think of a test? @jwirsich - can you somehow give us the results of your test against the perm R package, so we can make sure the results continue to be correct?
The test is working (and Travis passed; I think there is a clash between different Travis runs?).
I can give you the R code I used for testing, starting from a MATLAB file containing two n x m arrays, x and y, though I did not test the tmax correction.

```r
library('perm')
library(R.matlab)

data <- readMat(path)
x1 <- as.matrix(data$x)
y1 <- as.matrix(data$y)

# run a right-tailed two-sample permutation test on each column
for (i in 1:length(x1[1, ])) {
  DV <- c(x1[, i], y1[, i])
  IV <- factor(rep(c("A", "B"), c(length(x1[, i]), length(y1[, i]))))
  valuer <- permTS(DV ~ IV, alternative = "greater", method = "exact.mc",
                   control = permControl(nmc = 10^3 - 1))$estimate
  # report the columns that come out below 0.05
  if (valuer < 0.05) {
    print(paste(i, ' ', valuer, sep = ''))
  }
}
```
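For comparison on the Python side, here is a minimal sketch of the same per-column check as a Monte Carlo two-sample permutation test. The function name, the difference-in-means statistic, and the add-one p-value correction are illustrative assumptions, not nipy's actual API:

```python
# Minimal sketch (not nipy's API): per-column one-sided permutation test,
# mirroring the R loop above. x and y are the (n_x, m) and (n_y, m) arrays.
import numpy as np

def perm_pvalues_greater(x, y, n_perm=999, seed=0):
    """Monte Carlo permutation p-values, one per column, for the
    right-tailed difference-in-means statistic (cf. alternative="greater")."""
    rng = np.random.default_rng(seed)
    n_x = x.shape[0]
    pooled = np.vstack([x, y])
    observed = x.mean(axis=0) - y.mean(axis=0)
    count = np.zeros(x.shape[1])
    for _ in range(n_perm):
        order = rng.permutation(pooled.shape[0])  # shuffle group labels
        stat = (pooled[order[:n_x]].mean(axis=0)
                - pooled[order[n_x:]].mean(axis=0))
        count += stat >= observed
    # add-one smoothing so a Monte Carlo p-value is never exactly zero
    return (count + 1.0) / (n_perm + 1.0)
```

Printing the columns where these p-values fall below 0.05 should then be directly comparable to the R loop's output.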
@jwirsich - thank you - that's very helpful. Sorry to punish your kindness like this, but would you consider sending the x, y arrays you used, say saved as a MATLAB .mat file?
OK - sorry about this - I am afraid I don't understand what I am doing. But here are my scripts:

I run these in IPython:

This is comparing the R estimate with the two-sample test T values. Is that the right comparison? Should these be more similar?
Sorry for the late answer. The test case looks fine, but the R code uses 1000 permutations while the Python code seems to use only 100 draws: you might want to change that. Also, the R code uses a right-sided t-test (alternative="greater"); you can change the parameter to alternative="two.sided" to get the two-sided estimates.
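To make the two alternatives concrete, here is a hedged sketch of how one-sided and two-sided p-values come out of the same permutation null; the function and its arguments are illustrative, not part of either package:

```python
# Illustrative only: p-values for the alternatives discussed above, given a
# scalar observed statistic `obs` and an array `null` of permuted statistics.
import numpy as np

def perm_p(obs, null, alternative="two.sided"):
    null = np.asarray(null)
    if alternative == "greater":              # right-tailed, as in the R code
        exceed = null >= obs
    elif alternative == "less":               # left-tailed
        exceed = null <= obs
    else:                                     # two-sided: compare magnitudes
        exceed = np.abs(null) >= np.abs(obs)
    # add-one correction, as before
    return (exceed.sum() + 1.0) / (null.size + 1.0)
```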
Thanks for the feedback. Actually, the number of draws doesn't change the output values, and the same seems to be true for the alternative setting. Bertrand - do you have any comment on whether this test is valid?
As far as I can see, this looks right. At least, the relative error is small.
Bertrand - is it possible these tests I've done aren't actually testing the permutation?
Indeed, you need to invoke the …
Bertrand - are you saying that you aren't confident that the results are correct in general?
Not exactly: I have added some tests in this PR that confirm that, roughly speaking, the permutation p-values are correct. So I don't see why they should be wrong. I was simply saying that (i) your test does not check the permutation p-values, and (ii) the … In concrete terms, I don't think that the comparison of p-values with those obtained with R is necessary.
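In the spirit of the tests described above, one rough way to confirm that permutation p-values are correct is to check that they are approximately uniform under the null. A sketch, reusing the hypothetical perm_pvalues_greater helper from earlier:

```python
# Calibration smoke test (illustrative): when x and y are drawn from the same
# distribution, the p-values should be roughly uniform on [0, 1].
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal((20, 50))
y = rng.standard_normal((25, 50))          # same distribution: null is true
p = perm_pvalues_greater(x, y, n_perm=999)
# About 5% of the 50 columns should fall below 0.05, up to Monte Carlo noise.
print((p < 0.05).mean())
```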
I completely agree that my current tests are not testing the right thing. On the other hand, it would be a shame if we released this version of nipy with code that we thought might give the wrong answer. So it seems it would be worth checking that it is at least roughly right. Comparing against the R package seems like one way of doing that, and doesn't seem like it would be too much work for someone who understood what the code was doing.
OK - @jwirsich - unless you'd like to add a test here (that would be great), I'll merge this soon.
I haven't checked the tmax correction, but I am not sure it's worth putting more work into this, so no problem with merging soon.
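Since the tmax correction is the one untested piece, here is a hedged sketch of the standard max-statistic correction (in the style of Nichols & Holmes) under the same illustrative setup; it is not nipy's implementation:

```python
# Illustrative max-statistic (tmax-style) correction: each observed statistic
# is compared to the permutation distribution of the *maximum* statistic
# across columns, which controls the family-wise error rate.
import numpy as np

def perm_pvalues_max_stat(x, y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    n_x = x.shape[0]
    pooled = np.vstack([x, y])
    observed = x.mean(axis=0) - y.mean(axis=0)
    count = np.zeros(x.shape[1])
    for _ in range(n_perm):
        order = rng.permutation(pooled.shape[0])
        stat = (pooled[order[:n_x]].mean(axis=0)
                - pooled[order[n_x:]].mean(axis=0))
        count += stat.max() >= observed     # max over columns, per permutation
    return (count + 1.0) / (n_perm + 1.0)   # corrected p-value per column
```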
Thanks for the feedback - rebased and merged as 556ec73 |
This is supposed to solve issue #323.
As rightly pointed out, the code was not doing what it was supposed to do.
Note again that I don't want to put any more effort into it. It is very low quality anyhow.