Poisson likelihood with different priors #70
I think the argument corresponding to "+uniform" or "halfuniform" should be `mixcompdist=...` instead of `prior=...`. This is indeed confusing. The argument `prior` actually controls whether we add more weight to the point-mass component at the mode ("nullbiased") or treat all components equally ("uniform"). Sorry for the confusion!
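For reference, a minimal sketch of the corrected call (assuming the `ashr` interface discussed in this thread): "+uniform" is a mixture-component shape, so it goes in `mixcompdist`, while `prior` chooses the component weighting.

```r
library(ashr)

set.seed(100)
l <- rexp(1)
x <- rpois(100, lambda = l)

# "+uniform" describes the shape of the mixture components (nonnegative
# uniforms), so it belongs in mixcompdist; prior = "uniform" vs
# "nullbiased" sets the weighting of the components.
fit <- ash(rep(0, 100), 1,
           lik = lik_pois(x),
           mode = 0,
           mixcompdist = "+uniform",
           prior = "uniform")
```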
Ah, I see. It works now! I think this had also confused me in the past. On top of that, I was looking at the `ash` help file, where the `prior` input is not described. However, `ash.workhorse` describes both `prior` and `mixcompdist`. Thanks so much @mengyin!
@jhsiao999 Would it help to revise the documentation for `ash`?
@pcarbo I think the documentation for `ash` would benefit from that. By the way, what is the difference between `ash` and `ash.workhorse`?
@mengyin I got this error message while running with mode = 0 and mixcompdist = "+uniform". Do I need to tune the parameters accordingly? "estimated mixing distribution has some negative values: consider reducing rtol"
@jhsiao999 I'm not entirely sure of the difference; I suspect there is a historical reason for having two functions, but I agree it is confusing. At the very least, we can combine the two help pages for `ash` and `ash.workhorse`. @stephens999 What do you think, is this a good idea?
@jhsiao999 Could you send me your data so I can check it and debug? Thanks!
@jhsiao999 If the data aren't large you can also attach them to a comment here.
@mengyin Here's the data. The code I used:
`ash` and `ash.workhorse` are the same; the only difference is how many options are documented. I just thought there were too many options for your average user for what is essentially a simple function, so only the simple options are documented in `ash`. To see all the options you have to look at `ash.workhorse`.
Happy to discuss, but I do think it is nice if the function has limited options for a novice user...
Maybe we should introduce `ash_pois`, `ash_binom`, etc. for ash with other (non-normal and non-t) likelihoods?
Introducing `ash_XXX` makes more sense to me. Now we have to put `betahat = 0` and `sebetahat = 1` for non-normal/t likelihoods, which seems really confusing...
@stephens999 I was only suggesting we combine the help pages for `ash` and `ash.workhorse`. But now that you have brought it up, I don't think that having a large number of arguments is an obstacle to using `ash`. So I don't think it is necessary to have separate functions.
I don't think it is necessary, but I can see some benefits in doing so. We could consider adding convenience functions such as `ash_pois`. That being said, I don't think we should abandon having a general interface. I agree with both of you that the default settings sometimes don't make sense, but here I think the solution is to revise the interface, and the argument defaults, rather than to abandon the general interface entirely. Matthew touched on the general solution:
Yes, and the solution is to set the defaults based on the choice of likelihood and/or mixture prior, which I see is something that you are already doing.
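A hypothetical sketch of that idea (not `ashr` code): pick argument defaults from the likelihood, so, e.g., a Poisson likelihood defaults to a nonnegative mixture prior.

```r
# Hypothetical helper: map a likelihood name to a sensible default
# mixture-component shape. Poisson rates are nonnegative, so the
# "+uniform" (nonnegative uniform) components are the natural default.
default_mixcompdist <- function(lik_name) {
  switch(lik_name,
         pois   = "+uniform",
         normal = "uniform",
         "uniform")  # fallback for unrecognized likelihoods
}

default_mixcompdist("pois")  # returns "+uniform"
```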
Another option, if there are a lot of rarely used or specialized arguments, is to have a catch-all argument called "options". This strategy is used a lot in R; for example, many optimization functions take a `control` list.
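A sketch of that catch-all pattern (hypothetical function, not from `ashr`): rarely used settings travel in a single list and are merged with the defaults.

```r
# Hypothetical: collect specialized settings in one "control" list so the
# main signature stays small; modifyList() merges user values over defaults.
fit_with_control <- function(x, control = list()) {
  defaults <- list(rtol = 1e-6, maxiter = 1000L)
  control  <- modifyList(defaults, control)
  # ... the fitting code would read control$rtol, control$maxiter ...
  control
}

fit_with_control(1:10, control = list(rtol = 1e-8))$rtol  # 1e-08
```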
Thanks @pcarbo. I agree with much of what you say. I'm happy to combine the help for `ash` and `ash.workhorse`. I think one benefit of having `ash.pois` is that sensible defaults can be chosen for the Poisson likelihood. @mengyin can you draft a simple function `ash.pois` that calls `ash.workhorse` with a Poisson likelihood and sensible defaults?
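A rough sketch of what such a wrapper might look like; the argument names follow `ash.workhorse` and `lik_pois`, but this is a guess at the eventual design, not the actual implementation.

```r
# Hypothetical wrapper: hide the confusing betahat = 0 / sebetahat = 1
# placeholders and supply Poisson-appropriate defaults.
ash.pois <- function(y, scale = 1, mode = 0, ...) {
  ashr::ash.workhorse(rep(0, length(y)), 1,
                      lik  = ashr::lik_pois(y, scale = scale),
                      mode = mode,
                      mixcompdist = "+uniform",  # rates are nonnegative
                      ...)
}
```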
@stephens999 But do we want `ash.pois` or `ash_pois`? I did some research on this (here, here, here & here) and the consensus is that there is no consensus, and it appears that there is even debate within the "R Core Team". It seems that underscores are safer, but dots are mostly safe, too, and a lot of people don't like underscores (Hadley seems to be the lone advocate). So either one is fine with me.
I chose `.pois` to be consistent with `ash.workhorse` and `smash.pois`. I generally try to use `_`, but here, when you have "variants" of a procedure, the `.` seems natural. Thus `get_lfsr` but `ash.workhorse`.
@jhsiao999 I also got the warning "REBayes::KWDual(A, rep(1, k), normalize(w), control = list(rtol = 0.1)): estimated mixing distribution has some negative values: consider reducing rtol". I checked the data but I'm not sure why the optimization procedure gave negative mixture proportions (sorry!). But in mix_opt.R, lines 45-48 force the estimated mixture proportions to be positive, so I guess @stephens999 found this problem and decided to fix it manually in this way?
@mengyin @jhsiao999 Wei encountered this problem before. It is indeed strange that it occasionally generates negative mixing proportions; it should not do this. Right now we have a temporary solution (set negative proportions to 0), but perhaps in the future we will come up with a better one.
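The temporary fix described above (cf. mix_opt.R) amounts to clipping and renormalizing; a minimal sketch:

```r
# Clip any negative estimated mixture proportions to zero, then
# renormalize so the proportions again sum to one.
fix_pi <- function(pihat) {
  pihat <- pmax(pihat, 0)
  pihat / sum(pihat)
}

fix_pi(c(0.7, -0.01, 0.31))  # nonnegative, sums to 1
```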
@mengyin @jhsiao999 You could try reducing rtol as it suggests (via control = list(rtol = ...)).
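For example, assuming `control` is forwarded from `ash` through `ash.workhorse` to `REBayes::KWDual` (and with `x` the Poisson counts from above):

```r
# Tighten the mixIP optimizer's convergence tolerance via the control list.
fit <- ashr::ash(rep(0, length(x)), 1,
                 lik = ashr::lik_pois(x),
                 mixcompdist = "+uniform",
                 optmethod = "mixIP",
                 control = list(rtol = 1e-8))
```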
@pcarbo Thanks for the explanation! Maybe in this case we can use optmethod = "mixEM" instead? For this example the fitted proportions of "mixIP" (after setting negative proportions to 0) and "mixEM" can be quite different, so I'm a bit worried... @stephens999 I tried reducing rtol (even to 0.1), but it still gives the warning...
@mengyin I'm not sure 0.1 is actually small (or what the default is); it was just an example. I'd try 1e-8. Which method gives the higher log likelihood, and how different are they?
Sorry, I should make rtol smaller... I have now tried rtol = 1e-8 and even smaller values (1e-9, 1e-10, ...), but the results don't change. Anyway, mixIP's log likelihood is still lower than that of mixEM.
The fitted proportions are different: mixEM's fitted g is very concentrated on the last (third) component (pi = 9.999999e-01), but mixIP's fitted g is more spread out.
@mengyin I got errors with these two cases:

Case 1:

```r
set.seed(100)
l = rexp(1)
x = rpois(100, lambda = l)
ashr::ash(rep(0, 100), 1, lik = ashr::lik_pois(x), mode = 0, prior = "+uniform")
```
Case 2:

```r
set.seed(100)
l = rexp(1)
x = rpois(100, lambda = l)
ashr::ash(rep(0, 100), 1, lik = ashr::lik_pois(x), mode = "estimate", prior = "halfuniform")
```
Both returned the error message: "Error in match.arg(prior) : 'arg' should be one of “nullbiased”, “uniform”, “unit”". And when prior = "uniform", the code runs okay with no errors. Thanks so much!