
Several changes (new LRT macro + small changes on AppLC) #89

Merged: 15 commits into gammapy:master on Jun 2, 2016

Conversation

mireianievas
Contributor

Add a macro that computes the 'likelihood' for a given model

  • Based on the testmodel macro, but with different goals and a bit more flexibility regarding which parameters to use.
  • The aim was to be able to compare likelihoods of models given by external software and compute their LRT; a sketch of that comparison follows this list.
  • Allow specifying the AppLC radius (more flexibility may come in a later commit).
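
For context, a minimal sketch of how two such maximum log-likelihoods could be compared once they are available, assuming the models are nested (Wilks' theorem); the function below is purely illustrative and not part of enrico:

    from scipy.stats import chi2

    def likelihood_ratio_test(logl_null, logl_alt, extra_dof):
        """Return the LRT statistic and its chi2 p-value for two nested fits.

        logl_null : maximum log-likelihood of the simpler model
        logl_alt  : maximum log-likelihood of the model with extra free parameters
        extra_dof : number of additional free parameters in the alternative model
        """
        ts = 2.0 * (logl_alt - logl_null)   # Wilks' test statistic
        return ts, chi2.sf(ts, extra_dof)   # chi2 survival function

    # hypothetical values, just to show the call
    ts, pvalue = likelihood_ratio_test(-102300.0, -102290.0, extra_dof=1)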

@davidsanchez
Member

Hi,
thanks for the work. This seems very interesting; I am trying it.

Could you please write a bit of documentation for the lrt macro, with an example? That would help me understand it.

thanks

@davidsanchez
Member

Hi again,

another point: I tested your new enrico_testmodel. I ran PL and LogParabola analyses and got different log-likelihood values with your modifications.

Could you check that, please?

thanks

Add some comments to the code, include examples in the documentation.
Replace very long variable names in testModel with shortcuts to improve readability.
@mireianievas
Contributor Author

mireianievas commented Apr 19, 2016

Hi, I have included some documentation and examples, and cleaned up the code a little (these were very old commits and I honestly forgot about them when I submitted the pull request ;) ).

Using the macro is dead simple: just run enrico_lrt EnricoConfigFile FileWithListOfModels.

FileWithListOfModels should contain a list of models to test:
LogParabola, None, 1.571613825, 0.06406550853
LogParabola, None, 1.871613825, 6.550853e-05
LogParabola, None, 0.771613825, 0.1406550853

You can leave some of the parameters free by writing None, as shown in this example for the normalization factor.

The macro fetches the models from FileWithListOfModels, performs the fit (it creates some pickle files so that these fits can be reused in a later execution, which speeds up the process significantly), computes the likelihood and saves it to an output file (TestModel/TestModel.results). That file should contain something like

LogParabola, None, 1.571613825, 0.06406550853, -102184.111142
LogParabola, None, 1.871613825, 6.550853e-05, -102297.840182
LogParabola, None, 0.771613825, 0.1406550853, -101757.551261
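
Reading those results back is straightforward; here is a minimal sketch, only for illustration (the parsing helper is not part of the macro), that picks the model with the best likelihood from TestModel/TestModel.results:

    def read_results(path="TestModel/TestModel.results"):
        """Parse lines like 'LogParabola, None, 1.5716..., 0.0640..., -102184.111142'."""
        models = []
        with open(path) as f:
            for line in f:
                fields = [field.strip() for field in line.split(",")]
                if len(fields) < 2 or not fields[0]:
                    continue
                # the last field is the log-likelihood, the rest describe the model
                models.append((fields[:-1], float(fields[-1])))
        return models

    # model with the highest (least negative) log-likelihood
    best_model, best_logl = max(read_results(), key=lambda item: item[1])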

@mireianievas
Contributor Author

Hi,

I think I found the problem. One of the changes I implemented was to 'thaw' the background parameters to give the total model a bit more freedom. I have partially reverted it: the background models will be thawed only if the user specifies the parameters to test (i.e. by using the model_list file and enrico_lrt).

I have also commented out (for now) the pickling of the existing fit results, as I noticed that the behaviour is not reproducible (results differ between the 'clean' execution and the 'recovered fit' one). I need to think of another way to speed up the process (if one wants to connect this tool with external fitting tools from other experiments to compute a combined likelihood).
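
To make the conditional thawing concrete, here is a minimal sketch assuming the Fit object exposes the same freeze(i)/thaw(i)/model[i].isFree() interface used elsewhere in this PR; the model_list attribute and the background_indices argument are assumptions for illustration, not names taken from enrico:

    def thaw_background_parameters(self, background_indices):
        """Thaw the background components so the total model has more freedom.

        Only done when the user asked for a model comparison via enrico_lrt
        and a model_list file; the default analysis keeps the old behaviour.
        background_indices: parameter indices belonging to background sources.
        """
        if getattr(self, "model_list", None) is None:
            return
        print("Thawing background model parameters for the LRT comparison")
        for i in background_indices:
            if not self.Fit.model[i].isFree():
                self.Fit.thaw(i)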

@davidsanchez
Member

Hi,

I am not sure that these lines

    print("Freezing not interesting model parameters")
    for i in range(len(self.Fit.model.params)):
        if self.Fit.model[i].isFree():
            self.Fit.freeze(i)

in the function RunAFit are good. To me, you have to leave the parameters free to make a comparison.

I will have a look at the doc

cheers
david

@mireianievas
Contributor Author

With this I was trying to see how much the likelihood evaluation could be sped up (this was a particular case and, to be honest, I just forgot to comment them out). But yes, in the general case you don't want to fix them. It's safe to comment them out.


@davidsanchez
Member

Could you remove them and update your pull request?

thanks

@mireianievas
Contributor Author

Done! Sorry for not noticing this before!

@davidsanchez davidsanchez merged commit a4b09ad into gammapy:master Jun 2, 2016
@davidsanchez
Member

thanks a lot
