
Setting prefix for composite models #121

Closed
tritemio opened this issue Sep 8, 2014 · 14 comments

tritemio commented Sep 8, 2014

We recently changed Model.prefix to a property.

In composite models we allow setting the prefix but we never use it:

from lmfit.models import GaussianModel

p1 = GaussianModel(prefix='a')
p2 = GaussianModel(prefix='b')

model = p1 + p2
print model.prefix
''

model.prefix = 'A_'
print model.prefix
'A_'

print model.param_names
{'aamplitude', 'acenter', 'asigma', 'bamplitude', 'bcenter', 'bsigma'}

print model.make_params().keys()
['asigma',
 'acenter',
 'aamplitude',
 'afwhm',
 'bsigma',
 'bcenter',
 'bamplitude',
 'bfwhm']

I think the prefix setter should raise a warning and not set the prefix for composite models. Setting the prefix only makes sense for base models in the current implementation, IMHO. We have only base models and composite models; we don't have composite models made of composite models (the components are always split down to the base models). I think this approach is good, but it makes the prefix of a composite model of little use (if any).
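The proposed behavior could be sketched roughly like this (a minimal toy class for illustration, not lmfit's actual code; the `components` check stands in for however lmfit detects a composite):

```python
# Hypothetical sketch: a prefix setter that warns and refuses to set a
# prefix on a composite model, since it would never be used anyway.
import warnings

class Model:
    def __init__(self, prefix=''):
        self._prefix = prefix
        self.components = [self]  # a base model is its own only component

    @property
    def prefix(self):
        return self._prefix

    @prefix.setter
    def prefix(self, value):
        if len(self.components) > 1:  # composite model
            warnings.warn("setting a prefix on a composite model has no "
                          "effect on its components' parameter names")
            return
        self._prefix = value

    def __add__(self, other):
        out = Model()
        out.components = self.components + other.components
        return out

a, b = Model('a_'), Model('b_')
comp = a + b
comp.prefix = 'A_'  # warns; comp.prefix stays ''
```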

Comments?


newville commented Sep 8, 2014

I could be wrong (and I'm writing from a phone, away from a computer), but the prefix might be useful for parameters added from param_hints. For example, you could set a midpoint between two Gaussian centers or something similar. I think that might make the composite prefix worth keeping, though it's not vital.


tritemio commented Sep 8, 2014

Sorry, can you give an example?


newville commented Sep 8, 2014

How about:

v = VoigtModel(prefix='v_')
l = LorentzianModel(prefix='l_')

m = v + l
m.prefix = 'total_'
m.set_param_hint('center', expr='(l_center+v_center)/2')

One interesting side effect is that if one then adds another component:

bkg = QuadraticModel(prefix='bkg_')
newmodel = m + bkg

that hinted parameter total_center from the intermediate composite will still exist. And one can easily compare the total_center from the two models.

Yes, one could fully name the hinted-into-existence parameters, but I don't see why the prefix approach should be disallowed.

danielballan (Member) commented:

I'm not sure I follow that perfectly. What does newmodel.make_params() look like?


newville commented Sep 8, 2014

It gives:

>>> for par in newmodel.make_params().values():
...     print par
<Parameter 'v_sigma', 1.0, bounds=[0:None]>
<Parameter 'v_amplitude', 1.0, bounds=[None:None]>
<Parameter 'v_center', 0.0, bounds=[None:None]>
<Parameter 'v_fwhm', None, bounds=[None:None], expr='3.6013100*v_sigma'>
<Parameter 'v_gamma', None, bounds=[None:None], expr='v_sigma'>
<Parameter 'l_center', 0.0, bounds=[None:None]>
<Parameter 'l_amplitude', 1.0, bounds=[None:None]>
<Parameter 'l_sigma', 1.0, bounds=[0:None]>
<Parameter 'l_fwhm', None, bounds=[None:None], expr='2.0000000*l_sigma'>
<Parameter 'bkg_a', None, bounds=[None:None]> 
<Parameter 'bkg_b', None, bounds=[None:None]>
<Parameter 'bkg_c', None, bounds=[None:None]>
<Parameter 'total_center', None, bounds=[None:None], expr='(l_center+v_center)/2'>

so that total_center is still defined from what you might call the "intermediate composite" or the "model without background". It may not be necessary, but I don't see a reason to prevent this usage.


tritemio commented Sep 8, 2014

I wrote:

We have only base models and composite models. We don't have composite models made of composite models (the components are always split up to the base model).

and I was wrong. We can end up with composite models containing composite models:

p1 = GaussianModel(prefix='a')
p2 = GaussianModel(prefix='b')
p3 = GaussianModel(prefix='c')
p4 = GaussianModel(prefix='d')

model = p1 + p2
model2 = p3 + p4

model3 = model + model2

print model3.components
[<lmfit.Model: gaussian(prefix='a')>,
 <lmfit.Model: gaussian(prefix='b')>,
 <lmfit.Model: gaussian(prefix='c')+gaussian(prefix='d')>]

It may even work when fitting, but I find it confusing. I think we should stick to one level of composition.

Returning to Matt's example: it works because one model is composite and is extended by adding a new base-model component, so the hints and prefix of the composite model are preserved.

But if we allow composing (adding) two composite models we should:

  1. Build a flat composite model that includes all the base components of the "other" model
  2. Merge the hints of the "other" composite model
  3. Handle the case of two composite models with two different prefixes

Points 1 and 2 are easy to implement. But how do we handle point 3?
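Points 1 and 2 could look something like this (a toy sketch with made-up class internals, not lmfit's implementation): `__add__` flattens the components of both operands and merges their hint dicts.

```python
# Hypothetical sketch of points 1 and 2: flatten components and merge
# param hints when composing models.
class Model:
    def __init__(self, prefix=''):
        self.prefix = prefix
        self.components = [self]  # a base model is its own only component
        self.param_hints = {}

    def __add__(self, other):
        out = Model()
        # 1. flatten: collect all *base* components of both operands,
        #    so a composite of composites never nests
        out.components = self.components + other.components
        # 2. merge the hints of both operands (disjoint for base models,
        #    since their parameter names carry different prefixes)
        out.param_hints = {**self.param_hints, **other.param_hints}
        return out

p1, p2, p3, p4 = Model('a'), Model('b'), Model('c'), Model('d')
m3 = (p1 + p2) + (p3 + p4)  # stays flat: four base components
```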


tritemio commented Sep 8, 2014

Food for thought: we should probably think of treating prefix and hints with the same logic. However, hints can never conflict because the parameters are always disjoint.


newville commented Sep 8, 2014

I don't see a fundamental problem with nested models (and I think they ought to work), but adding a check in Model.__add__() for "is other composite" and then flattening the result seems fine.

Should point 3 read "handle the case where two composite models have the same prefix"? (If not, I don't see a problem.) If so, I'm OK with leaving that as "allowed but probably not what you wanted to do".

My guess is that we'll hear many more reports of unintended usages after release than we can ever come up with ourselves. I also think that's OK. The Model class is kind of a big addition so I think it is OK to expect that we haven't gotten all aspects perfect.


newville commented Sep 8, 2014

@tritemio

Food for thought: we should probably think of treating prefix and hints with the same logic.

How so? What logic is that?

However, hints can never conflict because the parameters are always disjoint.

Hints are applied to parameters, possibly implying the creation of new parameters. I think hints might be able to conflict with one another (if you worked at it), but so what?


tritemio commented Sep 8, 2014

The prefix of a composite model is only used for parameters defined via hints. The names of those parameters are computed on the fly using self.prefix. Questions:

  • When adding two composite models with two different prefixes, what self.prefix should the resulting model have?
  • And if both models have parameters defined by hints, should those parameters change prefix? (I suppose not)
  • If each hint-added parameter should retain the prefix of its original composite model, should we build a mapping to know which parameter uses which prefix? (I suppose not)

So it seems to me that the easy case is when the two prefixes are the same. In that case, you are right, we could have a conflicting hint on a hint-defined parameter (not on the other parameters). But this can be handled, I guess, by dropping one hint and issuing a warning. I don't know how to handle the case of different prefixes (other than dropping one and warning the user).
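The "drop one hint and warn" idea could be sketched like this (a standalone illustration with a hypothetical `merge_hints` helper, not anything in lmfit):

```python
# Hedged sketch: merge two param-hint dicts; on a name collision with
# different hint contents, keep the first and warn instead of failing.
import warnings

def merge_hints(hints1, hints2):
    merged = dict(hints1)
    for name, hint in hints2.items():
        if name in merged and merged[name] != hint:
            warnings.warn("conflicting hint for %r; keeping the first" % name)
            continue  # drop the second, conflicting hint
        merged[name] = hint
    return merged

# two composites with the same prefix may both hint 'total_center'
h1 = {'total_center': {'expr': '(l_center+v_center)/2'}}
h2 = {'total_center': {'expr': '(a_center+b_center)/2'}}
merged = merge_hints(h1, h2)  # warns; keeps the first expr
```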


newville commented Sep 9, 2014

Concrete examples might help. Without clear examples of obvious problems, I'm -0.5 (willing to be out-voted) on adding much code to deal with such what-if cases.

When building composites or copying models, there may be some non-obvious behavior for parameter hints, but there will be some behavior, and one can always alter the hints after the fact, or alter the parameters for that matter. If the behavior with the current code is clearly wrong it should be fixed, otherwise, probably not. Trying to predict and handle every corner case is not worth the effort.


tritemio commented Sep 9, 2014

Ok, I see your point. I made a PR (#122) that only flattens the components and merges the hints.

There is still the corner case of two composite models with different prefixes, but it can be handled later if it really hurts somebody.


newville commented Sep 9, 2014

Is this issue close-able?


tritemio commented Sep 9, 2014

Yes.

tritemio closed this as completed Sep 9, 2014