I hacked together a custom modifier for a polynomial "shapefactor" using the features recently added to the pyhf master branch by @lukasheinrich (thanks a lot for implementing those!). A lot of the interfaces will likely soon be spelled out in docstrings once #1641 is implemented, so I try to limit myself to specific points that I find hard to guess from skimming through the code:
Finally I have similar questions not about modifiers, but rather some (nearly) undocumented pyhf internals:
Sorry about the length! Answers to only one or some of the questions will still be quite helpful!
That partially depends on the behavior of the optimizer itself. We don't check input parameters based on the bounds in that code (lines 50 to 63 in 121903b).
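For illustration (my own minimal sketch using standard pyhf APIs, not code from this thread): the bounds are simply forwarded to the optimizer, and pyhf does not itself validate `init_pars` against `par_bounds`.

```python
import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0, 10.0], bkg=[50.0, 60.0], bkg_uncertainty=[5.0, 12.0]
)
data = [55.0, 70.0] + model.config.auxdata

init = model.config.suggested_init()
bounds = model.config.suggested_bounds()

# par_bounds is handed to the optimizer as-is; pyhf itself does not reject
# an init value that lies outside the corresponding bound, so what happens
# in that case depends on the backend/optimizer you picked.
best_fit = pyhf.infer.mle.fit(data, model, init_pars=init, par_bounds=bounds)
print(best_fit)
```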
Not sure what you mean by "by key" here. The fundamental point is that two modifiers of the same type can exist on a sample if they have different names. If they have the same name, we take the last defined modifier of that type/name (#1899), which is a bug and needs to be fixed. If two modifiers of the same type have different names on the same sample, that's also fine. But there is a relatively fundamental assumption that you can't have multiple modifiers of the same type and name on the same sample.
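For concreteness, a made-up spec snippet (not from this thread) showing the allowed case, two `normsys` modifiers with different names on the same sample:

```python
import pyhf

spec = {
    "channels": [
        {
            "name": "singlechannel",
            "samples": [
                {
                    "name": "signal",
                    "data": [5.0, 10.0],
                    "modifiers": [
                        {"name": "mu", "type": "normfactor", "data": None},
                        # same type, different names: fine
                        {"name": "theory_scale", "type": "normsys", "data": {"hi": 1.1, "lo": 0.9}},
                        {"name": "lumi_like", "type": "normsys", "data": {"hi": 1.02, "lo": 0.98}},
                    ],
                },
                {"name": "background", "data": [50.0, 60.0], "modifiers": []},
            ],
        }
    ]
}
model = pyhf.Model(spec)
print(model.config.par_order)  # mu, theory_scale, lumi_like each get their own parameter
```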
Yes please. Try to be as consistent in the API as possible. We'll end up trying to add typing to all of this, and likely we'll need to add
Directly meaning what? Lukas can comment better, but we sometimes need to modify or concatenate things after the fact, once the combined modifiers have been built up. Until you call …
Why do you need bin-centers? HistFactory doesn't care about the actual binning/bin edges, just that you have n-bins. More generally, not sure what the best way is. We'll have to see what you want to do and approach it on a case-by-case basis... definitely avoid globals though.
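If you do end up needing bin centers to evaluate the polynomial, one generic way to avoid globals is to compute them once outside pyhf and capture them in a closure. This is only a sketch with made-up helper names; how you wire the resulting callable into the experimental custom-modifier machinery is not shown here.

```python
import numpy as np

def make_polynomial_weights(bin_edges):
    """Hypothetical helper: evaluate a polynomial at the bin centers.

    pyhf/HistFactory only ever see the n per-bin values, so the bin centers
    live entirely on this side and are captured in the closure (no globals).
    """
    edges = np.asarray(bin_edges)
    centers = 0.5 * (edges[1:] + edges[:-1])

    def weights(coefficients):
        # coefficients: lowest order first; returns one multiplicative weight per bin
        return sum(c * centers**i for i, c in enumerate(coefficients))

    return weights

poly = make_polynomial_weights([0.0, 1.0, 2.0, 3.0])
print(poly([1.0, 0.1]))  # [1.05 1.15 1.25]
```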
Yes, it's a bit convoluted. It's needed here: lines 543 to 547 in acde7f4, and src/pyhf/modifiers/__init__.py lines 39 to 47 in acde7f4.
It should. Needs to be fixed in #1641.
There can be broadcasting depending on the batch size as well.
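As a plain numpy sketch (not the actual pyhf code), the same expression has to yield per-bin factors both without and with a leading batch axis:

```python
import numpy as np

nbins = 4
nominal = np.ones(nbins)

# unbatched: a single parameter value scales the bins -> shape (nbins,)
norm = np.asarray(1.1)
print((norm * nominal).shape)          # (4,)

# batched: one value per batch element, kept with a trailing length-1 axis
batch_size = 5
norms = np.full((batch_size, 1), 1.1)
print((norms * nominal).shape)         # broadcasts to (5, 4)
```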
From a user perspective:

```
>>> import pyhf
>>> from pyhf import tensorlib
>>> pyhf.set_backend('jax')
>>> tensorlib
<pyhf.tensor.numpy_backend.numpy_backend object at 0x10f96f300>
>>> pyhf.tensorlib
<pyhf.tensor.jax_backend.jax_backend object at 0x10f9633c0>
```

Note that users won't explicitly import `tensorlib` like that. Now, …
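If code needs to follow backend switches, it can look the backend up at call time, e.g. via `pyhf.get_backend()`, instead of importing the name once:

```python
import pyhf

pyhf.set_backend('jax')
tensorlib, _ = pyhf.get_backend()  # resolves to the currently active backend
print(tensorlib)                   # consistent with pyhf.tensorlib
```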
This is when you may or may not have batched input (see my answer above), so you initialize a ParamViewer that lets you extract the parameters in a potentially batched way without caring whether you're batched or not.
The indices in the potentially batched
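In spirit (a plain numpy sketch, not the ParamViewer implementation), the batch-agnostic extraction is just indexing on the last axis:

```python
import numpy as np

# positions of this modifier's parameters in the flat parameter vector
indices = [2, 0]

def select(pars):
    # works for unbatched pars of shape (npars,) and batched pars of
    # shape (batch_size, npars): the last axis is always the parameter axis
    return np.take(np.asarray(pars), indices, axis=-1)

print(select([0.5, 1.0, 2.0]))                     # [2.  0.5]
print(select([[0.5, 1.0, 2.0], [0.6, 1.1, 2.1]]))  # [[2.  0.5] [2.1 0.6]]
```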
Yeah. Can you file an issue for that?
We haven't figured out the general pattern yet. This is likely something that will …