Generalize convolutions for both initial and final states #172
@cschwan You're way too efficient. The idea is to have a dedicated meeting to discuss this in detail.
The implementation might take a while 😄. But I anticipate introducing a new file format soon to accommodate a number of changes/simplifications (see #118), and it would be ideal to know roughly what changes are required to support them in said file format. I think the meeting would indeed be perfect to discuss that point.
@enocera: which of the following scenarios do you expect to be relevant in the future:
Basically my question is: how many 'convolutions' (with PDFs and/or fragmentation functions) will we need to support in PineAPPL?
I guess there is also the SIA case, i.e. "lepton-lepton collisions with *".
What does SIA stand for?
semi-inclusive annihilation
1. Yes. This is called semi-inclusive deep-inelastic scattering (SIDIS).
2. No. This is a process that occurs, but it cannot be described within collinear factorisation. The object that is introduced is called a di-hadron fragmentation function. Let's forget about it.
3. Yes. This is relevant for the LHC.
4. No. For the same reason as in 2.

On top of these processes, there is hadron production in electron-positron annihilation (SIA = single-inclusive annihilation), so one FF (and no PDF).
Here's a summary:
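In code form, roughly (a bookkeeping sketch assembled from the yes/no answers above; the process labels and structure are illustrative, not PineAPPL code):

```python
# Which convolution functions each supported process needs,
# as discussed in the answers above.
CONVOLUTIONS = {
    "SIDIS (lepton-proton, one identified hadron)": ["PDF", "FF"],
    "pp -> h + X (LHC, one identified hadron)": ["PDF", "PDF", "FF"],
    "SIA (e+ e- -> h + X)": ["FF"],
}

for process, funcs in CONVOLUTIONS.items():
    print(f"{process}: {len(funcs)} convolution(s) ({' x '.join(funcs)})")
```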
I had a brief chat with @t7phy yesterday, and there are a couple of points that require answering:
Sets of fragmentation functions are available through LHAPDF in the very same format as PDFs (see the loading sketch after these points).
In general µR = µF = µFrag, but we may want to do scale variations (in order to estimate MHOUs) exactly as we do with PDFs.
The convolution is the same, though of course different objects evolve differently. Specifically, there are the following cases:
We must use different EKOs (time-like evolution), but these are already available in EKO, in the same format as space-like EKOs.
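Since the FF sets ship in the standard LHAPDF format, loading them works like loading a PDF; a small sketch (the set name is only an example from the NNFF family, substitute whatever FF set is installed):

```python
import lhapdf

# FF sets use the same grid format and interface as PDF sets,
# so the usual loading call works.
ff = lhapdf.mkPDF("NNFF10_PIp_nlo", 0)

# For an FF, xfxQ2 returns z * D_i(z, Q^2), in analogy to
# x * f_i(x, Q^2) for a PDF; here the up quark (PID 2) at
# z = 0.3 and Q^2 = 100 GeV^2.
print(ff.xfxQ2(2, 0.3, 100.0))
```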
Hi @cschwan,
The FFs are delivered in the LHAPDF format (see, for example, the NNFFXX sets on the LHAPDF webpage). The structure is exactly the same as for the (un)polarised and nuclear PDFs.
The convolution is also done in the exact same way; at the end of the day, it is always just a sum over different flavor combinations (see the sketch below).
The structure of the FK tables should also be the same as for the (un)polarised (n)PDFs. EKO can be used in the same way as for PDFs, but using time-like evolution. This has already been implemented in EKO (NNPDF/eko#232, NNPDF/eko#245); for this to fully work in the pipeline, it is just missing the link in
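A minimal sketch of that flavor sum for a SIDIS-like case (the flattened grid layout and names here are hypothetical, not PineAPPL's internals):

```python
# Hypothetical flattened grid: (pid_in, pid_out, x, z, q2, weight)
# entries. The convolution is the same weighted sum over flavor
# combinations whether the object is a PDF or an FF; only the
# interpolated object differs.
def convolute_sidis(entries, pdf, ff):
    result = 0.0
    for pid_in, pid_out, x, z, q2, weight in entries:
        result += (
            weight
            * pdf.xfxQ2(pid_in, x, q2) / x  # incoming proton: PDF
            * ff.xfxQ2(pid_out, z, q2) / z  # identified hadron: FF
        )
    return result
```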
I'd start from the simplest case (SIA), and then possibly move to SIDIS. I think that SIDIS may be similar to the case of proton-ion collisions, in which one has to convolve a proton PDF and a nuclear PDF; in SIDIS, one has to convolve a proton PDF and an FF (the difference being that proton and nuclear PDFs evolve with the same EKOs, while proton PDFs and FFs evolve with different EKOs).
Ah, @enocera has already answered all of your questions in the meantime.
@enocera was 1 min faster than me 🙃 and @Radonirinaunimi half a minute
just to add: for SIDIS we will still have 2 collinear objects (one PDF and one FF) but 3 scales: µF + µR + µFrag |
Which we may want to vary independently.
If we do a 9-pt scale variation, we'd probably want to do a 27-pt (3×3×3) variation?
Correct.
A 16-pt variation, in which we exclude the outermost variations in opposite directions; see e.g. Eq. (2) in https://arxiv.org/pdf/1311.1415.pdf and the discussion in https://arxiv.org/pdf/1001.4082.pdf.
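To make the counting concrete, a sketch of enumerating the variations (the scale ordering and the illustrative filter are mine; the precise 16-pt set is defined in the papers linked above):

```python
from itertools import product

# Vary each of the three SIDIS scales (muR, muF, muFrag) by the
# customary factors 1/2, 1, 2: 3^3 = 27 combinations.
FACTORS = (0.5, 1.0, 2.0)
points_27 = list(product(FACTORS, repeat=3))

# Reduced prescriptions drop "outermost variations in opposite
# directions". As an illustration only, this filter removes every
# combination in which two scales sit at opposite extremes:
def opposite_extremes(combo):
    return any(a / b == 4.0 for a in combo for b in combo)

points_reduced = [c for c in points_27 if not opposite_extremes(c)]
print(len(points_27), len(points_reduced))  # 27 and 15 with this filter
```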
Apart from the scale variations bit, it should be just that. Moreover, SIA should not even require any additional scale, since you could abuse the existing one.
@cschwan I use this issue here instead of #135 to ask a more technical question - feel free to correct me. Do you already have a strategy for how to support more convolutions? As discussed above we need up to 3; broadly speaking I can see two strategies:
trivial statement: a single grid will have a fixed number of collinear distributions
I thought about all of the related problems long and hard, and right now I'm thinking of doing the following:
as discussed above, it is unlikely we will need more than 3 collinear dimensions in the mid term, so I'm not sure you want to opt for the most general case, given the added complexity ((very) long term there are always more options à la double parton scattering)
so that would basically be my strategy 1, right? As for the nuclear stuff: we still need to state what kind of PDF we expect, i.e. what is accounted for inside the grid and what the PDF should account for (but maybe @Radonirinaunimi knows better)
going iteratively is a good idea, I think - also from an EKO point of view. Each of the intermediate objects would still be a grid, just as an FK table is a grid (with special properties, but still a grid)
My point is: for three dimensions I basically already have to think about the general case, so the step from 3 to D is trivial (I think, but let's see)
It's probably not much more work, and solving a more general case will hopefully give us better abstractions.
Yes!
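As a sketch of what the general case might look like (hypothetical names, not the actual PineAPPL types): a grid carrying a fixed but arbitrary number D of collinear distributions, with subgrid entries keyed by D momentum-fraction indices plus a scale index:

```python
from dataclasses import dataclass, field

# Hypothetical D-dimensional grid: instead of hard-coding (q2, x1, x2),
# entries are keyed by a scale index followed by D momentum-fraction
# indices; D is fixed per grid, as noted above.
@dataclass
class GridD:
    n_convolutions: int                          # D
    weights: dict = field(default_factory=dict)  # (iq2, i1, ..., iD) -> w

    def set(self, indices, weight):
        assert len(indices) == self.n_convolutions + 1
        self.weights[tuple(indices)] = weight

# D = 2 recovers the current hadron-hadron case; D = 3 covers
# proton-proton with an identified hadron (PDF x PDF x FF).
grid = GridD(n_convolutions=3)
grid.set((0, 1, 2, 3), 1.25)
```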
Let's start with the following:
The first line is for backwards compatibility (and should be dropped in the future), and the following lines are what we need to distinguish between different convolution functions that may describe the same particle: for instance, the (unpolarized) proton PDF, the polarized proton PDF and the proton fragmentation function. @t7phy you need to change the lines here: lines 1325 to 1329 in 1954e5c
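A minimal sketch of that distinction (a hypothetical Python stand-in; the actual types live in the Rust crate):

```python
from enum import Enum, auto

# The particle ID alone cannot tell these objects apart, so each
# convolution function carries a type tag alongside the PID.
class ConvType(Enum):
    UNPOL_PDF = auto()  # unpolarized PDF (the backwards-compatible default)
    POL_PDF = auto()    # polarized PDF
    UNPOL_FF = auto()   # fragmentation function

# All three describe a proton (PID 2212) but must be convolved
# with different objects:
convolutions = [
    (ConvType.UNPOL_PDF, 2212),
    (ConvType.POL_PDF, 2212),
    (ConvType.UNPOL_FF, 2212),
]
```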
the hadron fragmentation function (which might by chance still be a proton, but the interesting cases are pions, kaons, Ds, ...)
It was an example 😃.
@Radonirinaunimi I added you here, because your pull request addresses one of the TODOs listed above.
@enocera if we produce FK-tables for predictions involving both PDFs and FFs, are we interested in different fitting scales for the PDFs and FFs?
@cschwan that's a good question. In general I'd say that we want to keep the parametrisation scale uniform across different objects. But I see cases in which we may want to have different parametrisation scales for different processes (e.g. unpolarised PDFs and FFs). I would however see the second option as a sophistication of the first one, therefore I'd go for the first one to start with, unless the implementation of the second is straightforward.
This generalization will require the following changes:

- The `SparseArray3` structure must be generalized; we need `SparseArray4`, and then it's possibly better to even think about `SparseArrayN`. This is done with Add new type `PackedArray` #275.
- The evolution (`Grid::evolve*`) needs to be changed in order to support time-like EKOs. For that to work we should discuss the possible interfaces and probably first merge Add method(s) to support larger EKOs #244; partly addressed in Accommodate for two different EKOs in Evolution #289. The remaining bits will be addressed in Implement v1 file format [WIP] #299.
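A sketch of the idea behind the N-dimensional array (a hypothetical Python stand-in for the `PackedArray` of #275; the real type's storage strategy may differ):

```python
# Store only non-zero entries, keyed by their index tuple, so the
# same type serves 3, 4, or N dimensions.
class PackedArrayN:
    def __init__(self, shape):
        self.shape = tuple(shape)
        self.entries = {}

    def __setitem__(self, index, value):
        assert len(index) == len(self.shape)
        if value != 0.0:
            self.entries[tuple(index)] = value

    def __getitem__(self, index):
        return self.entries.get(tuple(index), 0.0)

# A 4-dim (SIDIS-like) grid; a shape of length 3 recovers the
# current SparseArray3 case.
a4 = PackedArrayN((30, 50, 50, 50))
a4[1, 2, 3, 4] = 0.5
print(a4[1, 2, 3, 4], a4[0, 0, 0, 0])  # 0.5 0.0
```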