SOLO: Saturation physics at One Loop Order
This is the program used to calculate the complete next-to-leading order cross section for inclusive hadron production in pA collisions, described in the paper
Anna M. Stasto, Bo-Wen Xiao, David Zaslavsky
"Towards the Test of Saturation Physics Beyond Leading Logarithm"
Phys. Rev. Lett. 112, 012302 (2014)
Please cite this paper if you use the results of the code in a publication.
Official documentation for the program is kept at https://diazona.github.io/SOLO/.
SOLO is written and maintained by me, David Zaslavsky. I no longer work in academia, so the code is not actively being developed, but I'm happy to answer questions about the program to the best of my ability. You can email me with questions about SOLO at email@example.com.
The quickest and intended way to compile the code is as follows: first, ensure that git, GSL, MuParser, and CMake are properly installed (as well as a C++ compiler such as GCC). Then, in a shell, run the following commands:
```
git clone https://github.com/diazona/SOLO.git
cd SOLO
git submodule init
git submodule update
```
Then, download the MSTW PDF code from http://mstwpdf.hepforge.org/code/code.html.
Extract the MSTW PDF interface files (including mstwpdf.h) from the tarball and place them in SOLO's src/ directory. (We are not authorized to distribute the MSTW PDF interface as part of SOLO, which is why this has to be done manually.) Then, from the directory SOLO/ (the parent of src/), run these commands:
```
mkdir build
cd build
cmake .. && make
```
At the end of this you should have a build/src/ directory containing oneloopcalc and the other programs.
Other ways of installing the program (e.g. without git, or without CMake) are described in the documentation.
Running the program
In order to run the program, you will need two additional files:
- The grid file for the MSTW 2008 PDF at NLO, from the paper

  A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, "Parton distributions for the LHC", Eur. Phys. J. C 63 (2009) 189-285

  The filename is mstw2008nlo.00.dat, and it can be downloaded as part of an archive at the MSTW PDF site http://mstwpdf.hepforge.org/code/code.html.

- The data file for the DSS fragmentation functions at NLO, from the paper

  Daniel de Florian, Rodolfo Sassot, Marco Stratmann, "Global analysis of fragmentation functions for pions and kaons and their uncertainties", Phys. Rev. D 75, 114010 (2007)

  The filename is PINLO.DAT. Unfortunately we (the authors of SOLO) are not aware of a website where this file is directly available.
The program is invoked as `oneloopcalc <arguments...>`, where the arguments can include any number of the following, in any order:
Hard factor group specifications
These tell the program which terms to calculate. A hard factor group specification is made of any number of individual hard factor specifications separated by commas, for example `p.h02qq,m.h12qq`.
The program will calculate the results for all the terms in the group and display a total for each group. You can name a group by prefixing the specification with the name and a colon, like `mygroup:p.h02qq,m.h12qq`.
The name will be used to label a column in the results table printed when the program finishes.
An individual hard factor specification is a string like "p.h02qq" or "m.h16gg". The "p." at the beginning specifies the position space implementation, "r." specifies a position space implementation with the angular integral already done, and the "m." specifies the momentum space implementation. The prefix can be omitted, in which case position space is taken as the default. (Not recommended, as position space is highly inaccurate for some terms.) For compatibility with older versions, the program also accepts a colon instead of the period (like "p:h02qq").
The rest of the string gives the name of a hard factor. The canonical set of possible names that can be used with a "p." prefix comprises all the return values from the get_name() methods in hardfactors_position.h, and similarly for "r." with hardfactors_radial.h and "m." with hardfactors_momentum.h. Here's a near-complete list:
```
p.h02qq m.h02qq p.h12qq r.h12qq p.h14qq m.h14qq p.h02gg m.h02gg
p.h12gg r.h12gg p.h12qqbar m.h12qqbar p.h16gg m.h16gg
p.h112qg r.h112qg p.h122qg r.h122qg p.h14qg m.h14qg
p.h112gq r.h112gq p.h122gq r.h122gq p.h14gq m.h14gq
```
and also these, which go beyond what is in the paper:
```
r.h12qq.1 r.h12qq.1A r.h12qq.1B r.h12qq.2 r.h12qq.3
r.h012qqexp m.h1qqexact m.h1ggexact
```
The names are case-insensitive.
It's also possible to specify the group of all leading order terms using the shortcut "lo", the group of all next-to-leading order terms using the shortcut "nlo.std", or the group of all next-to-leading order terms with the high-pT expansion for the diagonal channels using the shortcut "nlo.hipt". The shortcut "nlo" will choose between the latter two automatically: "nlo.std" if approximate kinematics are in use (exact_kinematics = 0 in the Context), or "nlo.hipt" if exact kinematics are in use (exact_kinematics = 1). These shortcuts, and the default group used if no hard factor groups are specified on the command line, are defined in oneloopcalc.cpp.
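Putting this together, a complete command line might look like the following sketch. The configuration filename config.cfg and the pT value 1.5 are placeholders; the group specifications use the shortcut "lo" and hard factor names from the list above:

```
./oneloopcalc lo mygroup:m.h12qq,m.h14qq config.cfg 1.5
```

This would produce a results table with one column for the "lo" group and one for "mygroup".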
Configuration file names
Configuration files contain parameters for the program, one per line, in the format `key = value`. Keys are case-insensitive. The canonical reference for the keys which are used is the code in context.cpp. Here's a mostly-complete list:
- the mass number
- the absolute error at which to stop an integration, for strategies which use this termination condition
- value for the fixed coupling
- coefficient for the LO running coupling (default 11 - 2*Nf/3)
- the centrality coefficient, 0-1
- if the factorization scale scheme is c0r, whether to skip calculating terms that should be zero
- the color factor
- the type of coupling, "fixed" or "running"
- number of calls to use for cubature integration
- whether to use exact kinematic expressions
- "fixed", "4pT2", "CpT2", or "c0r", specifying how to set the factorization scale
- if factorization_scale is "CpT2", this is the coefficient to multiply by pT2 to get mu2
- filename to read DSS FF data from
- the anomalous dimension in the MV gluon distribution
- file to read the momentum data for a gluon distribution from
- file to read the position data for a gluon distribution from
- number of subdivisions to use when integrating a position gluon distribution
- the type of the gluon distribution, e.g. "file"
- the type of hadron detected, e.g. "pi0"
- the cutoff used for integration over a theoretically infinite region
- the integration type to use, e.g. "VEGAS" (best)
- the exponent in the definition of the saturation scale
- the parameter in the MV gluon distribution, in GeV
- lambdaQCD (default 0.2428711 = sqrt(0.0588)): QCD lambda in GeV, used in the running coupling
- miser_iterations (default 10000000 = 1e7): number of iterations to use in MISER integration
- factorization scale in GeV, if factorization_scale is "fixed"
- number of colors
- number of flavors
- filename to read MSTW PDF from
- the type of projectile, "deuteron" or "proton"
- seed for the GSL random number generator
- algorithm to use for generating random numbers; allowed values are in the GSL documentation
- comma-separated list of transverse momenta
- algorithm to use for generating quasirandom numbers for QMC integration; allowed values are in the GSL documentation
- the number of iterations at which to stop quasi Monte Carlo integration
- the position of the Landau pole for the regulated LO running coupling
- the relative error at which to stop an integration, for strategies which use this termination condition
- satscale_source (default "extract from momentum"): for a file gluon distribution, how to extract the saturation scale; allowed values are "analytic" (Q0²(x0/x)^λ), "extract from momentum", which determines the saturation scale by finding the momentum where the gluon distribution equals a fixed fraction of its value at a reference momentum, and "extract from position", which finds the radius where the gluon distribution equals a fixed threshold value
- if satscale_source is "extract from momentum" or "extract from position", this is the fixed threshold value (or fraction of the value at a reference point, in the momentum case) that the gluon distribution should equal at the saturation scale
- cross-sectional area of the hadron
- sqrt(s), the collider's CM energy
- number of function evaluations to use in each step of the VEGAS Monte Carlo algorithm after the first
- number of function evaluations to use to refine the grid in the first step of the VEGAS algorithm
- the fit parameter from the definition of the saturation scale
- comma-separated list of rapidities (in the center of mass frame) to run the calculation at
The configuration files have to at least set Y, and also pT if no transverse momenta are specified as command line arguments.
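As an illustration, a minimal configuration file might look like the following sketch. Only Y, pT, exact_kinematics, lambdaQCD, and miser_iterations are key names that appear in this document; the values shown are arbitrary examples, and the canonical spelling of every key should be checked against context.cpp.

```
Y = 2.4,3.2
pT = 0.5,0.7,0.9
exact_kinematics = 0
lambdaQCD = 0.2428711
miser_iterations = 10000000
```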
Transverse momentum values
Any numbers given as command line arguments are put together into one big list of transverse momentum values to run the calculation at. If a comma-separated list of numbers is given, it will be split apart and each number added to the one big list. There's no significance to grouping certain pT values together and not others (`0.5 0.7 0.8,0.9` and `0.5,0.7 0.8 0.9` are exactly equivalent). Any pT values specified on the command line will replace pT values specified in the config file, if there is one.
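For example, both of the following sketches (with a hypothetical config.cfg) run the LO group at the same four pT values:

```
./oneloopcalc lo config.cfg 0.5 0.7 0.8,0.9
./oneloopcalc lo config.cfg 0.5,0.7 0.8 0.9
```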
Print out the results for each individual hard factor, not just the total for each hard factor group
Track and print out the minimum and maximum values of kinematic variables
Print out parameters and values for every call to the gluon distribution. Output goes to the file trace_gdist.output in the working directory. (Expect this file to grow to several hundred megabytes.)
Print out selected variables from the integration context after every single evaluation of the function. The output goes to the file trace.output in the working directory. (Expect this file to grow to several megabytes.) The allowable variables are those in ictx_var_list.inc, or you can use "--trace=all" or "--trace=*" to print out all available variables.
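As a sketch (the config filename and pT value are placeholders), enabling tracing of all available variables might look like:

```
./oneloopcalc --trace=all lo config.cfg 1.5
```

Remember that trace.output can grow quickly, so this is best used for short runs.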
Basically the code runs as follows:

1. Collect the command line options and settings from configuration files and put everything into a Context
2. For each combination of pT and Y, and for each hard factor group:
    a. Construct an Integrator with the current values of pT and Y and the current hard factor group
    b. The Integrator calls the GSL Monte Carlo integration routine
    c. Each time the MC routine evaluates the function, it will:
        - Update the variables in the integration context
        - Go through the list of HardFactor instances in the current group and get a value from each one
        - Return the total value
    d. Store the value and error bound returned from the Monte Carlo routine
3. Print out all the results
Source code for the program itself:

- Declares an output stream to write status messages to
- Declares an exception to be thrown when GSL reports an error
- A class that abstractly represents a hard factor (i.e. an expression to be integrated)
- Implementation of the momentum space hard factors (terms)
- Implementation of the position space hard factors (terms)
- Implementations of the gluon distributions
- A program to print out values from the gluon distributions
- Implementations of the fixed and LO running couplings
- Implementations of the various schemes for the factorization scale
- A class that stores the kinematic variables used in the calculation; the values stored in this get updated every time the function is evaluated
- A class that stores the parameters for the integral and actually calls the GSL Monte Carlo integration functions
- Definitions of integration types; an integration type specifies how many dimensions are in the Monte Carlo integral and what the limits are
- Some string and list processing functions
- Variables from the integration context, listed in a separate file as a preprocessor hack of sorts

Source code and object code for other things used by the program:

- A C++ interface to the DSS fragmentation functions
- Test program for the DSS FF interface
- A 2D interpolation library compatible with the GSL; full source code at https://github.com/diazona/interp2d
- A library for quasi Monte Carlo integration compatible with the GSL; full source code at https://github.com/diazona/quasimontecarlo

Source code for other things used by the program, written by other people:

- A C++ interface to the MSTW PDFs
- A library for multidimensional cubature (deterministic integration); not currently used

Non-source code files:

- Instructions for the build system, CMake
- DSS fragmentation function data
- DSS fragmentation function data with extrapolation to lower z
- MSTW PDF data