Using SModelS

SModelS can take SLHA or LHE files as input (see Basic Input <BasicInput>). It ships with a command-line tool runSModelS.py <runSModelS>, which reports on the SMS decomposition <Decomposition> and theory predictions <TheoryPredictions> in several output formats <smodelsOutput>.

For users more familiar with Python and the SModelS basics, an example code Example.py <exampleCode> is provided showing how to access the main SModelS functionalities: decomposition <Decomposition>, the database <Database> and computation of theory predictions <TheoryPredictions>.

The command-line tool (runSModelS.py <runSModelS>) and the example Python code (Example.py <exampleCode>) are described below.

Note

For non-MSSM (incl. non-SUSY) input models the user needs to write their own model.py file and specify which BSM particles are even or odd under the assumed Z2 symmetry (see adding new particles <newParticles>). From version 1.2.0 onwards it is also necessary to define the BSM particle quantum numbers in the same file [1].

runSModelS.py

runSModelS.py covers several different applications of the SModelS functionality, with the option of turning various features on or off, as well as setting the basic parameters <parameterFile>. These functionalities include detailed checks of input SLHA files, running the decomposition <Decomposition>, evaluating the theory predictions <TheoryPredictions> and comparing them to the experimental limits available in the database <Database>, determining missing topologies <topCoverage> and printing the output <smodelsOutput> in several available formats.

Starting with v1.1, runSModelS.py is equipped with two additional functionalities: it can process a folder containing a set of SLHA or LHE files, and it can parallelize the processing of the files in this input folder.

The usage of runSModelS.py is described in its help output, obtained by running runSModelS.py -h.

A typical usage example is:

runSModelS.py -f inputFiles/slha/simplyGluino.slha -p parameters.ini -o ./ -v warning

The resulting output <smodelsOutput> will be generated in the current folder, according to the printer options set in the parameters file <parameterFile>.

The Parameters File

The basic options and parameters used by runSModelS.py are defined in the parameters file. An example parameter file, including all available parameters together with a short description, is stored in parameters.ini <images/parameters.ini>. If no parameter file is specified, the default parameters stored in smodels/etc/parameters_default.ini <images/parameters_default.ini> are used. Below we give more detailed information about each entry in the parameters file; an illustrative excerpt of such a file is shown after the list.

  • options: main options for turning SModelS features on or off
  • checkInput (True/False): if True, runSModelS.py will run the file check tool <fileChecks> on the input file and verify if the input contains all the necessary information.
  • doInvisible (True/False): turns invisible compression <invComp> on or off during the decomposition <Decomposition>.
  • doCompress (True/False): turns mass compression <massComp> on or off during the decomposition <Decomposition>.
  • computeStatistics (True/False): turns the likelihood and χ2 computation on or off (see likelihood calculation <likelihoodCalc>). If True, the likelihood and χ2 values are computed for the EM-type results <EMtype>.
  • testCoverage (True/False): set to True to run the coverage <topCoverage> tool.
  • combineSRs (True/False): set to True to use, whenever available, covariance matrices to combine signal regions. Note that this might take a few seconds per point. Set to False to use only the most sensitive signal region (faster!). Available v1.1.3 onwards.
  • particles: defines the particle content of the BSM model
  • model: pathname to the Python file that defines the particle content of the BSM model, given either in Unix file notation ("/path/to/model.py") or as a Python module path ("path.to.model"). Defaults to share.models.mssm, which is the standard MSSM. See the smodels/share/models folder for more examples. The directory name can be omitted; in that case, the current working directory as well as smodels/share/models are searched for the file.
  • parameters: basic parameter values for running SModelS
  • sigmacut (float): minimum value for an element <element> weight (in fb). Elements <element> with a weight below sigmacut are neglected during the decomposition <Decomposition> of SLHA files (see Minimum Decomposition Weight <minweight>). The default value is 0.03 fb. Note that, depending on the input model, the running time may increase considerably if sigmacut is too low, while too large values might eliminate relevant elements <element>.
  • minmassgap (float): maximum value of the mass difference (in GeV) for performing mass compression <massComp>. Only used if doCompress = True.
  • maxcond (float): maximum allowed value (in the [0,1] interval) for the violation of upper limit conditions <ULconditions>. A zero value means the conditions are strictly enforced, while 1 means the conditions are never enforced. Only relevant for printing the output summary <fileOut>.
  • ncpus (int): number of CPUs. When processing multiple SLHA/LHE files, SModelS can run in a parallelized fashion, splitting the input files into equal chunks. ncpus = -1 parallelizes over as many processes as there are CPU cores on the machine. The default value is 1. Warning: Python already parallelizes many tasks internally.
  • database: allows for selection of a subset of experimental results <ExpResult> from the database <Database>
  • path: the absolute (or relative) path to the database <databaseStruct>. The user can supply either the directory name of the database or the path to the pickle file <databasePickle>. Since v1.1.3, http addresses may also be given, e.g. http://smodels.hephy.at/database/official113. See the github database release page for a list of public database versions.
  • analyses (list of results): set to ['all'] to use all available results. If a list of experimental analyses <ExpResult> is given, only these will be used. For instance, setting analyses = CMS-PAS-SUS-13-008,ATLAS-CONF-2013-024 will only use the experimental results <ExpResult> from CMS-PAS-SUS-13-008 and ATLAS-CONF-2013-024. Wildcards (*, ?, [<list-of-or'ed-letters>]) are expanded in the same way the shell does wildcard expansion for file names. So, for example, analyses = CMS* leads to evaluation of results from the CMS experiment only, and *SUS* selects everything containing SUS, no matter if from CMS or ATLAS. Furthermore, the selection of analyses can be restricted to a given centre-of-mass energy with a suffix beginning with a colon and an energy string in unum style, like :13*TeV. Note that the asterisk behind the colon is not a wildcard. :13, :13TeV and :13 TeV are also understood but discouraged.
  • txnames (list of topologies): set to ['all'] to use all available simplified model topologies <topology>. The topologies <topology> are labeled according to the txname convention <TxName>. If a list of txnames <TxName> is given, only the corresponding topologies <topology> will be considered. For instance, setting txnames = T2 will only consider experimental results <ExpResult> for pp → q̃ + q̃ → (jet + χ̃₁⁰) + (jet + χ̃₁⁰) and the output <smodelsOutput> will only contain constraints for this topology. A list of all topologies <topology> and their corresponding txnames <TxName> can be found here. Wildcards (*, ?, [<list-of-or'ed-letters>]) are expanded in the same way the shell does wildcard expansion for file names. So, for example, txnames = T[12]bb* picks all txnames beginning with T1 or T2 and containing bb, which at the time of writing were: T1bbbb, T1bbbt, T1bbqq, T1bbtt, T2bb, T2bbWW, T2bbWWoff.
  • dataselector (list of datasets): set to ['all'] to use all available data sets <DataSet>. If dataselector = upperLimit (efficiencyMap), only UL-type results <ULtype> (EM-type results <EMtype>) will be used. Furthermore, if a list of signal regions (data sets <DataSet>) is given, only the experimental results <ExpResult> containing these data sets will be used. For instance, if dataselector = SRA mCT150,SRA mCT200, only these signal regions will be used. Wildcards (*, ?, [<list-of-or'ed-letters>]) are expanded in the same way the shell does wildcard expansion for file names. Wildcard examples are given above.
  • dataTypes: dataType of the analysis (all, efficiencyMap or upperLimit). Can be wildcarded with the usual shell wildcards: *, ?, [<list-of-or'ed-letters>]. Wildcard examples are given above.
  • printer: main options for the output <smodelsOutput> format
  • outputType (list of outputs): use to list all the output formats to be generated. Available output formats are: summary, stdout, log, python, xml, slha.
  • stdout-printer: options for the stdout or log printer
  • printDatabase (True/False): set to True to print the list of selected experimental results <ExpResult> to stdout.
  • addAnaInfo (True/False): set to True to include detailed information about the txnames <TxName> tested by each experimental result <ExpResult>. Only used if printDatabase=True.
  • printDecomp (True/False): set to True to print basic information from the decomposition <Decomposition> (topologies <topology>, total weights, ...).
  • addElementInfo (True/False): set to True to include detailed information about the elements <element> generated by the decomposition <Decomposition>. Only used if printDecomp=True.
  • printExtendedResults (True/False): set to True to print extended information about the theory predictions <TheoryPredictions>, including the PIDs of the particles contributing to the predicted cross section, their masses and the expected upper limit (if available).
  • addCoverageID (True/False): set to True to print the list of element IDs contributing to each missing topology (see coverage <topCoverage>). Only used if testCoverage = True. This option should be used along with addElementInfo = True so the user can precisely identify which elements were classified as missing.
  • summary-printer: options for the summary printer
  • expandedSummary (True/False): set True to include in the summary output all applicable experimental results <ExpResult>, False for only the strongest one.
  • python-printer: options for the Python printer
  • addElementList (True/False): set True to include in the Python output all information about all elements <element> generated in the decomposition <Decomposition>. If set to True the output file can be quite large.
  • addTxWeights (True/False): set True to print the contribution from individual topologies to each theory prediction. Available v1.1.3 onwards.
  • xml-printer: options for the xml printer
  • addElementList (True/False): set True to include in the xml output all information about all elements <element> generated in the decomposition <Decomposition>. If set to True the output file can be quite large.
  • addTxWeights (True/False): set True to print the contribution from individual topologies to each theory prediction. Available v1.1.3 onwards.
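
For orientation, an illustrative excerpt of a parameters file is sketched below. The section and option names follow the list above; the values are examples only, and the parameters.ini shipped with SModelS should be consulted for the complete, up-to-date template:

    [options]
    checkInput = True
    doInvisible = True
    doCompress = True
    computeStatistics = True
    testCoverage = True
    combineSRs = False

    [particles]
    model = share.models.mssm

    [parameters]
    sigmacut = 0.03
    minmassgap = 5.
    maxcond = 0.2
    ncpus = 1

    [database]
    path = ./smodels-database
    analyses = all
    txnames = all
    dataselector = all
    dataTypes = all

    [printer]
    outputType = summary, python

    [stdout-printer]
    printDatabase = False
    addAnaInfo = False
    printDecomp = False
    addElementInfo = False
    printExtendedResults = False
    addCoverageID = False

    [summary-printer]
    expandedSummary = True

    [python-printer]
    addElementList = False
    addTxWeights = False

    [xml-printer]
    addElementList = False
    addTxWeights = False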

The Output

The results of runSModelS.py <runSModelS> are printed to the format(s) specified by the outputType in the parameters file <parameterFile>. The following formats are available:

  • a human-readable screen output (stdout) <screenOut> or log output <logOut>. These are intended to provide detailed information about the database <Database>, the decomposition <Decomposition>, the theory predictions <TheoryPredictions> and the missing topologies <topCoverage>. The output complexity can be controlled through several options in the parameters file <parameterFile>. Due to its size, this output is not suitable for storing the results from a large scan, being more appropriate for a single file input.
  • a human-readable text file output containing a summary of the output <fileOut>. This format contains the main SModelS results: the theory predictions <TheoryPredictions> and the missing topologies <topCoverage>. It can be used for a large scan, since the output can be made quite compact, using the options in the parameters file <parameterFile>.
  • a python dictionary <pyOut> printed to a file, containing information about the decomposition <Decomposition>, the theory predictions <TheoryPredictions> and the missing topologies <topCoverage>. The output can be quite long if all options in the parameters file <parameterFile> are set to True. However, this output can easily be imported into a Python environment, making it straightforward to access the desired information; a minimal loading sketch is shown after this list. For users familiar with the Python language this is the recommended format.
  • an xml file <pyOut> containing information about the decomposition <Decomposition>, the theory predictions <TheoryPredictions> and the missing topologies <topCoverage>. The output can be quite long if all options are set to True. Due to its broad usage, the xml output can easily be converted to the user's preferred format.
  • a SLHA file <slhaOut> containing information about the theory predictions <TheoryPredictions> and the missing topologies <topCoverage>. The output follows a SLHA-type format and contains a summary of the most constraining results and the missed topologies.
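
As a minimal illustration of re-using the python output, the file written by the python printer can be loaded back into a Python session. The output file name and the name of the dictionary defined inside it are assumptions here and should be checked against your own output:

    # Hedged sketch: read the python-printer output back into a Python session.
    import runpy
    data = runpy.run_path("./simplyGluino.slha.py")   # hypothetical output file name
    smodelsOutput = data.get("smodelsOutput")         # assumed name of the dictionary in the file
    print(smodelsOutput.keys() if smodelsOutput else data.keys())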

A detailed explanation of the information contained in each type of output is given in SModelS Output <outputDescription>.

Example.py

Although runSModelS.py <runSModelS> provides the main SModelS features with a command line interface, users more familiar with Python and the SModelS language may prefer to write their own main program. A simple example code for this purpose is provided in examples/Example.py. Below we go step-by-step through this example code:

  • Import the SModelS modules and methods. If the example code file is not located in the smodels installation folder, simply add "sys.path.append(<smodels installation path>)" before importing smodels. Set SModelS verbosity level.

/examples/Example.py
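
A minimal sketch of this step is shown below (a sketch only, not the full Example.py; the module paths follow the SModelS v1.1/v1.2 layout):

    import sys
    # sys.path.append("<smodels installation path>")  # only needed if this script lives outside the installation folder
    from smodels.tools.physicsUnits import fb, GeV
    from smodels.theory import slhaDecomposer
    from smodels.theory.theoryPrediction import theoryPredictionsFor
    from smodels.experiment.databaseObj import Database
    from smodels.tools import coverage
    from smodels.tools.smodelsLogging import setLogLevel

    # Set the SModelS verbosity level
    setLogLevel("info")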

  • Set the path to the database. Specify which database <databaseStruct> to use: it can be the path to the smodels-database folder, the path to a pickle file <databasePickle> or (starting with v1.1.3) a URL.

/examples/Example.py
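
For instance (the path below is only an illustration; a pickle file or, from v1.1.3 on, a URL would work as well):

    # Point SModelS to the database; the argument can be the smodels-database
    # folder, a pickle file, or (from v1.1.3) a URL.
    database = Database("./smodels-database")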

  • Define the input model. By default SModelS assumes the MSSM particle content. For using SModelS with a different particle content, the user must define the new particle content and set modelFile to the path of the model file (see particles:model in Parameter File <parameterFile>).

/examples/Example.py
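
A hedged sketch, assuming the v1.2 mechanism of selecting the model file through the runtime module:

    # Assumption: v1.2 layout, where the model file is selected via smodels.tools.runtime;
    # earlier versions define the particle content differently (see adding new particles).
    from smodels.tools import runtime
    runtime.modelFile = "smodels.share.models.mssm"   # or the path to your own model.py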

  • Path to the input file. Specify the location of the input file. It must be a SLHA or LHE file (see Basic Input <BasicInput>).

/examples/Example.py
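
For example (the file name is illustrative; any SLHA or LHE file works):

    # Path to the input file (SLHA assumed in the rest of this walkthrough)
    slhafile = "inputFiles/slha/lightEWinos.slha"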

  • Set main options for decomposition <Decomposition>. Specify the values of sigmacut <minweight> and minmassgap <massComp>:

/examples/Example.py
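
For example (the numerical values are illustrative choices, not recommendations):

    # Minimum element weight and maximum mass gap for mass compression
    sigmacut = 0.01 * fb
    mingap = 5. * GeV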

  • Decompose <Decomposition> the input model. Depending on the input format, choose either the slhaDecomposer.decompose or the lheDecomposer.decompose method. The doCompress and doInvisible options turn the mass compression <massComp> and invisible compression <invComp> on/off.

/examples/Example.py

/examples/Example.py

output:

/images/ExampleOutput.txt
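
A hedged sketch of the SLHA case (the decompose signature follows the v1.1/v1.2 API; for LHE input one would call lheDecomposer.decompose analogously):

    # Decompose the input model into simplified-model topologies
    toplist = slhaDecomposer.decompose(slhafile, sigmacut,
                                       doCompress=True, doInvisible=True,
                                       minmassgap=mingap)
    print("Total number of topologies: %i" % len(toplist))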

  • Load the experimental results <ExpResult> to be used to constrain the input model. Here, all results are used:

/examples/Example.py

Alternatively, the getExpResults method can take as arguments specific results to be loaded.
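
In its simplest form this step is a single call (the keyword arguments for restricting the selection are not spelled out here):

    # Load all experimental results from the database
    listOfExpRes = database.getExpResults()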

  • Print basic information about the results loaded. Below we show how to count the number of UL-type results <ULtype> and EM-type results <EMtype> loaded:

/examples/Example.py

output:

/images/ExampleOutput.txt
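
A hedged sketch (the dataType lookup via getValuesFor is an assumption based on the v1.1/v1.2 ExpResult interface):

    # Count UL-type and EM-type results among the loaded experimental results
    nUL, nEM = 0, 0
    for expRes in listOfExpRes:
        dataType = expRes.getValuesFor("dataType")[0]
        if dataType == "upperLimit":
            nUL += 1
        elif dataType == "efficiencyMap":
            nEM += 1
    print("%i UL-type and %i EM-type results loaded" % (nUL, nEM))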

  • Compute the theory predictions <TheoryPredictions> for each experimental result <ExpResult>. The output is a list of theory prediction objects (for each experimental result <ExpResult>):

/examples/Example.py
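
A hedged sketch (theoryPredictionsFor is called with its default options here):

    # Compute the theory predictions for each experimental result
    allPredictions = []
    for expResult in listOfExpRes:
        predictions = theoryPredictionsFor(expResult, toplist)
        if predictions:   # skip results without any prediction
            allPredictions.append(predictions)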

  • Print the results. For each experimental result <ExpResult>, loop over the corresponding theory predictions <TheoryPredictions> and print the relevant information:

/examples/Example.py

output:

/images/ExampleOutput.txt
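
A hedged sketch of the loop (the attribute names follow the v1.1/v1.2 TheoryPrediction object and should be checked against the installed version):

    # Print basic information for each theory prediction
    for predictions in allPredictions:
        for pred in predictions:
            print("Analysis:", pred.expResult.globalInfo.id)
            print("Txnames:", [str(tx) for tx in pred.txnames])
            print("Theory prediction:", pred.xsection.value)
            print("Condition violation:", pred.conditions)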

  • Get the corresponding upper limit. This value can be compared to the theory prediction <TheoryPredictions> to decide whether a model is excluded or not:

/examples/Example.py

output:

/images/ExampleOutput.txt
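
A hedged sketch, assuming the getUpperLimit method of the v1.1.3/v1.2 API (to be placed inside the loop of the previous step):

    # Observed upper limit for this theory prediction
    ul = pred.getUpperLimit()
    print("Upper limit:", ul)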

  • Print the r-value, i.e. the ratio theory prediction <TheoryPredictions>/upper limit. A value of r ≥ 1 means that an experimental result excludes the input model. For EM-type results <EMtype> also compute the χ2 and likelihood <likelihoodCalc>. Determine the most constraining result:

/examples/Example.py

output:

/images/ExampleOutput.txt
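
A hedged sketch combining the previous steps (the dataset/dataInfo attribute names and computeStatistics are assumptions based on the v1.1/v1.2 objects):

    # r-value, chi2/likelihood for EM-type results, and the most constraining result
    rmax, bestResult = 0.0, None
    for predictions in allPredictions:
        for pred in predictions:
            r = pred.xsection.value / pred.getUpperLimit()
            if pred.dataset.dataInfo.dataType == "efficiencyMap":
                pred.computeStatistics()
                print("chi2 =", pred.chi2, ", likelihood =", pred.likelihood)
            if r > rmax:
                rmax, bestResult = r, pred.expResult.globalInfo.id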

  • Print the most constraining experimental result. Using the largest r-value, determine whether the model is excluded by the selected experimental results <ExpResult>:

/examples/Example.py

output:

/images/ExampleOutput.txt
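
A hedged sketch of the final report (continuing from the previous step):

    # Report the most constraining result and the exclusion status
    if rmax >= 1.0:
        print("The model is likely excluded by %s (r = %.2f)" % (bestResult, rmax))
    else:
        print("The model is not excluded by the selected results (highest r = %.2f)" % rmax)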

  • Identify missing topologies. Using the output from decomposition, identify the missing topologies <topCoverage> and print some basic information:

/examples/Example.py

output:

/images/ExampleOutput.txt
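
A hedged sketch (the Uncovered class and getMissingXsec method are assumptions based on the v1.1/v1.2 coverage module):

    # Identify topologies that are not covered by the database
    uncovered = coverage.Uncovered(toplist)
    print("Total missing-topology cross section (fb):", uncovered.getMissingXsec())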

It is worth noting that SModelS does not include any statistical treatment of the results, for instance correction factors for the "look elsewhere effect". For this reason, the results are reported as "likely excluded" in the output.

Notes:
  • For an SLHA input file <BasicInput>, the decays of final states <final statesEven> (or Z2-even particles such as the Higgs, W,...) are always ignored during the decomposition. Furthermore, if there are two cross sections at different calculation order (say LO and NLO) for the same process, only the highest order is used.
  • The list of elements <element> can be extremely long. Try setting addElementInfo = False and/or printDecomp = False to obtain a smaller output.
  • A word of caution is in order regarding naive use of the highest r-value reported by SModelS, as this does not necessarily come from the most sensitive analysis. For a rigorous statistical interpretation, one should use the r-value of the result with the highest expected r (rexp). Unfortunately, for UL-type results <ULtype>, the expected limits are often not available; rexp is then reported as N/A in the SModelS output.

  [1] We note that SLHA files including decay tables and cross sections, together with the corresponding model.py, can conveniently be generated via the SModelS-micrOMEGAS interface, see arXiv:1606.03834.