
Get your Physics started with Ant



Introduction

First a word of caution: Ant is not the standard ROOT C macro kind of framework you might be familiar with, but rather makes use of the object-oriented features and coding patterns C++ offers. You can still write your own Physics class in the old-fashioned C macro style. But if you want to dig a bit deeper to understand the methods you're using or how the heck Ant works, be aware that you will encounter C++ code utilizing the std library, lots of new concepts introduced with C++11, as well as (depending on how far you go) heavy use of templates and generics.

Main Layout

If you haven't been deterred from dealing with Ant yet, let's start with the main structure a typical Physics class provides you with. The base class every Physics class derives from is defined in analysis/physics/Physics.h. It provides a constructor and destructor, as well as some virtual void methods you're supposed to implement in your own derived Physics class. The Finish() and ShowResult() methods are optional and you do not have to implement them if you don't need them. ProcessEvent() on the other hand is mandatory. This is the main method you work in; it is called for every single event reconstructed from the files provided. More on this later.
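
To give you a rough idea, the relevant part of the base class looks more or less like this (a simplified sketch, not the exact code from Physics.h):

class Physics {
public:
    Physics(const std::string& name, OptionsPtr opts);
    virtual ~Physics();

    // mandatory: called once for every reconstructed event
    virtual void ProcessEvent(const TEvent& event, manager_t& manager) = 0;
    // optional: called after all events have been processed
    virtual void Finish();
    // optional: called at the very end to display results
    virtual void ShowResult();
};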

Creating your own Physics Class

In order to create your own Physics class, you should first choose an appropriate name which ideally conveys briefly what you're going to do. To get an idea of which name to choose, you can have a look at the various files living in analysis/physics and its subfolders. If you just want to play around or do not know where to put it, you can also just create a folder with your name in scratch and place everything in there.

Once you've chosen a good name, e.g. Tutorial, you start writing your Physics class by creating the needed header and C++ files, in this case Tutorial.h and Tutorial.cc. We start with the header file, Tutorial.h. It begins with #pragma once, a shorter alternative to classic #include guards where you don't have to write anything like #ifndef TUTORIAL_H and so on. You also #include "physics/Physics.h", the base physics class. In Ant, namespaces are used quite often to group classes in a (more or less) logical way. The physics classes live in ant::analysis::physics, hence we open these namespaces:

namespace ant {
namespace analysis {
namespace physics {

Inside these namespaces we derive our own class from the base class included earlier. The minimal version looks like this:

class Tutorial : public Physics {
public:
    Tutorial(const std::string& name, OptionsPtr opts);
    virtual void ProcessEvent(const TEvent& event, manager_t& manager) override;
private:
    TH1D* h_nClusters;
};

In the first line we derive our own class from Physics. Next, in the public: section, go the constructor and the only method provided by Physics we need to implement to run our own physics, ProcessEvent(). We mark this method with the keyword override at the end to explicitly tell the compiler that we override this virtual method. The other methods mentioned earlier are not strictly necessary; as long as we do not declare them, the implementations of the base class are used. At the end a 1D histogram is declared to indicate where you might want to put such declarations.

Last but not least we close the nested namespaces we opened previously by adding }}} at the end. That's all we need in our header file to start with our own Physics class.
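
Putting all the pieces together, the complete Tutorial.h looks like this:

#pragma once

#include "physics/Physics.h"

namespace ant {
namespace analysis {
namespace physics {

class Tutorial : public Physics {
public:
    Tutorial(const std::string& name, OptionsPtr opts);
    virtual void ProcessEvent(const TEvent& event, manager_t& manager) override;
private:
    TH1D* h_nClusters;
};

}}}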

How to Physics

Now that we created our header file Tutorial.h, we can start writing our Tutorial.cc file which contains all the interesting stuff we want to do with the events provided by Ant. A minimal set of instructions our Tutorial.cc needs to contain is the following:

#include "Tutorial.h"

using namespace std;
using namespace ant;
using namespace ant::analysis;
using namespace ant::analysis::physics;

Tutorial::Tutorial(const string& name, OptionsPtr opts) :
    Physics(name, opts)
{
}

void Tutorial::ProcessEvent(const TEvent& event, manager_t&)
{
    // the physics will happen here
}

AUTO_REGISTER_PHYSICS(Tutorial)

A short breakdown of what is done: First we include the header file we created. Then we tell the compiler which namespaces we want to use. That's not strictly necessary, but it turns out to be pretty convenient if you can just write cout << "some output" << endl; instead of std::cout << "some output" << std::endl; or call classes and/or methods directly by their name like run_method() or SomePhysicsClass() instead of prepending ant::analysis::physics all the time. Programmers and physicists are lazy. Once this is done, we implement the two methods we declared in the header file: the constructor and ProcessEvent(). In the constructor we do not do anything so far, we just pass the name and the options to the base class which handles some stuff for us, like setting the name of our class and checking for some options. The way the information is passed to the base class by calling its constructor after the colon of our own constructor is called a member initializer list. It is the proper way of passing the arguments to the base class (writing something like Tutorial::Tutorial(const string& name, OptionsPtr opts) { Physics(name, opts); } would not initialize the base class with name and opts at all) and is used nearly everywhere within Ant.

Before we get to the interesting part, let's quickly cover the last line, AUTO_REGISTER_PHYSICS(Tutorial). This is a macro we call at the end of the Physics class which registers the class in Ant. This means you can invoke your physics class by calling Ant -p Tutorial without further ado, and it is listed in the Ant help dialogue (Ant -h).

Now to the most interesting part: ProcessEvent(). Ant basically loops over all events in the provided file, reconstructs the raw information, applies calibration constants and corrections, and fills the information into the TEvent structure. This is basically just a container to group the TEventData structures which hold all the information you want to access. There's one called MCTrue which contains the generated information from your event generator, i.e. the event information before it is passed to a2geant for the detector simulation. The other and probably more interesting one is called Reconstructed and contains all information Ant could reconstruct from the raw information, either from a raw data file or from the simulated output of Geant.

To get an idea of what is stored and how, you might want to look at the header file of TEventData. A nice way to navigate the files and learn more about the structures or objects you encounter is to use QtCreator (the suggested way to use Ant, explained in How to Install) and jump to the corresponding source files to see what they contain. In there you find a thing called TID. This is the event identifier and should be unique for all events you see, a neat way to identify single events in Ant if you need to. You find a vector DetectorReadHits which stores the raw hit information read from file as well as the decoded and calibrated values for each detector hit. TaggerHits stores all the individual Tagger hits recorded during an event. The Trigger conditions can be accessed, as well as all reconstructed Clusters in all detectors and the Candidates, i.e. cluster information already matched across the different detectors, grouping for example a CB cluster with a matching PID hit. Click through the different classes to learn more about what's stored and how to access it.
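
As a quick sketch (inside ProcessEvent(), using only the members mentioned above), accessing some of this information could look like this:

const auto& recon = event.Reconstructed();
const TID& id = recon.ID;                                // unique event identifier
const auto nReadHits   = recon.DetectorReadHits.size();  // raw/calibrated detector hits
const auto nTaggerHits = recon.TaggerHits.size();        // Tagger hits of this event
const auto nClusters   = recon.Clusters.size();          // reconstructed clusters
const auto nCandidates = recon.Candidates.size();        // matched candidates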

CMake

The build system used for Ant is CMake. In most of the directories you encounter files called CMakeLists.txt which contain the information on how to compile the code. If you created a new Physics class, you want to include it in the build system so it is automatically compiled together with Ant. Therefore you need to modify the corresponding CMakeLists.txt in analysis/physics or the subdirectory you placed your files in. Usually it should be sufficient to append a line like add_ant_physics(Tutorial.cc) to the existing ones and rerun cmake. Have a look at the existing files to see how it's done for other classes.

Event Loop

Since we've covered the basics now, we'll have a look at how to actually loop over the events and retrieve information you might be interested in. One of the core parts of your event loop will probably be a loop over all Tagger hits. Tagger hits are stored in the Reconstructed part of the event, in the TaggerHits vector. Let's say you first want to get an idea of what the Tagger timing distribution looks like. For this we need a histogram to store the timing information. We declare it in the header file in the private: section as mentioned earlier. Let's call it TH1D* h_TaggerTime;. Because declaring histograms in ROOT always looks similar, and because you might want to store them in a file, maybe in different directories, some of the boring part is abstracted away in a helper class called HistogramFactory. If you have a look at the base physics class, you find it declared with the name HistFac. This histogram factory manages all your histograms and takes care of writing them to the output file you specified when calling Ant. We have to initialize the declared histogram pointer before the events are processed, so we go to the still empty constructor and fill it:

Tutorial::Tutorial(const string& name, OptionsPtr opts) :
    Physics(name, opts)
{
    const BinSettings tagger_time_bins(2000, -200, 200);

    h_TaggerTime = HistFac.makeTH1D("Tagger Time",     // title
                                    "t [ns]", "#",     // xlabel, ylabel
                                    tagger_time_bins,  // our binning
                                    "h_TaggerTime"     // ROOT object name, auto-generated if omitted
                                    );
}

We prepared the histogram now, so let's get back to our ProcessEvent() and loop over all Tagger hits:

void Tutorial::ProcessEvent(const TEvent& event, manager_t&)
{
    for (const auto& taggerhit : event.Reconstructed().TaggerHits) {
        h_TaggerTime->Fill(taggerhit.Time);
    }
}

That's all it takes to loop over the reconstructed Tagger hits and fill a histogram with the timing distribution.

Remember the pointer we declared in the beginning, the histogram h_nClusters? Why not fill this one as well. Since the pointer for the histogram is already declared, we initialize it in the constructor, just like the Tagger timing histogram:

h_nClusters = HistFac.makeTH1D("Number of Clusters", "nClusters", "#", BinSettings(15), "h_nClusters");

If you call BinSettings with only one argument, it's basically the same as calling BinSettings(15, 0, 15), just a neat way to write shorter code if you want to count something. As the number of clusters is independent of the Tagger hits, we fill the histogram outside the Tagger loop we implemented before. To fill in the number of clusters, add the following line before or after the loop:

h_nClusters->Fill(event.Reconstructed().Clusters.size());
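
Putting both fills together, our ProcessEvent() now looks like this:

void Tutorial::ProcessEvent(const TEvent& event, manager_t&)
{
    for (const auto& taggerhit : event.Reconstructed().TaggerHits) {
        h_TaggerTime->Fill(taggerhit.Time);
    }

    // the number of clusters is filled once per event, outside the Tagger loop
    h_nClusters->Fill(event.Reconstructed().Clusters.size());
}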

Show histograms after finishing a file

What Ant does right now, if you run the Tutorial Physics class as described so far, is to loop over all events, fill two histograms, write the histograms to a file once finished, and exit into an interactive ROOT shell where you can browse the file with a TBrowser. If Ant should show you the filled histograms once it's finished, the method ShowResult() is what you're looking for. To use it, we first indicate in our header file, right after our ProcessEvent() declaration, that we override the ShowResult() method:

virtual void ShowResult() override;

In the Tutorial.cc file, we can add the newly declared method and use an Ant class called canvas (a wrapper around TCanvas) to show the two histograms we implemented:

void Tutorial::ShowResult()
{
    ant::canvas(GetName()+": Basic plots")
            << h_TaggerTime
            << h_nClusters
            << endc; // actually draws the canvas
}

If we run Ant again after adding this method and recompiling, a TCanvas with the name "Tutorial: Basic plots" will show up after the file is processed, displaying the two histograms.

Take advantage of more of Ant's built-in features

One of the things you usually have to do very often is what in A2 terms is called prompt random subtraction, a sideband subtraction for the Tagger time. We created a plot with the Tagger time earlier. If you have a look at it (and the data is calibrated), you should see a peak at 0 on top of a broad and uniform background. The hits in the peak are the prompt hits in the Tagger, while the flat background consists of random hits in the Tagger you want to get rid of. To do this, we look at a time window around the prompt peak and usually at two broad windows left and right of the peak to estimate the contribution of random hits, called random windows. In your analysis you then typically fill two histograms, one for the prompt hits and one for the random ones, and subtract the random contribution after scaling it based on your prompt random ratio.

In Ant there's something called PromptRandomHist which lives in the analysis/plot subdirectory. In there the class Switch is defined in the ant::analysis::PromptRandom namespace. We can use this class to automate the prompt random subtraction. You tell the class what your prompt and random windows are, and based on this you get the ratio and can check whether a Tagger hit is inside one of those windows. To make use of it, you have to include the class in your header file,

#include "plot/PromptRandomHist.h"

and later in the private: section of your class you create an instance of this class to use it later:

PromptRandom::Switch promptrandom;

You only need to tell it the size of your windows once, so we do this in the constructor in the .cc file:

    promptrandom.AddPromptRange({ -7,   7}); // in nanoseconds
    promptrandom.AddRandomRange({-50, -10});
    promptrandom.AddRandomRange({ 10,  50});

Before we use this in our event loop, let's first have a look at another useful class called TriggerSimulation in the analysis/utils folder. This class is used to simulate the trigger conditions (mainly for MC), but it also implements some useful methods we can use on data to improve our timing. If we just use the Tagger time as it is, the resolution is not bad, but not perfect either; there's some room for improvement. One thing you can do is to use a relative time between the Tagger hits and the CB. For this you first need a CB time, which can be obtained by summing up all CB cluster times weighted by their cluster energy and normalizing to the total energy:

t_CB = ( Σ_i E_i · t_i ) / ( Σ_i E_i )

This CB reference time can then be subtracted from the Tagger time. This narrows the peak and makes it possible to set a tighter prompt window.

To use this in your Physics class, again first include the class in your header file

#include "utils/TriggerSimulation.h"

and declare a variable in your private: section to use it later:

utils::TriggerSimulation triggersimu;

To run this class and obtain the needed values for each event, we call it at the beginning of our ProcessEvent() method:

void Tutorial::ProcessEvent(const TEvent& event, manager_t&)
{
    triggersimu.ProcessEvent(event);
    ...

This way the CB reference timing is calculated and can be used later.

Now let's make use of the reference timing and the prompt random windows defined earlier. This happens in our loop over the Tagger hits. We extend it to look like the following:

    for (const auto& taggerhit : event.Reconstructed().TaggerHits) {
        promptrandom.SetTaggerTime(triggersimu.GetCorrectedTaggerTime(taggerhit));

        if (promptrandom.State() == PromptRandom::Case::Outside)
            continue;

        const double weight = promptrandom.FillWeight();
        h_TaggerTime->Fill(taggerhit.Time, weight);
    }

What's new? First we pass a time to the promptrandom switch we declared in the header file. The time we pass to it is not the bare Tagger time, but the corrected Tagger time as discussed above. Once the time is given to the PromptRandom::Switch, you can check whether it was inside or outside one of the windows by checking its State(): promptrandom.State(). There are three different states in total: Prompt, Random, and Outside. Outside means the time is neither in the prompt nor in the random windows. In this case we're not interested in this Tagger hit, so we just continue with the next one.

The last thing is the weight variable. It can be used to create prompt random subtracted histograms directly, without declaring several histograms: you just pass the weight to the Fill() method of a histogram. The weight is positive for prompt events and negative for random events, so the subtraction takes place on the fly while filling the histogram. The exact value depends on the ratio of the windows you defined in the constructor. It's as easy as that. Applying a prompt random subtraction to the Tagger time itself doesn't really make sense; it's just done here for educational purposes to show how prompt random subtracted histograms are created.

The Finish() method

One method which has only been mentioned briefly so far is the Finish() method. Let's say you create some histograms and accumulate data in your ProcessEvent(), but before looking at the resulting histograms you first want to work on some of them, apply some fancy mathematics, or modify the data. Then the Finish() method is exactly what you're looking for. It is called right between finishing the processing of all events and showing the results. If you want to implement something there, you treat it exactly like the ShowResult() method. To make your Physics class use it, you first have to declare it in the header file:

virtual void Finish() override;

Right next to the ShowResult() declaration. Don't forget to mark it as override since you want to provide your own implementation.

The only thing you have to do now is to implement the routines you want to call in the .cc file:

void Tutorial::Finish()
{
    cout << "Finished processing events, total #events: " << h_nClusters->GetEntries() << endl;
    cout << "Integrated amount of found clusters in total: " << h_nClusters->Integral() << endl;
}

How do I know if my event is real data or just MC?

In general, from the Physics class point of view, there's not really a difference between measured data and generated MC data. At least for MC data which was run through Geant, all the same structures in event.Reconstructed() are filled. This means you don't have to differentiate between MC and data. Of course there are differences, like timing peaks often being exactly at zero, only one prompt Tagger hit per event, the trigger needing to be simulated if you need it, and so on. But sometimes you might want to do certain things only when processing MC events, or only on real data. The best method in Ant to figure out whether the current event is MC or data is the following function call:

event.Reconstructed().ID.isSet(TID::Flags_t::MC);

This might look a bit complicated. But for you this just means you can do something in your ProcessEvent() method like the following:

const auto& data = event.Reconstructed();
const bool MC = data.ID.isSet(TID::Flags_t::MC);

This stores a reference to the reconstructed data structure of your TEvent and a boolean called MC which is true in case of MC data and false for real data. Then you can do something like

if (!MC) {
    const auto nCandidates = data.Candidates.size();
}

to get, for example, the number of reconstructed candidates (matched clusters) if you're not looking at MC files. The whole if condition doesn't make much sense on its own, but you get the idea of how to check for certain things only on MC or only on data.

Writing ROOT Trees the Ant Way

Working with ROOT's TTree class is more complicated than necessary and error-prone. That's why Ant provides a wrapper class for TTrees called WrapTTree. Using WrapTTree you no longer need to work with references passed to TTree::Branch; you can just define the branches, even for several trees, in your header file. And the best part: you can simply include the header file where you defined your branches and work with the created tree, without declaring variables, setting branch addresses, updating references and such.

So how do we use it? Again, we first need to include the header file for WrapTTree to our Physics header file:

#include "base/WrapTTree.h"

Then in the public: part of the class definition in the header we create a struct for the tree we want to create by deriving it from the WrapTTree class:

class Tutorial : public Physics {
public:

    struct tree_t : WrapTTree {
        ADD_BRANCH_T(double,   TaggW)
        ADD_BRANCH_T(unsigned, nClusters)
    };

ADD_BRANCH_T is a macro which registers the branch for you; just specify the data type you want to store, together with a name for the branch. You can store more complex objects as well, like TLorentzVector, or even containers with non-default constructors, like ADD_BRANCH_T(std::vector<double>, Numbers, 3) which defines a vector of doubles with a size of 3. The structure for the tree is now created, but ideally we want to access it everywhere in our class, so we additionally declare it in the private: section of our header file along with the histogram declarations:

tree_t t;
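
As an aside, a tree holding the more complex branch types mentioned above could be sketched like this (the TLorentzVector and Numbers branches are purely hypothetical and only meant for illustration):

struct extended_tree_t : WrapTTree {
    ADD_BRANCH_T(double,              TaggW)
    ADD_BRANCH_T(unsigned,            nClusters)
    ADD_BRANCH_T(TLorentzVector,      particle)      // a more complex ROOT object
    ADD_BRANCH_T(std::vector<double>, Numbers, 3)    // container constructed with size 3
};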

That's all we need to do in the header file. In the .cc file we create the tree. To do this we go to the constructor and use an old acquaintance from above, the HistFac:

t.CreateBranches(HistFac.makeTTree("tree"));

The function call we're performing is mostly self-explanatory: We call the method CreateBranches() from WrapTTree which expects a TTree object to know where to create the branches. We let the HistogramFactory, called HistFac within Physics classes, handle this by calling its method makeTTree(). This returns a TTree with the name tree in this case, which means the file created later will contain a TTree named tree next to the histograms we created using HistFac.

Filling the tree is now an easy task. We declared the tree with the name t, so if we want to put a value into the tree for nClusters, for example, we can do this by writing t.nClusters = 3. Most likely we will fill information into the tree for every Tagger hit, including the weight discussed earlier which is derived from the ratio between the prompt and random windows. Let's assume we want to extend our example from before by writing the weight together with the number of clusters to the tree (as defined in the header) and draw this in the end. All you have to do is extend the for loop over the Tagger hits at the end with the following lines:

        t.TaggW = promptrandom.FillWeight();
        t.nClusters = event.Reconstructed().Clusters.size();
        t.Tree->Fill();

We store the weighting used for our prompt random subtraction together with the number of clusters in the tree for each Tagger hit. Important is the t.Tree->Fill(); which writes all currently assigned values of the different branches to the tree. Without calling Fill(), nothing you've assigned to the branches since the last call of Fill() will be stored. Very important in this regard is the fact that you can still access the current values, e.g. via t.TaggW, without calling promptrandom.FillWeight() again. This is true for all branches and their assigned values: the call to Fill() just writes the values to the tree, it doesn't reset them. This can be convenient if you need those values several times at a later stage in your analysis. But it can also be a pitfall if you're not expecting this behaviour and call t.Tree->Fill() several times without changing some of the values, which stores the old values again and again for every tree entry.
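
To illustrate this, a small sketch reusing the branch values right after Fill(), assuming a prompt random subtracted histogram h_nClusters_pr (as drawn in ShowResult() below) has been created via HistFac like the other histograms:

        t.TaggW = promptrandom.FillWeight();
        t.nClusters = event.Reconstructed().Clusters.size();
        t.Tree->Fill();
        // the branch values survive the Fill() call, so they can be reused directly,
        // e.g. to fill a prompt random subtracted version of the cluster histogram
        h_nClusters_pr->Fill(t.nClusters, t.TaggW);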

After pointing out how to properly store values in the tree, let's cover how to draw a branch. Remember ShowResult() and the ant::canvas() wrapper? There we used the streaming operator << to pass histograms to the canvas to be drawn. We can use this for tree branches as well with something called TTree_drawable():

void Tutorial::ShowResult()
{
    ant::canvas(GetName()+": Basic plots")
            << h_nClusters
            << h_nClusters_pr
            << TTree_drawable(t.Tree, "nClusters >> (20,0,20)", "TaggW")
            << endc;
}

You tell this method which tree to use (t.Tree in our case), the name of the branch together with the binning you want (20 bins from 0 to 20, the usual ROOT syntax), and an optional weighting (the branch TaggW is used here since we used it to store the prompt random weight, so the prompt random subtraction is again performed automatically for you).

At this stage we've arrived at a point where everything you need to perform a simple analysis, including writing ROOT trees, has been covered and you can start programming your own analysis. Or you can continue reading this guide to get more in-depth information on several other aspects of Ant and how to simplify certain parts of your analysis. Nearly everything code-related that was mentioned so far is part of the Tutorial Physics class in analysis/physics/misc, together with comments for every important part of the code.

Reading the TTree in again using Ant

Ant's WrapTTree does not only simplify writing trees, but reading them as well. We created the TTree using the method CreateBranches(). Similarly, there's a method called LinkBranches() which can be used if you include the header file containing the definition of your tree. This way Ant takes care of setting up the branches and facilitates looping over the entries without handling references or addresses of branches. An example code snippet of how to work with the tree is the following, using WrapTFile, another wrapper for files provided by Ant:

#include "Tutorial.h"
#include "base/WrapTFile.h"

...

WrapTFileInput input("path/to/filename");
tree_t tree;

if (!input.GetObject("tree", tree.Tree))
    throw Exception("Can't find the tree!");

tree.LinkBranches();
const long long int max_entries = tree.Tree->GetEntries();
long long entry = 0;

while (entry < max_entries) {
    tree.Tree->GetEntry(entry++);
    someTH1->Fill(tree.nClusters, tree.TaggW);
}

...

It's worth mentioning here that WrapTTree even supports branches created with "branchname[sizebranch]" via WrapTTree::ROOTArray<T>, which wraps them into a conveniently usable std::vector<T>. Note as well that if you want to access a primitive data type like bool, int, and so on, you can just access the data member directly. In case of more complex objects like TLorentzVector, vector<double> and such, you need to append () to perform a call to this data member. This means for a TLorentzVector branch named "particle" you access it with tree.particle().E() to call the method for getting its energy, or tree.photons().at(1) to get the second entry of a vector branch of some arbitrary type named "photons".

Accessing Particle / Detector / Scaler Information

Now that we've covered the basics, let's have a closer look at the TEventData structure and the information stored in it. As mentioned earlier, basically all the interesting information is stored in a structure called TEvent. Within this structure there are two TEventData structures which store everything from raw detector information to reconstructed cluster hits to matched candidates and particles, differing slightly between MC and data.

Get Detector Information

The most fundamental thing you can access is the raw information named TDetectorReadHit. It stores the detector type (like Tagger, CB, PID), the channel type (Timing, Integral, Raw), and the channel number along with its values, which can be uncalibrated (just the converted value, more useful for calibrations) or calibrated. Even the read RawData is stored. All detector hits associated with an event are stored in a vector of TDetectorReadHits called DetectorReadHits inside the TEventData.

The raw, or already converted and calibrated, values are further processed; in case of calorimeters like the Crystal Ball or TAPS this is done by the clustering algorithm which groups individual channels, or cluster hits named TClusterHit, into a cluster TCluster. This happens by assigning weights to the different hits and then grouping them based on their weight, starting from the highest energetic hit. The TCluster stores information like the time and total energy of the cluster, as well as a list (vector) of all grouped TClusterHits for this cluster named Hits. The reconstructed clusters from the clustering algorithm are stored in Clusters in the TEventData, which is basically a vector of pointers to the clusters (called TClusterList). So to access this information, you can get the number of reconstructed clusters by taking the size of the cluster list:

event.Reconstructed().Clusters.size()

This is done in the Tutorial class. If you want to see the distribution of cluster sizes, but just for clusters in the CB or TAPS, you can do something like this:

for (const auto& cluster : event.Reconstructed().Clusters)
    if (cluster.DetectorType & Detector_t::Any_t::Calo)
        hist->Fill(cluster.Hits.size());

Within the Detector_t structure you can see the various types (Type_t) of detectors you can check for, or their combinations in the nested structure Any_t which is used above.

Access already matched Clusters, aka Candidates

Ant comes with a few methods in its CandidateBuilder class to match the individual reconstructed clusters into candidates. A candidate in Ant is either a cluster on its own, like a calorimeter hit from a neutral particle, or a cluster in the calorimeter with a matching hit in the PID or the TAPS Vetos. A candidate, implemented in the TCandidate struct, holds those clusters we already accessed in the example above, but also offers the information you usually want to work with: CaloEnergy (the energy of the cluster in the CB or TAPS), the cluster size (number of crystals), the detector it was detected in, Theta, Phi, and Time, as well as the matched VetoEnergy. In Ant both the PID and the TAPS Vetos are called Veto on this level, so depending on the detector information stored in the candidate, Veto means either PID or TAPS Veto, just as Calo (calorimeter) means either CB or TAPS.

Candidates are stored in the TEventData in a variable called Candidates, which means you can access them by calling event.Reconstructed().Candidates.

Now that the structure is discussed, let's have a look at some examples on how to use candidates in your analysis. The following code snippet illustrates how to have a look at candidates in CB only and plot some information of those candidates:

const auto& cands = event.Reconstructed().Candidates;
hist_nCands->Fill(cands.size());  // number of candidates, or multiplicity

double EsumCB = 0;  // CB Energy Sum
double EsumPID = 0;  // PID Energy Sum
LorentzVec lv;  // lorentz vector to obtain the invariant mass of all combined neutral candidates
for (const auto& c : cands) {
    if (c.Detector & Detector_t::Type_t::CB)
        EsumCB += c.CaloEnergy;
    if (c.Detector & Detector_t::Any_t::CB_Apparatus) {
        EsumPID += c.VetoEnergy;
        // Fill theta distribution of neutral particles in CB (PID energy less than 0.2 MeV)
        if (c.VetoEnergy < .2) {
            lv += LorentzVec(vec3(c), c.CaloEnergy);
            hist_thetaNeutralCB->Fill(std_ext::radian_to_degree(c.Theta));
        }
    }
}
hist_neutralIM->Fill(lv.M());
hist_EsumCB->Fill(EsumCB);
hist_EsumPID->Fill(EsumPID);

As photons can deposit a small amount of energy in the PID elements as well, the example above uses an arbitrary value of 0.2 MeV as a threshold. The exact value should depend on your calibration and the beam energy.

Another thing used in the example above is LorentzVec, a faster implementation of TLorentzVector without all the ROOT overhead introduced by the extensive class hierarchy, and vec3, a simple and thus fast TVector3 implementation with interfaces to this class. Default-constructing a LorentzVec basically creates the equivalent of a TLorentzVector(0,0,0,0) and can be used the same way, even together with other TLorentzVector objects. The target, for example, can be constructed using

const auto target = LorentzVec({0,0,0}, ParticleTypeDatabase::Proton.Mass());

assuming we use a hydrogen target.
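
Continuing this sketch, the beam photon for the current Tagger hit can be written as a massless LorentzVec along the beam axis (taggerhit.PhotonEnergy is used further below as well), so the total initial state is just the sum of beam and target:

const LorentzVec beam({0, 0, taggerhit.PhotonEnergy}, taggerhit.PhotonEnergy);
const auto initial_state = beam + target;  // four-momentum of beam photon plus proton at rest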

More complicated example

In the previous example we mainly checked energies related to certain detectors. The following example uses a slightly different approach and shows how to check more specific information related to the clusters of candidates, and how to store candidates sorted by the calorimeter they're detected in for later use; a complicated looking lambda expression included:

TCandidatePtrList candidates_CB;
TCandidatePtrList candidates_TAPS;

// this lambda expression is basically a function to calculate an effective cluster radius
// using the following formula:
// effR = \sqrt{\frac{\sum_{i=0}^{n} E_i (\Delta r_i)^2 }{ (\sum_{i=0}^{n} E_i) }}
//   where $\Delta r_i$ is the opening angle (in degrees) between the cluster direction
//   (using the central crystal axis) and the current crystal axis
auto effective_radius = [] (TCandidatePtr cand)
{
    auto cb = ExpConfig::Setup::GetDetector(Detector_t::Type_t::CB);  // Pointer to the CB detector
    TClusterHitList crystals = cand->FindCaloCluster()->Hits;  // CaloCluster in the candidate is the CB, get all hits in the cluster --> crystals
    // this method doesn't make sense for clusters with only one or two contributing crystals
    if (crystals.size() < 3)
        return std_ext::NaN;
    double effR = 0, e = 0;
    // get a (x,y,z) vector to the CB position of the central crystal of this candidate
    vec3 central = cb->GetPosition(cand->FindCaloCluster()->CentralElement);

    // loop over all crystals
    for (const TClusterHit& crystal : crystals) {
        // r is the angle (distance) between the current crystal and the center crystal of the cluster
        const double r = std_ext::radian_to_degree(central.Angle(cb->GetPosition(crystal.Channel)));
        effR += r*r*crystal.Energy;
        e += crystal.Energy;
    }
    effR /= e;  // normalize energy weighted distance of the crystals to the total energy
    return sqrt(effR);  // return the square root
};

for (const auto& c : event.Reconstructed().Candidates.get_iter()) {
    // store CB and TAPS candidates to use them later
    if (c->Detector & Detector_t::Type_t::CB)
        candidates_CB.emplace_back(c);
    else if (c->Detector & Detector_t::Type_t::TAPS)
        candidates_TAPS.emplace_back(c);

    // check some specific cluster information

    // here we first check if a candidate was detected in the CB
    // and then if it has deposited energy in the PID
    // if this is the case, we store the PID channel number and the theta value
    if (c->Detector & Detector_t::Any_t::CB_Apparatus)
        if (c->VetoEnergy)
            hist_theta_vs_PIDchannel->Fill(c->FindVetoCluster()->CentralElement,
                                           std_ext::radian_to_degree(c->Theta));

    // let's try something more elaborate: here we calculate the "effective radius"
    // of a cluster to quantify the shower shape of a cluster (basically the spread
    // of the cluster, if most of its energy is deposited in the center, typically
    // hadronic showers, or if the cluster is bigger with a larger cone, typically
    // electromagnetic showers)
    if (c->Detector & Detector_t::Any_t::CB_Apparatus) {
        // calculate the effective radius and check that it's finite
        double effR = effective_radius(c);
        if (isfinite(effR))
            hist_effR_vs_clusterE->Fill(c->FindCaloCluster()->Energy, effR);
    }
}

// plot the multiplicities in CB and TAPS
h_multCB->Fill(candidates_CB.size());
h_multTAPS->Fill(candidates_TAPS.size());

If you're interested in using this effective radius or a related quantity called the lateral moment, you can get them in a simple way by including ClusterTools:

/* header file */

#include "analysis/utils/ClusterTools.h"

utils::ClusterTools clustertools;


/* .cc file */

...

for (const auto& c : event.Reconstructed().Candidates.get_iter()) {
    const auto effR   = clustertools.EffectiveRadius(*(c->FindCaloCluster()));
    const auto latMom = clustertools.LateralMoment(*(c->FindCaloCluster()));
    // fill histograms or tree branches with effR and latMom here
}

In this case the effective radius used above is defined slightly differently, using an energy-weighted cluster center instead of the central crystal.

Particles in Ant

By default Ant does not provide particles constructed from the existing detector information. In other frameworks you may get particles, but never know exactly why something is a photon or a proton, or whether this identification really works in your case or is suitable for the decay you're analysing. In general it's not too hard to do this yourself. Having the candidates from earlier, you can just check the VetoEnergy and decide whether the particle is charged or not. And if it's charged, deciding whether it is an electron, a pion, a proton, or something else is something you can probably do better yourself than with some simple generic approach over which you have no full control. If you want to work with particles instead of candidates, you can create a list of photons like this:

TParticleList photons;
for (auto c : cands.get_iter()) {
    if (c->VetoEnergy < .2)
        photons.emplace_back(std::make_shared<TParticle>(ParticleTypeDatabase::Photon, c));
}

TParticleList is a vector of TParticlePtr, a pointer to a TParticle. In the example, all candidates with less than 0.2 MeV energy deposited in the PID or the TAPS Vetos are stored as a TParticlePtr in photons. Since this is a std::vector, you can easily loop over the entries or work with it like an array.
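
For example, since it behaves like any std::vector, summing up all photons to get their combined invariant mass is straightforward (the histogram h_IM_photons is a hypothetical example, created via HistFac as usual):

LorentzVec sum;
for (const auto& g : photons)
    sum += *g;
h_IM_photons->Fill(sum.M());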

In case of protons you might want to do a Pulse Shape Analysis, or PSA. For this you usually create a 2-dimensional histogram and plot the PSA radius versus the PSA angle to see separations between different particle families like photons or protons and neutrons. Ant implements methods in TCluster to calculate those values. This means you can easily access them while looping over your candidates:

for (auto cand : event.Reconstructed().Candidates.get_iter()) {
    const auto PSAangle  = cand->FindCaloCluster()->GetPSAAngle();
    const auto PSAradius = cand->FindCaloCluster()->GetPSARadius();
    // fill a 2D histogram with PSAradius vs. PSAangle here
}

Since the ShortEnergy of a cluster is used for this, doing a PSA only makes sense in TAPS where the BaF2 crystals have a short and a long gate for the light components within the crystals.

Example using Proton Photon Combinatorics

One thing you might want to do in your analysis is determining the proton in your final state. You can do this, for example, by checking kinematic properties of your specific decay channel of interest and/or selecting the possible proton candidate from a dEvE plot, plotting the energy deposited in the vetos (PID or TAPS Veto) against the energy deposited in the calorimeter; this plot is often called a banana plot in A2 since, due to its curvature, the proton band looks similar to a banana. Depending on how you want to select it, you might want to test different combinations of your candidates, e.g. by looping over them and testing each candidate as the possible proton while all the other candidates are treated as photons. If this is the case, then utils/ProtonPhotonCombs.h is what you're looking for. You can create an object of this struct and use it to easily iterate over the aforementioned combinations, even applying filters (or cuts) on the combinations to discard those which do not fulfill kinematic properties the desired combination should have.

Let's have a look at how to use it. In the following example a few short aliases defined with using are introduced for longer expressions, since programmers are lazy. In the header file we have to include the following:

#include "analysis/utils/ProtonPhotonCombs.h"

...

using particle_comb_t = utils::ProtonPhotonCombs::comb_t;
using particle_combs_t = utils::ProtonPhotonCombs::Combinations_t;

The aliases particle_comb_t and particle_combs_t will be used for the combination(s) returned by ProtonPhotonCombs. They can then be used in the .cc file inside ProcessEvent(), applying some example selection criteria and looping over the combinations like this:

utils::ProtonPhotonCombs proton_photons(event.Reconstructed().Candidates);

for (const TTaggerHit& taggerhit : data.TaggerHits) {
    // the already discussed promptrandom check
    promptrandom.SetTaggerTime(triggersimu.GetCorrectedTaggerTime(taggerhit));
    if (promptrandom.State() == PromptRandom::Case::Outside)
        continue;

    // We create the combinations which are stored in selection.
    // We will also apply some filters here to illustrate how to use them.
    // With Observe() you can fill a histogram with entries after each filter step to monitor
    // what the efficiency is using a lambda expression and capturing the histogram used for filling
    // (which has to be defined in the constructor of your class, e.g.
    //   hist = HistFac.makeTH1D(" Accepted Events", "step", "#", BinSettings(10), "steps");)
    particle_combs_t selection = proton_photons()
            .Observe([hist] (const std::string& s) { hist->Fill(s.c_str(), 1.); }, "[S] ")
            // require 3 photons
            .FilterMult(3, settings.max_discarded_energy)
            // require the missing mass to be in a mass windows of 300 MeV around the proton mass
            .FilterMM(taggerhit, ParticleTypeDatabase::Proton.GetWindow(300).Round())
            // add a custom filter which checks the proton kinematics and only allows protons to have a theta angle less than 50°
            .FilterCustom([] (const particle_comb_t& p) {
                if (std_ext::radian_to_degree(p.Proton->Theta()) > 50)
                    return true;
                return false;
            }, "proton #vartheta")  // the name of this custom filter which will be used for filling in hist
            // as an example for another custom filter we require no more than 1 photon candidate with a veto energy of more than 0.2 MeV
            .FilterCustom([] (const particle_comb_t& p) {
                if (std::count_if(p.Photons.begin(), p.Photons.end(),
                                  [](TParticlePtr g){ return g->Candidate->VetoEnergy > .2; }) > 1)
                    return true;
                return false;
            }, "max 1 photon with vetoE>.2");

    // if our selection is empty, we continue with the next tagger hit
    if (selection.empty())
        continue;

    // loop over the different combinations in the selection
    for (const auto& cand : selection) {
        hist_mm->Fill(cand.MissingMass, promptrandom.FillWeight());
        LorentzVec IM;
        for (const auto& p : cand.Photons)
            IM += *p;
        hist_im->Fill(IM.M());
        hist_protonTheta->Fill(std_ext::radian_to_degree(cand.Proton->Theta()));
    }
}

The order of the two loops might not be very wise in this particular case, unless you want to check some things where the beam photon is needed. In the code snippet above only a few histograms are filled in order to illustrate how to access certain information from the combinations. Note that values of the combinations are only filled, and hence accessible, if the corresponding filter has been used. So cand.MissingMass would return 0 if no MM filter had been applied.

This routine can be very useful if you have an event signature with a proton and only photons (or photons and electrons, which due to their low mass can be treated as photons as a simplification) and then want to determine which combination, i.e. which candidate is the proton, is the right one by using a kinematic fit. This is explained later in the Kinematic Fitting section.

Particles in case of MC

If you're handling MC data, the earlier statement about Ant not providing particles is not entirely true anymore. There are different ways to process MC files: you can use generated particle events from Pluto, you can use only simulated Geant events (from the MC generator of your choice), or you can do both together. The default Ant way of analysing MC data is to specify two input files, the Pluto and the Geant file. If you do this, Ant matches the events from both files and provides you not only with the always mentioned event.Reconstructed() structure, but fills the event.MCTrue() structure as well. And since Pluto and its PParticle class store mother and daughter information for the individual particles, Ant makes use of this and creates a particle tree from the information provided in the Pluto file. This means the decay tree, including the unstable intermediate particles from Pluto if saved, is restored and can be traversed and used in your analysis. That's the biggest difference between the usual Reconstructed() information and MC.

The mentioned ParticleTree of the type TParticleTree_t is a tree of TParticlePtrs and can be accessed by calling event.MCTrue().ParticleTree. This by itself might be a bit cumbersome to use, but Ant gives you a hand in handling it. One use case might be that you have a file containing lots of decay channels and you want to separate them or track how many events of each decay you have seen, for example if you used Ant-cocktail. There are methods implemented in analysis/utils/ParticleTools.h to build the production or decay string from these particle trees; the decay string can be obtained by calling utils::ParticleTools::GetDecayString(event.MCTrue().ParticleTree). A use case might be this:

#include "analysis/utils/ParticleTools.h"

...

// example filling all seen MC decay channels:
hist_channels->Fill(utils::ParticleTools::GetDecayString(event.MCTrue().ParticleTree).c_str(), 1);

// store the production and decay channel in a string
// sanitize the strings by replacing characters like different brackets, colons, hashtags etc. by an underscore
string production = utils::ParticleTools::GetProductionChannelString(event.MCTrue().ParticleTree);
string decaystring = utils::ParticleTools::GetDecayString(event.MCTrue().ParticleTree);
decaystring = utils::ParticleTools::SanitizeDecayString(decaystring);

The method SanitizeDecayString can be used with other strings as well, not only the decay string.

You can also compare particle trees to check whether they're the same. This is done by calling the IsEqual method of TParticleTree_t objects. Let's say you have two of those objects, tree1 and tree2; then you can check whether they're the same by calling tree1->IsEqual(tree2, utils::ParticleTools::MatchByParticleName).

Since the particle tree is known in MC, you can also request certain particles, a list of all particles, or all particles of the same type, and work with this:

// we only want to work on MC, since on data there's no such thing as a ParticleTree available
if (!event.Reconstructed().ID.isSet(TID::Flags_t::MC))
    return;

const auto& particletree = event.MCTrue().ParticleTree;
if (particletree) {
    auto mctrue_particles = utils::ParticleTypeList::Make(particletree);
    auto particles = mctrue_particles.GetAll();
    // count the number of e+/e- in our MC event
    auto nCharged = std::count_if(particles.begin(), particles.end(), [] (TParticlePtr p) {
                      return p->Type() == ParticleTypeDatabase::eCharged; });

    // find certain particles in the tree
    const auto etaprime = utils::ParticleTools::FindParticle(ParticleTypeDatabase::EtaPrime, particletree);
    const auto pi0 = utils::ParticleTools::FindParticle(ParticleTypeDatabase::Pi0, particletree);  // will return the first found pi0 if there are more in the decay
    const auto photons = utils::ParticleTools::FindParticles(ParticleTypeDatabase::Photon, particletree);

    // select particles by hand, might be useful to have more flexibility or add special criteria
    TParticleList gammas;
    for (auto p : particles)
        if (p->Type() == ParticleTypeDatabase::Photon)
            gammas.push_back(p);
} else
    LOG_N_TIMES(1, WARNING) << "No particle tree found, only Geant or Pluto file provided, not both";

A list of all known and used particle types can be found in base/ParticleType.h.

Slowcontrol and Scaler Reads

One important object within TEventData not mentioned yet is TSlowControl, or rather a vector of those called SlowControls. Slowcontrols in Ant handle Scaler reads and other information that does not change on a per-event basis, like Tagging Efficiencies. They live in the slowcontrol directory and are implemented as processors and variables, defined in the files SlowControlProcessors.h and SlowControlVariables.h respectively. The implementation of those objects can be found in the corresponding subdirectories.

If you want to work with Scaler reads or other Slowcontrol data in your analysis, you need to include the file slowcontrol/SlowControlVariables.h in your physics class. Since Scaler reads, and hence other values based on them, do not change every event, they are handled a bit differently compared to what was discussed so far. Let's assume you want to do something like a cross section measurement and need the livetime of the experiment, or you want to check the Ladder/P2 ratio. To tell Ant which slowcontrols you want to monitor and which values you want to get, you request them in the constructor of your physics class:

PhysicsClass::PhysicsClass(const string& name, OptionsPtr opts):
    Physics(name, opts)
{
    slowcontrol::Variables::TaggerScalers->Request();
    slowcontrol::Variables::Beam->Request();
    slowcontrol::Variables::Trigger->Request();
}

Now that Ant is aware of the slowcontrol values we need, we can utilize this in our ProcessEvent method:

void PhysicsClass::ProcessEvent(const TEvent&, manager_t&)
{
...

    double taggerOr, p2, livetime;

    // only process slowcontrol data if we have a new scaler read
    if (slowcontrol::Variables::TaggerScalers->HasChanged())
    {
        taggerOr = slowcontrol::Variables::TaggerScalers->GetTaggerOr();
        p2       = slowcontrol::Variables::Beam->GetIonChamber();
        livetime = slowcontrol::Variables::Trigger->GetExpLivetime();
        // show Ladder/P2 info in the terminal
        LOG(INFO) << " ladder/p2 = "
                  << taggerOr << " / " << p2
                  << " = " << taggerOr / p2;
    }

Tagging Efficiencies can be used in a similar fashion. In this case you need to request the TaggEff variable:

slowcontrol::Variables::TaggEff->Request();

And then again in your ProcessEvent you obtain the Tagging Efficiency for the current Tagger channel like this:

void PhysicsClass::ProcessEvent(const TEvent&, manager_t&)
{
...

    for (const auto& taggerHit: data.TaggerHits) {
        // promptrandom check as discussed before
        promptrandom.SetTaggerTime(triggersimu.GetCorrectedTaggerTime(taggerHit));
        if (promptrandom.State() == PromptRandom::Case::Outside)
            continue;

        double TaggEff, TaggEffErr;
        {
            const auto taggEff = slowcontrol::Variables::TaggEff->Get(taggerHit.Channel);
            TaggEff            = taggEff.Value;
            TaggEffErr         = taggEff.Error;
        }
        // work with the TaggEff values
    }

For more examples you might want to have a look at the debugScaler or debugTaggEff class.

Kinematic Fitting

The technique of kinematic fitting is a mathematical procedure based on a least-squares minimization using Lagrange multipliers, with the goal of improving the resolution of experimental measurements and testing a hypothesis. It utilizes the physical laws of particle interactions or decays, which are passed to the kinematic fitter in the form of constraints.

In Ant kinematic fitting is realised using APLCON++, which you might know already from the installation. APLCON++ is a C++11 wrapper for the APLCON fitter by Volker Blobel. The base class for the different fitting methods within Ant is Fitter in analysis/utils/fitter. You will also need uncertainties for the variables the kinematic fitter should improve. They're based on the experimental resolution of reconstructed events and basically tell the fitter how much a measured variable is allowed to change to fulfill the constraints. All uncertainties provided by Ant can be found in the analysis/utils/uncertainties directory.

How do I obtain my own uncertainties?

The uncertainties for the kinematic fitting usually depend on energy and theta. While there are ways to estimate the uncertainties based on those variables, another and probably better approach is to determine them for your beamtime, since the incident beam energy, calibrations, or other settings might differ from other beamtimes. For this purpose the physics class InterpolatedPulls and the tool Ant-makeSigmas exist to determine the uncertainties by comparing pull distributions for the decay π⁰π⁰ → 4γ for data and MC. This way you create ROOT files interpolated_sigmas_{data,mc}.root for the physics_files directory of the calibration for your beamtime, which in turn can be read in by the Interpolated uncertainty model for the kinematic fit.

Adding a Fitter to your physics class

In preparation for the physics class being able to use a kinematic fitter, you first have to include the relevant header files:

#include "analysis/utils/fitter/KinFitter.h"
#include "utils/uncertainties/Interpolated.h"

Here we're going to use the uncertainties created using the interpolated pull classes. Within the class definition in the header file of your physics class we declare a variable for the fitter:

class PhysicsClass : public Physics {

...

private:
    utils::KinFitter fitter;

...

};

The first thing we need to do to actually use the fitter is to create it. This is done in the constructor of your physics class:

PhysicsClass::PhysicsClass(const string& name, OptionsPtr opts) :
    Physics(name, opts),
    fitter(utils::UncertaintyModels::Interpolated::makeAndLoad(),
           true  // enable fit z vertex
           )
{
    ...

    fitter.SetZVertexSigma(3.0);

Here we created a fitter with the Interpolated uncertainty model (the first argument of the KinFitter constructor) and enabled the Z vertex in the fit, which means the position of the interaction point within the target cell is not assumed to be at the center but can be shifted by the fitter. The corresponding uncertainty is then set to 3 cm (which is usually used for a 10 cm long target cell).

Now everything is prepared and we can start using the fit in our ProcessEvent(). You usually check the output of the kinematic fit for each Tagger hit since the beam energy is used in the fit, so place the call to run the fit within your Tagger loop.

double best_probability = std_ext::NaN;
for (const TTaggerHit& taggerhit : data.TaggerHits) {
    TParticlePtr proton;
    TParticleList photons;

    ...

    APLCON::Result_t fitresult = fitter.DoFit(taggerhit.PhotonEnergy, proton, photons);

    // check if the fit converged
    if (fitresult.Status != APLCON::Result_Status_t::Success)
        continue;

    // check if we found a better probability for this fit and copy it if true, continue otherwise
    if (!std_ext::copy_if_greater(best_probability, fitresult.Probability))
        continue;

    // retrieve the fitted photon and proton information as well as the number of iterations
    auto fitted_proton = fitter.GetFittedProton();
    auto fitted_photons = fitter.GetFittedPhotons();
    auto iterations = fitresult.NIterations;
}

It's as easy as that. To run the fit you give it the photon beam energy, a TParticlePtr to the proton, and a TParticleList (a vector of TParticlePtr) with the photons. The constraints used are energy and momentum conservation (four constraints: energy plus the x, y, and z components of the momentum). For the proton the energy is set as unmeasured (uncertainty of 0) since protons usually are not stopped in the calorimeter but punch through.
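
Written out for a reaction γ p → p' γ₁ … γₙ with the target proton at rest, these four constraints read:

E_beam + m_p = E_p' + Σ_i E_γ,i
p_beam = p_p' + Σ_i p_γ,i    (one equation per momentum component x, y, z)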

One use case which was mentioned already in this guide is to combine this with the class ProtonPhotonCombs: create the proton and photon combinations of the current event, loop over the combinations, run the kinematic fit for each of them, and determine based on the probability which combination is the best one, i.e. which particle is the proton.
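
A sketch of what this could look like inside the Tagger loop, reusing the selection from the ProtonPhotonCombs example and the fitter from above (the histogram h_fitprob is a hypothetical example):

    double best_probability = std_ext::NaN;
    TParticlePtr best_proton;
    TParticleList best_photons;

    for (const auto& comb : selection) {
        const APLCON::Result_t fitresult = fitter.DoFit(taggerhit.PhotonEnergy, comb.Proton, comb.Photons);

        // skip combinations where the fit did not converge
        if (fitresult.Status != APLCON::Result_Status_t::Success)
            continue;

        // keep the combination with the highest fit probability
        if (!std_ext::copy_if_greater(best_probability, fitresult.Probability))
            continue;

        best_proton  = fitter.GetFittedProton();
        best_photons = fitter.GetFittedPhotons();
    }

    if (isfinite(best_probability))
        h_fitprob->Fill(best_probability, promptrandom.FillWeight());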

Using different uncertainty models

In the example above we only covered the case for data. But you usually want to analyse MC as well, and it would be nice to dynamically switch between MC and data to use the same class for everything without creating independent fitters or even Physics classes. It was already shown above how to determine whether you're running on MC or data, so let's use this to dynamically switch the uncertainty model.

For this we first declare the two uncertainty models we're going to use in the header file, and add a method to modify some fit settings:

#include "analysis/utils/Uncertainties.h"

...

utils::UncertaintyModelPtr model_data;
utils::UncertaintyModelPtr model_MC;

...

public:

    static APLCON::Fit_Settings_t MakeFitSettings(unsigned);

Similar to the previous example we initialize the fitter and the models in the constructor by using the member initializer list:

APLCON::Fit_Settings_t PhysicsClass::MakeFitSettings(unsigned max_iterations)
{
    APLCON::Fit_Settings_t settings;
    settings.MaxIterations = max_iterations;
    return settings;
}

PhysicsClass::PhysicsClass(const string& name, OptionsPtr opts) :
    Physics(name, opts),
    model_data(utils::UncertaintyModels::Interpolated::makeAndLoad(
                   utils::UncertaintyModels::Interpolated::Type_t::Data,
                   // use Sergey as starting point
                   make_shared<utils::UncertaintyModels::FitterSergey>()
                   )),
    model_MC(utils::UncertaintyModels::Interpolated::makeAndLoad(
                 utils::UncertaintyModels::Interpolated::Type_t::MC,
                 // use Sergey as starting point
                 make_shared<utils::UncertaintyModels::FitterSergey>()
                 )),
    fitter(nullptr,  // set the uncertainty model later
           true,  // enable fit z vertex
           MakeFitSettings(20))  // set the maximum number of iterations to 20
{
    ...

    fitter.SetZVertexSigma(3.0);

Here we again use the Interpolated uncertainty model like before, but this time we specify the type (MC or data) and base the uncertainties on the uncertainty model by Sergey Prakhov (implemented as FitterSergey for the 2014 EPT and the 2007 eta beamtimes). This is just an example: you can also use a different uncertainty model, add parameters for other beamtimes, use the Interpolated uncertainties without a default model as in the previous example (pass nullptr or nothing at all, in which case nullptr is the default), or implement your own uncertainties. The third argument you can pass to the kinematic fitter is an APLCON::Fit_Settings_t struct; in the example above we just set the maximum number of iterations. Have a look at its definition to see what other values you can control if you want or need to.

Then we can set the model on an event basis like this:

void PhysicsClass::ProcessEvent(const TEvent& event, manager_t&)
{
    const bool MC = event.Reconstructed().ID.isSet(TID::Flags_t::MC);

...

    // set fitter uncertainty models
    {
        const auto& model = MC ? model_MC : model_data;
        fitter.SetUncertaintyModel(model);
        kinfit_freeZ.SetUncertaintyModel(model);
        treefitter_etap.SetUncertaintyModel(model);
    }

The other two fitters used above were not initialized in that example. Besides the default kinematic fit with just energy and momentum conservation, you can also add invariant mass constraints. In Ant we call this a tree fit because you basically provide a decay tree with particles at the tree nodes which the TreeFitter will use as invariant mass constraints.
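
For completeness, such additional fitters are simply further members declared in the header next to the first one; they are then initialized in the member initializer list, as shown in the constructor examples. A sketch, assuming the tree fitter class is utils::TreeFitter with its header living next to KinFitter.h (check analysis/utils/fitter/ for the exact names):

#include "analysis/utils/fitter/TreeFitter.h"  // assumed location, next to KinFitter.h

...

private:
    utils::KinFitter fitter;
    utils::KinFitter kinfit_freeZ;
    utils::TreeFitter treefitter_etap;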

Unconstrained Z vertex

The kinfit_freeZ should indicate that you can also treat the Z vertex differently. So far we used a constrained Z vertex. Of course you can use the kinematic fit without the Z vertex position at all by setting the corresponding flag to false. But maybe you want to fit the Z vertex position without constraining it, by setting its uncertainty to 0, which marks it as unmeasured (like the proton energy discussed earlier). In this case the kinematic fit has more freedom to shift the vertex position, which usually results in a broader vertex distribution. For wrong hypotheses, which may lead to a larger discrepancy in energy and momentum conservation, the vertex position can even get pushed outside of the target cell; since the fitter has more freedom to shift it when the uncertainty is 0, this is more likely to happen than in the case where we set it to 3 cm.

For a free Z vertex fit, the fitter tries to determine a good starting value for the vertex position based on the target length. You therefore have to provide the target length, either via the Setup (if Setup_traits::target_properties_t GetTargetProperties() is specified for the Setup you use) or by simply passing the length in cm in the constructor of your Physics class:

    // get target information
    const auto target = ExpConfig::Setup::Get().GetTargetProperties();

    // set sigma to 0 for unmeasured --> free z vertex
    kinfit_freeZ.SetZVertexSigma(0);
    kinfit_freeZ.SetTarget(target.length);  // double expected, target length in cm

Apart from this, there's no difference to using a constrained Z vertex. The other fitting method provided by Ant which was mentioned above is the tree fit, which will be discussed in more detail in the following section.

"Tree Fitting"

As mentioned before, a kinematic fit including invariant mass constraints is called a tree fit in Ant. The idea is that you provide a ParticleTypeTree, which is similar to the TParticleTree_t used in the MC section. The difference is that the tree does not contain TParticlePtrs, but references to ParticleTypeDatabase types (like ParticleTypeDatabase::Pi0) which carry the particle properties, e.g. the mass used to constrain the IMs in the kinematic fit.

Let's use as an example the reaction π⁰η → 4γ which we declare in the header file as treefitter_pi0eta.

PhysicsClass::PhysicsClass(const string& name, OptionsPtr opts) :
    Physics(name, opts),
    treefitter_pi0eta(ParticleTypeTreeDatabase::Get(ParticleTypeTreeDatabase::Channel::Pi0Eta_4g),
                      nullptr, true, {},  // pass a nullptr as uncertainty model (set it later) and activate the Z vertex
                      MakeFitSettings(10)
                      ),
    ...

This way we pass the predefined reaction channel to the tree fitter to set up the IM nodes for the pion and the eta. The uncertainty model is again left blank to be set later, and the Z vertex in the fit is enabled. The curly brackets just pass an empty node setup, which could give you more fine-grained control over the individual nodes (it's defined in TreeFitter.h, have a look there for more details on how it works). And we limit the maximum number of iterations to 10, similar to the previous example (the method definition itself is not repeated here).

You can also set some iteration filters for the treefit in the constructor, for example to limit the IM range for the nodes:

<Constructor like usual>
{
    treefitter_pi0eta.SetZVertexSigma(3.0);

    {
        auto pi0 = treefitter_pi0eta.GetTreeNode(ParticleTypeDatabase::Pi0);
        auto eta = treefitter_pi0eta.GetTreeNode(ParticleTypeDatabase::Eta);

        // pass a lambda as a filter for the fitter iterations
        treefitter_pi0eta.SetIterationFilter([pi0,eta] () {
            const auto& pi0_lvsum = pi0->Get().LVSum;  // get the pi0 IM
            const auto& eta_lvsum = eta->Get().LVSum;  // get the eta IM

            const auto& pi0_cut = ParticleTypeDatabase::Pi0.GetWindow(90);  // define a 90MeV window around the pi0 mass
            const auto& eta_cut = ParticleTypeDatabase::Eta.GetWindow(200);  // define a 200MeV window around the eta mass

            // require the pi0 and eta IMs of the pi0 and eta node of the fit to be within the defined mass windows
            return pi0_cut.Contains(pi0_lvsum.M()) && eta_cut.Contains(eta_lvsum.M());
        });
    }

If the decay contains the same particle type more than once, the above example does not apply directly; instead you retrieve all matching nodes at once. In case of the π⁰π⁰ → 4γ reaction, the code to limit the IM of the two pions could look like this:

...

    auto pi0s = treefitter_pi0pi0.GetTreeNodes(ParticleTypeDatabase::Pi0);
    treefitter_pi0pi0.SetIterationFilter([pi0s] () {
        auto lvsum1 = pi0s.front()->Get().LVSum;
        auto lvsum2 = pi0s.back()->Get().LVSum;

        const auto& pi0_cut = ParticleTypeDatabase::Pi0.GetWindow(90);

...
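
The elided part of the filter then just checks both nodes against the same window, analogous to the single-π⁰ case above; a possible completion of the lambda body:

        // require both pi0 candidates to be within the mass window
        return pi0_cut.Contains(lvsum1.M()) && pi0_cut.Contains(lvsum2.M());
    });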

In contrast to the kinematic fit, where you just call the DoFit() method since there is only one fit to perform, a treefit usually has to perform several fits (e.g. for the different assignments of the photons to the tree nodes). That's why the structure to run the fit(s) looks a bit different:

    double pi0eta_fit_prob = std_ext::NaN;

    // you need some proton and photons like in the beginning of the fitting section of this guide,
    // either determined by some selection criteria or by using for example the ProtonPhotonCombs
    // assume for this example we created combinations stored in combs
    for (const auto& p : combs) {

        APLCON::Result_t res;

        // taggerhit comes from the surrounding Tagger loop (not shown here)
        treefitter_pi0eta.PrepareFits(taggerhit.PhotonEnergy, p.Proton, p.Photons);

        // loop over the treefits as long as there's another fit to perform
        while (treefitter_pi0eta.NextFit(res)) {
            // check if the fit converged
            if (res.Status != APLCON::Result_Status_t::Success)
                continue;
            // check if we found a better probability
            if (!std_ext::copy_if_greater(pi0eta_fit_prob, res.Probability))
                continue;

            // found fit with better probability, do something with it
            int iterations = res.NIterations;

            const auto& fitter = treefitter_pi0eta;
            double fitted_z_vertex = fitter.GetFittedZVertex();
            ...
        }
    }

The outer loop in the above example is just there to indicate how it would look if you use the ProtonPhotonCombs approach which has already been explained.

Define your own ParticleTypeTree

It might be useful to define your own (maybe simplified) particle tree if you don't want to constrain everything or the predefined trees do not meet your needs. In this case you can just create your own tree, as shown in the following example. Let's create a ParticleTypeTree of an eta' decaying into 2 photons because we want to learn how to do it.

// create a "base tree" which is basically the photoproduction including the recoil proton
ParticleTypeTree base_tree()
{
    ParticleTypeTree t = Tree<typename ParticleTypeTree::element_type::type>::MakeNode(ParticleTypeDatabase::BeamProton);
    t->CreateDaughter(ParticleTypeDatabase::Proton);
    return t;
}

// create an eta' which decays into 2 photons, based on the basic reaction defined above
ParticleTypeTree etap_2g()
{
    auto t = base_tree();
    auto etap = t->CreateDaughter(ParticleTypeDatabase::EtaPrime);
    etap->CreateDaughter(ParticleTypeDatabase::Photon);
    etap->CreateDaughter(ParticleTypeDatabase::Photon);
    return t;
}

This is just an example to illustrate how you can construct your own ParticleTypeTree as an input for tree fitting. This decay, as well as many others, is already implemented in Ant; this one in particular can be obtained by calling ParticleTypeTreeDatabase::Get(ParticleTypeTreeDatabase::Channel::EtaPrime_2g). A list of all available channels can be found in the header file of ParticleTypeTree.
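
Once defined, such a tree can be passed to the tree fitter in place of a predefined channel, exactly like treefitter_pi0eta above; a sketch for the member initializer list (the member name treefitter_etap2g is just an illustration):

    treefitter_etap2g(etap_2g(),  // the hand-built decay tree from above
                      nullptr, true, {},  // uncertainty model set later, Z vertex enabled
                      MakeFitSettings(20)
                      ),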

Conclusion

Congratulations if you made it to this point! You have now learned a lot about how Ant works and how to write an analysis. There's a lot more to discover in Ant which is not covered by this guide; explaining everything that's useful would go way beyond its scope. Feel free to explore the physics classes that already exist, or the other classes and functions implemented in this framework. Here are just a few things worth mentioning:

  • There's a tool called Plotter in Ant which can loop over trees created in your analysis and produce nice plots without running over the raw data files again and again. The idea is to derive your own plotting class from it (e.g. struct MyPhysics_plot : Plotter { ...), register it like a Physics class by calling the macro AUTO_REGISTER_PLOTTER(MyPhysics_plot), and then run it with Ant-plot via the -p flag, passing the name of your Plotter. The intended usage is to reuse the WrapTTree definition from your header file so that your Plotter knows the structure of your tree and iterates over it. You need to provide a struct with definitions of the histograms you would like to have, how they should be filled, the cuts you might want to apply, and maybe some kind of switch between data and MC or even individual reaction channels. For this the class CutTree is used, which creates a combinatorial tree of the provided cuts and fills the histograms at each node. A fairly simple example including a lot of the things explained so far is the class PIDEfficiencyCheck, which includes a Plotter class. Either build on top of this, or, if you had no problems understanding it and want to do fancier things, have a look at other classes in analysis/physics/* like etaprime/EtapDalitz_plot, etaprime/EtapOmegaG_plot, omega/Omega_EtaG, production/triplePi0Plots, or any other class which includes the header files physics/Plotter.h and analysis/plot/CutTree.h and calls the macro AUTO_REGISTER_PLOTTER.
  • In analysis/utils you find a lot of things which are useful for an analysis (surprise!). One tool is the Matcher, which can be used to match simulated particles, i.e. the generated true particles, to the reconstructed particles simulated by Geant. You use it by calling its utils::match1to1() function, providing a list of true and a list of reconstructed particles, a lambda as a matching function which yields the quantity used to match the particles (like the angle between the two: [] (const TParticlePtr& p1, const TCandidatePtr& p2) { return p1->Angle(*p2); }), and an interval within which particles are considered matched (like {0, std_ext::degree_to_radian(20)}); see the sketch after this list.
  • There are a lot of calibrations available in Ant; for these, have a look at the Calibrations Guide. There are also tools to obtain smearings for MC energies and to obtain uncertainties, which might be included or at least referenced in that guide as well.
  • If you find something interesting and don't really understand how it works or how to use it, chances are there's a test for it (briefly explained here). Most of the tests written for Ant are relatively easy and show you how to use certain classes. Or use the really nice debugger when running your code in QtCreator to learn what happens when the code is executed.
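
To make the Matcher item above a bit more concrete, here is a minimal sketch using only the pieces mentioned there; the variables true_particles and candidates are placeholders for the list of MC-true particles and the list of reconstructed candidates:

// match MC-true particles to reconstructed candidates via their opening angle
const auto matched = utils::match1to1(true_particles, candidates,
    [] (const TParticlePtr& p1, const TCandidatePtr& p2) {
        return p1->Angle(*p2);  // matching quantity: angle between true particle and candidate
    },
    {0, std_ext::degree_to_radian(20.)});  // accept matches with an opening angle below 20 degrees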

There are many more things which have most likely been forgotten in this guide or the list above. Feel free to fix mistakes, extend existing topics or add new ones, rewrite things you did not understand, or add links to newly created wiki pages which did not exist at the time of writing of this guide.