Tree-level completions of LNV operators for neutrino-mass model building

Exploding operators for Majorana neutrino masses and beyond 💣💥

This is the code accompanying the paper “Exploding operators for Majorana neutrino masses and beyond”. It is not intended to be a polished, general-purpose implementation of the methods discussed in the paper, but rather an example of how they can be used. The code is not optimised for performance and contains many aspects specific to the problem tackled in the paper. The package can be installed through pip:

pip install neutrinomass

Please get in touch if you are having trouble installing or using the code.

The code is split into three main modules:

  1. tensormethod provides the field objects and effective operators.
  2. completions takes the effective operators generated by tensormethod and finds their tree-level UV completions with the methods discussed in the paper.
  3. database provides the filtered database of lepton-number violating models and functions for interacting with it.

Each module is described briefly below. There is also an examples directory containing a tutorial for the model dataframe object MVDF, which we intend to be the main way to interact with the filtered database of models. The full database of unfiltered models can be accessed here.

tensormethod

This module adapts the sympy tensor and tensor-index objects to the special case of SU(N) tensors representing fields transforming under the SM gauge group. (Since sympy version 1.3, the constructors of these objects have been made more user-friendly. The code here uses sympy version 1.2, but I plan on upgrading to the latest version of sympy soon.) Fields carry a label string as well as Dynkin digits, which define the irreducible representation. For example

>>> from sympy import Rational
>>> from neutrinomass.tensormethod import Field
>>> H = Field("H", "00001", charges={"y": Rational("1/2")})
>>> Q = Field("Q", "10101", charges={"y": Rational("1/6"), "3b": 1})

represent an isodoublet scalar (the Higgs) and a left-chiral fermion transforming in the fundamental of SU(3) and SU(2) (the left-handed quark doublet). The charges dictionary may in principle contain many U(1) charges, but hypercharge ("y") must always be one of them (it is initialised to 0 by default). Fields can be called on appropriately labelled indices. The indices can be passed in as strings according to the following rules:

  • The first character in the string is a minus sign for lowered indices.
  • Apart from a possible minus sign, the first character in the string must be one of u, d, c or i, standing for undotted, dotted, colour and isospin respectively.
  • The rest of the string is an identifier.
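For illustration, the rules above can be captured in a small parser (a hypothetical helper for exposition, not part of the package API):

```python
def parse_index(s):
    # Hypothetical helper: split an index string like "u0", "-i1" or "c0"
    # into its parts, following the three rules listed above.
    lowered = s.startswith("-")
    if lowered:
        s = s[1:]
    kinds = {"u": "undotted", "d": "dotted", "c": "colour", "i": "isospin"}
    kind = kinds[s[0]]  # the first (non-minus) character fixes the index type
    label = s[1:]       # the rest of the string is the identifier
    return lowered, kind, label
```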

Contracted indices have a special representation (inherited from the sympy objects). You can do basic tensor algebra, and make operators from products of fields:

>>> from neutrinomass.tensormethod import eps
>>> h = H("i0")
>>> h
H(i0)
>>> q = Q("u0 c0 i1")
>>> q
Q(u0, c0, i1)
>>> h * q * eps("-i0 -i1")
H(I_0)*Q(u0, c0, I_1)*metric(-I_0, -I_1)
>>> (h * q * eps("-i0 -i1")).dynkin
10100
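The examples suggest that the five Dynkin digits pack the Lorentz (undotted, dotted), SU(3) and SU(2) labels, in that order; a hypothetical helper to unpack a Dynkin string under that reading:

```python
def unpack_dynkin(digits):
    # Hypothetical helper: split a five-digit Dynkin string into the
    # Lorentz (undotted, dotted), SU(3) and SU(2) pieces, following the
    # convention suggested by the examples above.
    u, d, a, b, i = (int(c) for c in digits)
    return (u, d), (a, b), i
```

For instance, the quark doublet Q carries "10101": an undotted spinor index, the (1, 0) of SU(3) and an SU(2) doublet index.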

tensormethod also knows about Bose-Einstein and Fermi-Dirac statistics.

Invariants can be constructed explicitly, with certain indices optionally ignored (as we want in our case):

>>> from neutrinomass.tensormethod import L, H, invariants
>>> invariants(L, L, H, H)
[L(U_0, I_0)*L(U_1, I_1)*metric(-U_1, -U_0)*H(I_2)*metric(-I_0, -I_2)*H(I_3)*metric(-I_3, -I_1)]
>>> o1 = invariants(L, L, H, H, ignore=["u"])[0]
>>> o1
L(u0, I_0)*L(u1, I_1)*H(I_2)*metric(-I_0, -I_2)*H(I_3)*metric(-I_3, -I_1)
>>> o1.latex()
'L^{i} L^{j} H^{k} H^{l} \\epsilon_{i k} \\epsilon_{j l}'

Algorithms for removing operators equivalent up to certain kinds of index relabellings are currently implemented only for the contraction of one index type at a time.

The module also contains results from the Hilbert series up to dimension 11 in the ΔL = 2 SMEFT.

completions

The completions module contains the functionality for finding the tree-level completions of EffectiveOperator objects. These are constructed straightforwardly from tensormethod operator objects:

>>> from neutrinomass.completions import EffectiveOperator, operator_completions, clean_completions
>>> eff_o1 = EffectiveOperator("O_1", o1)

Models generating the Weinberg operator at tree level can then be found with the operator_completions and clean_completions functions:

>>> seesaw_1, seesaw_2, seesaw_3 = clean_completions(operator_completions(eff_o1))

Each Completion object has an associated Lagrangian, which contains information about the lepton-number violating interaction terms and can be called upon to generate the entire gauge- and Lorentz-invariant renormalisable interaction Lagrangian. The terms sufficient to generate the effective operator can be viewed through the Lagrangian object, or directly through the terms attribute of the Completion object:

>>> seesaw_2.terms
[L(U_0, I_0, g0_)*H(I_1)*ψ(U_1)*metric(-I_0, -I_1)*metric(-U_1, -U_0),
 L(U_0, I_0, g1_)*H(I_1)*ψ(U_1)*metric(-U_0, -U_1)*metric(-I_1, -I_0)]
>>> lag = seesaw_2.lagrangian
>>> lag.num_u1_symmetries()
2
>>> lag.generate_full()   # can be slow
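The two U(1) symmetries counted above can be understood by hand: both seesaw terms have the field content L H ψ, so each imposes the same linear relation q_L + q_H + q_ψ = 0 on the field charges, leaving a two-dimensional space of allowed charge assignments. A quick cross-check with sympy (an illustrative sketch, assuming num_u1_symmetries counts exactly this nullspace):

```python
from sympy import Matrix

# One row per Lagrangian term, one column per field charge (q_L, q_H, q_psi).
# Both seesaw terms are L H psi, so each row encodes q_L + q_H + q_psi = 0.
charge_matrix = Matrix([[1, 1, 1],
                        [1, 1, 1]])

# Independent U(1) symmetries = dimension of the nullspace of this matrix
n_u1 = len(charge_matrix.nullspace())
```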

You can look at a summary of the information relevant to a Completion by calling the info() method:

>>> seesaw_2.info()
Fields:
ψ    F(1, 1, 0)(0)

Lagrangian:
L(U_0, I_0, g0_)*H(I_1)*ψ(U_1)*metric(-I_0, -I_1)*metric(-U_1, -U_0)
L(U_0, I_0, g1_)*H(I_1)*ψ(U_1)*metric(-U_0, -U_1)*metric(-I_1, -I_0)

Diagram:    # Should open in separate window

The diagram will be displayed inline if you are in a notebook, and the Lagrangian should be rendered in LaTeX.

The completions are found by filling in allowed topologies generated with FeynArts through Mathematica. A nice Python interface to Mathematica was released relatively recently, which would make this bridge much cleaner. Many topologies are already loaded in; new ones are generated with the generatetopologies script. FeynArts keeps track of much more information than we need, so the current code is slower than it should be; a custom algorithm for generating the topologies would probably be quicker.
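To give a feel for what a custom topology generator would involve: restricting to cubic internal vertices, the number of tree topologies with n labelled external legs is (2n−5)!!, so an operator with four external fields (like the Weinberg operator) admits three such topologies. A minimal enumerator along these lines (an illustrative sketch under that cubic-only assumption, not the FeynArts-based pipeline the package actually uses):

```python
from itertools import count

def trivalent_trees(n):
    # Enumerate unrooted trees with n labelled external legs (0 .. n-1) in
    # which every internal vertex is trivalent, by grafting one new leg at
    # a time onto each existing edge of each smaller tree.
    fresh = count(n)                  # supply of internal-vertex labels
    v = next(fresh)
    trees = [frozenset({(0, v), (1, v), (2, v)})]  # unique 3-leg tree
    for leg in range(3, n):
        grown = []
        for tree in trees:
            for a, b in tree:
                v = next(fresh)       # split edge (a, b) at a new vertex
                grown.append(tree - {(a, b)} | {(a, v), (b, v), (leg, v)})
        trees = grown
    return trees
```

For n = 4 this yields the 3 = (2·4−5)!! cubic topologies, and 15 for n = 5.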

The important files are

├── completions
│   ├── core.py
│   ├── completions.py
│   ├── operators.py
│   ├── topologies.py
│   ├── utils.py
│   ├── generatetopologies
│   ├── wolfram
│   │   └── generatetopologies.wl
│   ├── topology_data
│   │   ├── deletedata
│   │   ├── diagrams
│   │   ├── graphs
│   │   └── partitions
│   ├── operators.p
│   └── deriv_operators.p

database

├── database
│   ├── __init__.py
│   ├── closures.py
│   ├── closures_test.py
│   ├── database.dat
│   ├── database.py
│   ├── export.py
│   ├── export_test.py
│   ├── json_serialiser.py
│   ├── operators.py
│   └── utils.py

The files in the database module include

  • closures.py: Contains the automated procedure for generating the operator closure diagrams, estimating the neutrino-mass scale and new-physics scale associated with each operator.
  • database.py: Defines the ModelDataFrame class, which is the main entry point for interacting with the models. The neutrino-mass dataframe MVDF is also provided here, which is an instance of a ModelDataFrame that contains the database of filtered models. We intend this to be the most common way of interacting with the data. All of the raw data can be accessed from the mvdb repository.
  • export.py: Contains functions used to export Completion objects to a text-based format.
  • operators.py: Contains functions relevant for making the main table of results in the paper.
  • pickledata: Script to generate the pickled files required to initialise MVDF; includes the list of models that generate the Weinberg operator through heavy loops.

The examples directory provides a tutorial for working with the database and some examples of common kinds of queries. The queries are made on the neutrino-mass dataframe object MVDF, which inherits from the pandas DataFrame. For more information on using dataframe objects in pandas, the user guide is probably a good place to start.
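Since MVDF is a pandas DataFrame, queries are ordinary boolean indexing. A toy example (the column names here are made up for illustration; the real ones are documented in the examples tutorial):

```python
import pandas as pd

# Toy stand-in for MVDF with hypothetical columns
df = pd.DataFrame({
    "operator":  ["O_1", "O_2", "O_3a", "O_3b"],
    "dimension": [5, 7, 9, 9],
    "n_exotics": [1, 2, 3, 2],
})

# e.g. models from operators above dimension 5 with at most two exotic fields
selected = df[(df["dimension"] > 5) & (df["n_exotics"] <= 2)]
```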