LLAMA is a reliable, flexible multi-messenger astrophysics framework and search pipeline. It identifies astrophysical signals by combining observations from multiple types of astrophysical messengers, and it can run either in online mode as a self-contained low-latency pipeline or in offline mode for post-hoc analyses. It is maintained by Stefan Countryman in this GitHub repository; the Docker image can be found here. Documentation on manually pushing the Conda environment is available here.
This image serves as a build environment for LLAMA code. It ships an Anaconda Python distribution with all LLAMA dependencies installed for the `llama` user.
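As a rough sketch of how such a build-environment image is typically used, the commands below pull the image and open a shell inside it with the local source tree mounted for development. The image name `llama-env` and the mount path `/home/llama/src` are placeholders, not the project's actual values; substitute the real image reference from the Docker registry linked above.

```shell
# Pull the build-environment image (image name is a placeholder;
# use the actual reference from the project's Docker registry).
docker pull llama-env:latest

# Start an interactive shell as the llama user, mounting the local
# LLAMA checkout into the container so edits persist on the host.
# The in-container path /home/llama/src is an assumed convention.
docker run --rm -it \
    --user llama \
    -v "$(pwd)":/home/llama/src \
    -w /home/llama/src \
    llama-env:latest \
    /bin/bash
```

Because all dependencies live in the image's Anaconda distribution, the host only needs Docker itself; no local Python or Conda setup is required.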