
Low-Latency Algorithm for Multi-messenger Astrophysics (LLAMA) Conda build environment


LLAMA is a reliable and flexible multi-messenger astrophysics framework and search pipeline. It identifies astrophysical signals by combining observations from multiple types of astrophysical messengers, and it can run either in online mode as a self-contained low-latency pipeline or in offline mode for post-hoc analyses. It is maintained by Stefan Countryman from this GitHub repository; the Docker image can be found here. Some documentation on manually pushing the Conda environment is available here.

This image serves as a build environment for LLAMA code. It ships an Anaconda Python distribution with all LLAMA dependencies installed for the llama user.
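A minimal usage sketch follows, assuming the image is published under a name matching this repository (stefco/llama-env) and that a LLAMA source checkout is available locally; the image name, mount path, and build/test commands are illustrative assumptions, not taken from this repository's documentation.

```sh
# Pull the build-environment image (name assumed to match this repository).
docker pull stefco/llama-env

# Start a container as the llama user (dependencies are installed for that
# user) with a local LLAMA checkout mounted in, then use the preinstalled
# Anaconda environment to build and test it. The install/test commands here
# are hypothetical placeholders for the actual LLAMA build steps.
docker run --rm -it \
    --user llama \
    -v "$(pwd)/llama:/home/llama/llama" \
    -w /home/llama/llama \
    stefco/llama-env \
    bash -c "conda env list && pip install -e . && python -m pytest"
```

Because the dependencies are already installed for the llama user, the image can also be used as the base of a downstream Dockerfile rather than run interactively.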
