Sirepo is fully open source, as are most of its codes. We are happy to support you, just submit an issue if you have questions.
We deploy using Docker.
If you use a Mac, read on. Otherwise, skip to PC Install. We use Macs, so they are the best supported.
Once Vagrant is installed, run the vagrant-sirepo-dev installer on your Mac:
```
mkdir v
cd v
curl https://radia.run | vagrant_dev_no_nfs_src=1 bash -s vagrant-sirepo-dev
vagrant ssh
```
The directory must be named "v", which will be used as a basis
for the hostname
v.radia.run. The rest of this page assumes
v.radia.run is the hostname.
vagrant_dev_no_nfs_src=1 turns off sharing
~/src between the
host (Mac) and guest (VM). This depends on how you develop. If you
would like to use an IDE like PyCharm, you might want to share
with the VM. This way you can edit files locally on your Mac. In this case,
you would use the command:
curl https://radia.run | bash -s vagrant-sirepo-dev
If you do this, you may want to have a symlink on your Mac from
/Users/<your-user> so that you can directly
reference file names in error messages output by sirepo.
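As an aside, the VAR=1 bash -s name idiom these curl installers use is ordinary shell: the variable is visible only to the bash process reading the piped script, and the name becomes that script's first argument. A minimal stand-in (the echoed one-line script below is hypothetical, not the real installer):

```shell
# The piped "script" stands in for the real installer; it just
# reports the variable and the argument it was given.
echo 'echo "nfs=$vagrant_dev_no_nfs_src arg=$1"' | vagrant_dev_no_nfs_src=1 bash -s vagrant-sirepo-dev
# → nfs=1 arg=vagrant-sirepo-dev
```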
The host defaults to
v.radia.run (IP 10.10.10.10). You can also
specify a different host as an argument:
curl https://radia.run | bash -s vagrant-sirepo-dev v3.radia.run
The host must be of the form v*.radia.run.
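The hostnames on this page (v.radia.run, v3.radia.run, v8.radia.run) all follow that pattern. A quick way to check a candidate name in the shell (check_host is a hypothetical helper, not part of the installer):

```shell
# Hypothetical helper: accept only hostnames of the form v*.radia.run,
# where * is an optional number, as in the examples on this page.
check_host() {
    [[ "$1" =~ ^v[0-9]*\.radia\.run$ ]]
}
check_host v3.radia.run && echo accepted    # → accepted
check_host example.com || echo rejected     # → rejected
```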
Next step: Single Server Execution.
PC Install
You can develop on Windows or Linux with Vagrant. You just have to run the install manually.
Once you have installed VirtualBox and Vagrant, create a directory, and use this Vagrantfile:
```ruby
# -*-ruby-*-
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/32-cloud-base"
  config.vm.hostname = "v.radia.run"
  config.vm.network "private_network", ip: "10.10.10.10"
  config.vm.provider "virtualbox" do |v|
    v.customize ["guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 5000]
    # https://stackoverflow.com/a/36959857/3075806
    v.customize ["setextradata", :id, "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled", "0"]
    # If you see network restart or performance issues, try this:
    # https://github.com/mitchellh/vagrant/issues/8373
    # v.customize ["modifyvm", :id, "--nictype1", "virtio"]
    #
    # Needed for compiling some of the larger codes
    v.memory = 8192
    v.cpus = 4
  end
  config.ssh.forward_x11 = false
  # https://stackoverflow.com/a/33137719/3075806
  # Undo mapping of hostname to 127.0.0.1
  config.vm.provision "shell", inline: "sed -i '/127.0.0.1.*v.radia.run/d' /etc/hosts"
end
```
Then install the vbguest plugin:
> vagrant plugin install vagrant-vbguest
This will make sure your time on the machine stays up to date, and also allow you to mount directories from the host. Once the plugin is installed, run:
> vagrant up
> vagrant ssh
And inside the guest VM run the redhat-base installer:
```
$ curl https://radia.run | bash -s redhat-dev
$ exit
```
This sets up a lot of environment state, so logging out is a good idea. Then log in again and run the sirepo-dev installer:
```
$ vagrant ssh
$ curl https://radia.run | bash -s sirepo-dev
$ exit
```
This installs all the codes used by sirepo. It's fully automatic, so
go have lunch, and it will be done. Make sure you
exit, because you
will need to refresh your login environment.
Next step: Single Server Execution.
Single Server (Runner) Execution
Once installed by one of the methods above, you will have a sirepo development environment. To run sirepo locally, run:
```
$ cd ~/src/radiasoft/sirepo
$ sirepo service http
```
Vagrant sets up a private network. You can access the server at http://v.radia.run:8000. However, some networks block resolutions to private internet addresses so you may have to visit http://10.10.10.10:8000 (this is the case, for example, on Macs with no active internet connection).
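That fallback can be sketched as a small shell helper (pick_url is hypothetical; getent is available in the Linux guest):

```shell
# Prefer the hostname if the resolver knows it; otherwise fall back
# to the private IP address (sketch only).
pick_url() {
    local host=$1 ip=$2 port=$3
    if getent hosts "$host" > /dev/null 2>&1; then
        echo "http://$host:$port"
    else
        echo "http://$ip:$port"
    fi
}
pick_url v.radia.run 10.10.10.10 8000
```

Depending on your network, this prints either the hostname URL or the IP URL.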
Developing with SBatch and Docker
Job execution is handled by several cooperating process components:
- sirepo service http, which receives messages from the GUI
- sirepo job_supervisor, which brokers messages between the server and agents
- command-line job commands (sirepo job_cmd), started by agents, which run the template functions in the execution environment
The job supervisor environment can support executing codes in a local process, on Docker-enabled nodes, on Slurm clusters, and at NERSC (assuming the user has credentials for accessing NERSC).
We can configure three of these environments for development automatically.
The Flask server can be started manually for the local job driver by running:
bash etc/run-server.sh local
The Tornado job supervisor is started separately for the local job driver by running:
bash etc/run-supervisor.sh local
This is sufficient for single-node execution in development or in a private network environment. Do not run the local driver in public environments or where security is a concern.
To enable Docker execution, you will need to install docker on your VM:
sudo su - -c 'radia_run redhat-docker'
This will require a reboot and a logout/login. Once you have Docker set up:
The server can be started with:
bash etc/run-server.sh docker
The job supervisor is started with the docker job driver:
bash etc/run-supervisor.sh docker
You can run Slurm jobs locally, too, but you need to install Slurm first.
The server can be started with:
bash etc/run-server.sh sbatch
Start the supervisor:
bash etc/run-supervisor.sh sbatch
If you want to run sbatch on another node, you can configure one
with vagrant-sirepo-dev (on a Mac or Linux), e.g. create a VM called v8.radia.run:
```
mkdir ~/v8
cd ~/v8
radia_run vagrant-sirepo-dev
vssh
radia_run slurm-dev
```
Then start the supervisor, pointing it at v8.radia.run:
bash etc/run-supervisor.sh sbatch v8.radia.run
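The supervisor script's positional arguments read as: driver first, optional sbatch host second. A hypothetical sketch of that convention (the real etc/run-supervisor.sh may differ):

```shell
# Hypothetical argument handling: default the driver to "local" and
# leave the sbatch host empty unless one is given.
describe_args() {
    local driver=${1:-local}
    local sbatch_host=${2:-}
    echo "driver=$driver host=${sbatch_host:-<none>}"
}
describe_args sbatch v8.radia.run    # → driver=sbatch host=v8.radia.run
describe_args local                  # → driver=local host=<none>
```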
In order to run on cori.nersc.gov, you need a socket open so that
Cori can reach the server. This can be accomplished through a reverse proxy or
socat running on a server with a public IP address.
You start the server in basically the same way:
bash etc/run-server.sh nersc
Let's say the public IP address is
126.96.36.199 and the server is running
on port 8001 on your VM
(v.radia.run). On that public server, run
socat to forward port 8001:
socat -d TCP-LISTEN:8001,fork,reuseaddr TCP:v.radia.run:8001
The supervisor is started with:
bash etc/run-supervisor.sh nersc 126.96.36.199 <nersc_user>
<nersc_user> must be a user that has a sirepo
development environment set up on cori.nersc.gov. You can try
nagler, which is a user that is often set up properly.
However, you should probably set up your own NERSC account.
TODO: describe how to setup cori.
Codeless Server Execution
You can run Sirepo without any of the scientific codes it supports by running the server this way:
$ SIREPO_FEATURE_CONFIG_SIM_TYPES=myapp sirepo service http
This runs the demo app, which is available at the following link: http://v.radia.run:8000/myapp.
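SIREPO_FEATURE_CONFIG_SIM_TYPES can name more than one app; to the best of our knowledge the value is a colon-separated list (e.g. myapp:srw). Plain shell shows what such a value expands to:

```shell
# Assuming the colon-separated list syntax (e.g. myapp:srw), this
# shows which sim types a given value would enable (sketch only).
types="myapp:srw"
IFS=: read -r -a enabled <<< "$types"
printf '%s\n' "${enabled[@]}"
# → myapp
# → srw
```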
As user vagrant, if radia_run fails, run it with debug:
radia_run debug sirepo-dev
Full Stack Sample Server Configuration
First, you need to set up Docker on CentOS/RHEL.
Here's a sample "full stack" server configuration. It runs with a
specific IP address (10.10.10.40), because it is bound to a specific
hostname, sirepo.v4.radia.run. It requires that you have Vagrant and
VirtualBox installed, and that you are on a Mac or Linux box to
execute the initial curl installer.
```
mkdir v4
cd v4
curl https://radia.run | bash -s vagrant-centos7
vagrant ssh
sudo su - -c 'radia_run redhat-docker'
# first time disables SELinux; you'll see a message saying this
exit
vagrant reload
vagrant ssh
sudo su - -c 'radia_run redhat-docker'
sudo su -
yum install -y nginx
cd /
curl -s -S -L https://github.com/radiasoft/sirepo/wiki/images/v4-root.tgz | tar xzf -
systemctl daemon-reload
systemctl restart docker
docker pull radiasoft/sirepo:dev
systemctl start sirepo_job_supervisor
systemctl start sirepo
systemctl start nginx
```
You can access the server as https://sirepo.v4.radia.run from the local host.
Running jupyter lab in the VM
This assumes you will be viewing notebooks in a web browser on your host machine. You may need to install jupyter lab yourself. To do so, log in to the VM and enter:
pip install jupyterlab
You may see a warning similar to the following:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
ipywidgets 7.6.3 requires jupyterlab-widgets>=1.0.0; python_version >= "3.6", which is not installed.
```
in which case enter:
pip install jupyterlab-widgets
To run, enter
jupyter lab --ip=0.0.0.0
You will see output like
```
[I 01:02:03.713 LabApp] JupyterLab extension loaded from /home/vagrant/.pyenv/versions/3.7.2/envs/py3/lib/python3.7/site-packages/jupyterlab
[I 01:02:03.714 LabApp] JupyterLab application directory is /home/vagrant/.pyenv/versions/3.7.2/envs/py3/share/jupyter/lab
[I 01:02:03.718 LabApp] Serving notebooks from local directory: /home/vagrant
[I 01:02:03.718 LabApp] Jupyter Notebook 6.1.4 is running at:
[I 01:02:03.719 LabApp] http://v4.radia.run:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
[I 01:02:03.719 LabApp] or http://127.0.0.1:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
[I 01:02:03.720 LabApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 01:02:03.725 LabApp] No web browser found: could not locate runnable browser.
[C 01:02:03.725 LabApp]
    To access the notebook, open this file in a browser:
        file:///home/vagrant/.local/share/jupyter/runtime/nbserver-14317-open.html
    Or copy and paste one of these URLs:
        http://v4.radia.run:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
     or http://127.0.0.1:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
```
Note that the URLs in the output above assume a browser running on the VM. In particular,
http://127.0.0.1:8888 refers to the guest (VM) and not the host (desktop) on which it is running.
Following this example, navigate to
http://10.10.10.40:8888... (using the IP address if necessary - see Single Server Execution for more on networking) in your desktop browser. You can also go to
http://v4.radia.run:8888/lab and enter the token when prompted. This may be preferable if you want to bookmark the server page, as the token gets regenerated. The browser will display the jupyter interface and you can begin running notebooks.
Note: the jupyter interface will not allow you to open directories above the one in which you started the server. Run it from
~ or a directory containing the notebooks you wish to run.
To stop the server, type ctrl-C twice (or once, then
y to confirm).
During the course of development you may find that jupyterlab fails to build, especially if you change versions frequently to test. One possible error, for example, is
[webpack-cli] Error: Cannot find module '<my-module>/package.json'
even when that file appears to exist. If that is the case, try
jupyter labextension list
which will list the installed lab extensions like so:
```
JupyterLab v<version>
Known labextensions:
   app dir: /home/vagrant/.pyenv/versions/<python_version>/envs/py3/share/jupyter/lab
        <my-module> vx.x.x enabled OK*

   local extensions:
        <my-module>: <path_to_module>/<my-module>/js
```
If you also see
Uninstalled core extensions: my-module
then jupyterlab is trying to incorporate a non-existent module into its build. Do
```
rm /home/vagrant/.pyenv/versions/<python version>/envs/py3/share/jupyter/lab/settings/build_config.json
jupyter lab build
```
and the build should complete successfully.
The FLASH code is proprietary. Users must be granted access in order to use it.
In order to work on Sirepo with the FLASH code, developers must either build the proprietary FLASH tarball from source or have access to an already built version (see sections below).
Developing from source
This method builds the proprietary FLASH tarball from the FLASH and Radiasoft source code. If you wish to work on Sirepo and don't need to build from source, you only need to follow the instructions in the section below (Developing from tarball).
- You will need a copy of the FLASH source code (FLASH-4.6.2.tar.gz). You can get one yourself from the FLASH website. You need to be authorized by the FLASH Center for Computational Science for use of the FLASH code.
cd ~/src/radiasoft && gcl rsconf && cd rsconf && rsconf build
mv <path-to>/FLASH-4.6.2.tar.gz ~/src/radiasoft/rsconf/proprietary
- Follow the radiasoft/download Development Notes to start a development installer server
cd ~/src/radiasoft/rsconf && rsconf build
radia_run flash-tarball # Make sure to run this in the window where you exported the install_server
- You should now see flash-dev.tar.gz in ~/src/radiasoft/rsconf/proprietary
- Now follow the steps in the section below for working on Sirepo
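As a quick sanity check that the tarball landed where the next section expects it, something like this can be run in the VM (check_tarball is a hypothetical helper, not part of the tooling):

```shell
# Report whether the proprietary tarball is present and readable at
# the expected path (or at a path passed as the first argument).
check_tarball() {
    local f=${1:-$HOME/src/radiasoft/rsconf/proprietary/flash-dev.tar.gz}
    if [[ -r "$f" ]]; then
        echo "found: $f"
    else
        echo "missing: $f"
        return 1
    fi
}
check_tarball || true
```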
Developing from tarball
- Make sure you have flash-dev.tar.gz in ~/src/radiasoft/rsconf/proprietary. If you don't, follow the instructions above.
cd ~/src/radiasoft/sirepo && rm -rf run # Removing the run dir forces the sirepo dev setup to copy the FLASH tarball into the proper location. This can be done manually
- Start the Sirepo server with the FLASH code enabled (e.g.
SIREPO_FEATURE_CONFIG_PROPRIETARY_SIM_TYPES=flash sirepo service http)
- Visit v.radia.run:8000/flash