Developing Sirepo
Sirepo is fully open source, as are most of its codes. We are happy to support you; just submit an issue if you have questions.
Sirepo runs on Linux. We use Vagrant and VirtualBox, which you will need to install manually before doing anything below.
We deploy using Docker.
We rely heavily on a simple curl installer structure in our download repo. If you are not comfortable with curl installers, feel free to review the installer scripts mentioned below before running them.
If you use a Mac, read on. Otherwise, skip to PC Install. We use Macs, so they are the best supported.
Once the environment is installed, you can run the server (see Simple Server (Runner) Execution below).
See also: Server-side Technologies and Client-side Technologies.
Mac Install
Once Vagrant is installed, run the vagrant-sirepo-dev installer on your Mac:
mkdir v
cd v
curl https://radia.run | vagrant_dev_no_nfs_src=1 bash -s vagrant-sirepo-dev
vagrant ssh
The directory must be named "v", which will be used as the basis for the hostname v.radia.run. The rest of this page assumes v.radia.run is the hostname.
The vagrant_dev_no_nfs_src=1 setting turns off sharing ~/src between the host (Mac) and guest (VM). Whether you want this depends on how you develop. If you would like to use an IDE like PyCharm, you might want to share ~/src with the VM so that you can edit files locally on your Mac. In this case, you would use the command:
curl https://radia.run | bash -s vagrant-sirepo-dev
If you do this, you may want to have a symlink on your Mac from /home/vagrant to /Users/<your-user> so that you can directly reference file names in error messages output by Sirepo. Make sure /home on your Mac is chmod 755.
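A minimal sketch of that setup (<your-user> is your macOS account name; note that recent macOS versions may require an /etc/synthetic.conf entry before anything can be created under /home):
```sh
# On the Mac (host).
sudo ln -s /Users/<your-user> /home/vagrant   # symlink guest path -> host path
sudo chmod 755 /home                          # as noted above
```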
The host defaults to v.radia.run (IP 10.10.10.10). You can also specify a different host as an argument to vagrant-sirepo-dev, e.g.
curl https://radia.run | bash -s vagrant-sirepo-dev v3.radia.run
The host must be of the form v[1-9].radia.run.
Next step: Simple Server (Runner) Execution.
PC Install
You can develop on Windows or Linux with Vagrant. You just have to run the install manually.
Linux Note: Always use the repos configured by vagrantup.com and virtualbox.org, not the default packages that come with your distro. We know for sure that Ubuntu's VirtualBox doesn't work properly.
Once you have installed VirtualBox and Vagrant, create a directory, and use this Vagrantfile:
# -*-ruby-*-
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/32-cloud-base"
  config.vm.hostname = "v.radia.run"
  config.vm.network "private_network", ip: "10.10.10.10"
  config.vm.provider "virtualbox" do |v|
    v.customize ["guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 5000]
    # https://stackoverflow.com/a/36959857/3075806
    v.customize ["setextradata", :id, "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled", "0"]
    # If you see network restart or performance issues, try this:
    # https://github.com/mitchellh/vagrant/issues/8373
    # v.customize ["modifyvm", :id, "--nictype1", "virtio"]
    #
    # Needed for compiling some of the larger codes
    v.memory = 8192
    v.cpus = 4
  end
  config.ssh.forward_x11 = false
  # https://stackoverflow.com/a/33137719/3075806
  # Undo mapping of hostname to 127.0.0.1
  config.vm.provision "shell",
    inline: "sed -i '/127.0.0.1.*v.radia.run/d' /etc/hosts"
end
Then install the vbguest plugin:
> vagrant plugin install vagrant-vbguest
This will make sure the time on the machine stays up to date and also allows you to mount directories from the host. Once the plugin is installed, run:
> vagrant up
Once booted:
> vagrant ssh
And inside the guest VM run the redhat-dev installer:
$ curl https://radia.run | bash -s redhat-dev
$ exit
This sets up a lot of environment, so logging out is a good idea. Then log in again and run the sirepo-dev installer:
$ vagrant ssh
$ curl https://radia.run | bash -s sirepo-dev
$ exit
This installs all the codes used by Sirepo. It's fully automatic, so go have lunch, and it will be done when you return. Make sure you exit, because you will need to refresh your login environment.
Next step: Simple Server (Runner) Execution.
Simple Server (Runner) Execution
Once installed by one of the methods above, you will have a sirepo development environment. To run sirepo locally, run:
$ cd ~/src/radiasoft/sirepo
$ sirepo service http
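Once it is running, a quick sanity check (not part of the official steps) is to request the root page:
```sh
curl -sS -o /dev/null -w '%{http_code}\n' http://v.radia.run:8000/
# expect 200 (or a 3xx redirect)
```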
uWSGI and NGINX
In production we use uWSGI and NGINX. To run the server using this configuration in development:
$ cd ~/src/radiasoft/sirepo
$ sirepo service uwsgi
# In a different terminal window
$ sirepo service nginx-proxy
# In another terminal window
$ bash etc/run-supervisor.sh local
Navigate to v.radia.run:8080 (note: 8080 not the normal 8000) to access Sirepo.
Vagrant sets up a private network. You can access the server at http://v.radia.run:8000. However, some networks block resolution of private internet addresses, so you may have to visit http://10.10.10.10:8000 (this is the case, for example, on Macs with no active internet connection).
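If neither URL responds, it can help to check from inside the VM which ports are actually listening; this is generic tooling, not Sirepo-specific:
```sh
ss -tlnp | grep -E ':(8000|8080)'
```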
Developing with SBatch and Docker
The sirepo service http setup is used for basic application development using the local job driver. However, you may want to use the sbatch or docker drivers for multi-node execution environments.
Job execution is handled by four process components:
- Flask server (sirepo service http), which receives messages from the GUI
- Tornado supervisor (sirepo job_supervisor), which brokers messages between the server and agents
- Tornado agents (sirepo job_agent), started by job drivers, which allow the supervisor to execute jobs in different environments
- Command line job commands (sirepo job_cmd), started by agents, which run the template functions in the execution environment
The job supervisor environment can support executing codes in a local process, on Docker-enabled nodes, on Slurm clusters, and at NERSC (assuming the user has credentials for accessing NERSC).
We can configure three of these environments for development automatically.
Local Jobs
The Flask server can be started manually for the local job driver by running:
bash etc/run-server.sh local
The Tornado job supervisor is started separately for the local job driver by running:
bash etc/run-supervisor.sh local
This is sufficient for single-node execution in development or in a private network environment. Do not run the local driver in public environments or where security is a concern.
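If you prefer a single terminal, the two scripts above can be combined in a small sketch (not an official workflow):
```sh
cd ~/src/radiasoft/sirepo
bash etc/run-supervisor.sh local &    # supervisor in the background
supervisor_pid=$!
trap 'kill "$supervisor_pid"' EXIT    # stop it when this shell exits
bash etc/run-server.sh local          # server in the foreground
```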
Docker Jobs
To enable Docker execution, you will need to install docker on your VM:
sudo su - -c 'radia_run redhat-docker'
This will require a reboot and a logout/login. Once you have Docker set up, start the server and the job supervisor.
The server can be started with:
bash etc/run-server.sh docker
The job supervisor is started using the docker job driver:
bash etc/run-supervisor.sh docker
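Before starting the supervisor, it can save time to verify that Docker works and that the Sirepo image used elsewhere on this page is available (a sanity check, not a required step):
```sh
docker info > /dev/null && echo 'docker ok'
docker pull radiasoft/sirepo:dev
```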
Slurm Jobs
You can run Slurm jobs locally, too, but you need to install Slurm:
radia_run slurm-dev
The server can be started with:
bash etc/run-server.sh sbatch
Start the supervisor:
bash etc/run-supervisor.sh sbatch
If you want to run sbatch on another node, you can configure one (on a Mac or Linux host), e.g. create a VM called v8.radia.run:
mkdir ~/v8
cd ~/v8
radia_run vagrant-sirepo-dev
vssh
radia_run slurm-dev
Then start the supervisor on v.radia.run with:
bash etc/run-supervisor.sh sbatch v8.radia.run
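To confirm the Slurm installation itself is functional, standard Slurm commands are enough (not Sirepo-specific):
```sh
sinfo               # show cluster/partition state
srun -N1 hostname   # run a trivial one-node job
```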
NERSC Jobs
In order to run on cori.nersc.gov, you need a socket open so that Cori can reach the server. This can be accomplished through a reverse proxy or socat running on a server with a public IP address.
You start the server basically the same way:
bash etc/run-server.sh nersc
Let's say the public IP address is 1.2.3.4 and the server is running on port 8001 on your VM (v.radia.run). On that public server, start socat, which forwards port 8001:
socat -d TCP-LISTEN:8001,fork,reuseaddr TCP:v.radia.run:8001
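You can verify that socat is listening on the public server with generic tooling:
```sh
ss -tln | grep ':8001'
```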
The supervisor is started with:
bash etc/run-supervisor.sh nersc 1.2.3.4:8001 <nersc_user>
To be able to reach Sirepo running on the remote server from the browser on your computer, you'll want to set up ssh local forwarding. In your ~/.ssh/config add:
Host foo
    HostName 1.2.3.4
    LocalForward 8000 v.radia.run:8000
Then go to 127.0.0.1:8000 in your browser and traffic will be forwarded.
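Equivalently, the same forward can be created ad hoc without editing ~/.ssh/config; <user> here is your account on the public server:
```sh
ssh -N -L 8000:v.radia.run:8000 <user>@1.2.3.4
```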
The <nersc_user> must be a user that has a Sirepo development environment set up on cori.nersc.gov.
To set up the development environment on NERSC you'll need to do a few things:
- SSH into NERSC: ssh <username>@cori.nersc.gov
- Install a python environment: curl radia.run | bash -s nersc-pyenv
- Install sirepo and pykern:
$ mkdir -p ~/src/radiasoft/
$ cd $_
$ git clone https://github.com/radiasoft/pykern.git
$ cd pykern
$ pip install -e .
$ cd ../
$ git clone https://github.com/radiasoft/sirepo.git
$ cd sirepo
$ pip install -e .
- Pull the shifter image: shifterimg pull docker:radiasoft/sirepo:dev
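A quick way to confirm the editable installs worked is a simple import test (not part of the official steps):
```sh
python -c 'import pykern, sirepo; print(pykern.__file__, sirepo.__file__)'
```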
Codeless Server Execution
You can run Sirepo without any of the scientific codes:
$ SIREPO_FEATURE_CONFIG_SIM_TYPES=myapp sirepo service http
This runs the demo app, which is available at http://v.radia.run:8000/myapp.
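Several apps can be enabled at once; the list separator for this configuration value is a colon. For example, assuming the srw app is installed:
```sh
SIREPO_FEATURE_CONFIG_SIM_TYPES=myapp:srw sirepo service http
```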
Updating VM
As user vagrant:
radia_run sirepo-dev
If radia_run fails, run with debug:
radia_run debug sirepo-dev
Full Stack Sample Server Configuration
First, you need to set up Docker on CentOS/RHEL.
Here's a sample "full stack" server configuration. It runs with a specific IP address (10.10.10.40), because it is bound to a specific domain name, sirepo.v4.radia.run. It requires that you have Vagrant and VirtualBox installed, and that you are on a Mac or Linux box to execute the initial curl installer.
mkdir v4
cd v4
curl https://radia.run | bash -s vagrant-centos7
vagrant ssh
sudo su - -c 'radia_run redhat-docker'
# first time disables SELinux; you'll see a message saying this
exit
vagrant reload
vagrant ssh
sudo su - -c 'radia_run redhat-docker'
sudo su -
yum install -y nginx
cd /
curl -s -S -L https://github.com/radiasoft/sirepo/wiki/images/v4-root.tgz | tar xzf -
systemctl daemon-reload
systemctl restart docker
docker pull radiasoft/sirepo:dev
systemctl start sirepo_job_supervisor
systemctl start sirepo
systemctl start nginx
You can access the server at https://sirepo.v4.radia.run from the local host.
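To confirm each piece came up, you can check the services and the endpoint with generic systemd/curl usage (a sanity check, not from the original steps):
```sh
systemctl status sirepo sirepo_job_supervisor nginx --no-pager
curl -sSk -o /dev/null -w '%{http_code}\n' https://sirepo.v4.radia.run
```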
Running jupyter lab in the VM
This assumes you will be viewing notebooks in a web browser on your host machine. You may need to install jupyter lab yourself. To do so, log in to the VM and enter:
pip install jupyterlab
You may see a warning similar to the following:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
ipywidgets 7.6.3 requires jupyterlab-widgets>=1.0.0; python_version >= "3.6", which is not installed.
in which case enter:
pip install jupyterlab-widgets
To run, enter:
jupyter lab --ip=0.0.0.0
You will see output like
[I 01:02:03.713 LabApp] JupyterLab extension loaded from /home/vagrant/.pyenv/versions/3.7.2/envs/py3/lib/python3.7/site-packages/jupyterlab
[I 01:02:03.714 LabApp] JupyterLab application directory is /home/vagrant/.pyenv/versions/3.7.2/envs/py3/share/jupyter/lab
[I 01:02:03.718 LabApp] Serving notebooks from local directory: /home/vagrant
[I 01:02:03.718 LabApp] Jupyter Notebook 6.1.4 is running at:
[I 01:02:03.719 LabApp] http://v4.radia.run:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
[I 01:02:03.719 LabApp] or http://127.0.0.1:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
[I 01:02:03.720 LabApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 01:02:03.725 LabApp] No web browser found: could not locate runnable browser.
[C 01:02:03.725 LabApp]
To access the notebook, open this file in a browser:
file:///home/vagrant/.local/share/jupyter/runtime/nbserver-14317-open.html
Or copy and paste one of these URLs:
http://v4.radia.run:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
or http://127.0.0.1:8888/?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a
Note that the URLs in the output above assume a browser running on the VM. In particular, file:///... and http://127.0.0.1:8888 reference the guest (VM) and not the host (desktop) on which it is running.
Following this example, navigate to http://v4.radia.run:8888/lab?token=7d606441da444e01736bdf1550991a4f39a7291f3632615a (or http://10.10.10.40:8888... if necessary; see Simple Server (Runner) Execution for more on networking) in your desktop browser. You can also go to http://v4.radia.run:8888/lab and enter the token when prompted. This may be preferable if you want to bookmark the server page, as the token gets regenerated. The browser will display the jupyter interface and you can begin running notebooks.
Note: the jupyter interface will not allow you to open directories above the one in which you started the server. Run it from ~ or a directory containing the notebooks you wish to run.
To stop the server, type ctrl-C twice (or once, then y to confirm).
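Since the VM has no browser (hence the "No web browser found" warning in the output above), you can suppress the browser-launch attempt with a standard jupyter flag:
```sh
jupyter lab --ip=0.0.0.0 --no-browser
```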
Troubleshooting
During the course of development you may find that jupyterlab fails to build, especially if you change versions frequently to test. One possible error, for example, is:
[webpack-cli] Error: Cannot find module '<my-module>/package.json'
even when that file appears to exist. If that is the case, try:
jupyter labextension list
which will list the installed lab extensions like so:
JupyterLab v<version>
Known labextensions:
   app dir: /home/vagrant/.pyenv/versions/<python_version>/envs/py3/share/jupyter/lab
        <my-module> vx.x.x enabled OK*
   local extensions:
        <my-module>: <path_to_module>/<my-module>/js
If you also see
Uninstalled core extensions:
my-module
then jupyterlab is trying to incorporate a non-existent module into its build. Run:
rm /home/vagrant/.pyenv/versions/<python version>/envs/py3/share/jupyter/lab/settings/build_config.json
jupyter lab build
and the build should complete successfully.
Developing Jupyterlab
- We have a dedicated repo for developing jupyterlab extensions
- See rsjupyterlab repo for more
Developing Jupyterhub
Running jupyterhub development server
- You can do so with the bash script in sirepo/etc:
bash etc/run-jupyterhub.sh
and then navigate to v.radia.run:8080/jupyter
- You may be prompted for an email address. Use vagrant@x.x and then check your terminal for something along these lines: http://v.radia.run:8000/auth-email-authorize/asdf932rijagoijw3
- Paste that URL into your browser; it should take you to jupyterhub
Updating Jupyterhub UI
- Jupyterhub has templates that define the jupyterhub UI, which live in $(pyenv prefix)/share/jupyterhub/templates.
- To modify the UI in a clean way, you can create child templates that inherit from the ones in $(pyenv prefix)/share/jupyterhub/templates. For example:
{% extends "templates/page.html" %}
{% block nav_bar_right_items %}
  <li>
    item
  </li>
  {{ super() }}
{% endblock %}
The above example inherits from the jupyterhub source code templates and adds a list item to the nav_bar_right_items block. The call to super() ensures that all the content of the parent template's nav_bar_right_items block is included in that part of the page.
- Your child templates should live in sirepo/package_data/jupyterhub_templates/
- jupyterhub_conf.py.jinja points jupyterhub to your child templates with this line:
c.JupyterHub.template_paths = [sirepo.jupyterhub.template_dirs()]
Troubleshooting
- If the jupyterhub dev server crashes due to some error, the other server processes might not exit properly. This can result in issues when you try to restart the jupyterhub dev server.
- To fix this, you may need to kill these processes manually, e.g.:
pkill -f uwsgi
pkill -f nginx
pkill -f sirepo
- Other processes might be left running, in which case use ps x to inspect. You can also do lsof -i :<port no.> to see what is running on a given port, then pkill -f <that process name> as well.
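The cleanup steps above can be combined into a short sketch; it assumes nothing else on the box matches these process names:
```sh
for name in uwsgi nginx sirepo; do
    pkill -f "$name" || true   # ignore 'no process found' failures
done
```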
Developing FLASH
The FLASH code is a proprietary code. Users must be granted access in order to use it.
In order to work on Sirepo with the FLASH code, developers must either build the FLASH proprietary tarball from source or have access to an already built version (see the sections below).
Developing from source
This method builds the proprietary FLASH tarball from the FLASH and RadiaSoft source code. If you wish to work on Sirepo and don't need to build from source, you only need to follow the instructions in the section below (Developing from tarball).
- You will need a copy of the FLASH source code (FLASH-4.6.2.tar.gz). You can get one yourself from the FLASH website. You need to be authorized by the FLASH Center for Computational Science for use of the FLASH code.
- cd ~/src/radiasoft && gcl rsconf && cd rsconf && rsconf build
- mv <path-to>/FLASH-4.6.2.tar.gz ~/src/radiasoft/rsconf/proprietary
- Follow the radiasoft/download Development Notes to start a development installer server
- cd ~/src/radiasoft/rsconf && rsconf build
- radia_run flash-tarball # Make sure to run this in the window where you exported the install_server
- You should now see flash-dev.tar.gz in ~/src/radiasoft/rsconf/proprietary
- Now follow the steps in the section below for working on Sirepo
Developing from tarball
- Make sure you have flash-dev.tar.gz in ~/src/radiasoft/rsconf/proprietary. If you don't, follow the instructions above.
- cd ~/src/radiasoft/sirepo && rm -rf run # Removing the run dir forces the sirepo dev setup to copy the FLASH tarball into the proper location. This can also be done manually.
- Start the Sirepo server with the FLASH code enabled, e.g. SIREPO_FEATURE_CONFIG_PROPRIETARY_SIM_TYPES=flash sirepo service http
- Visit v.radia.run:8000/flash
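A recap of the tarball workflow above as a single sketch:
```sh
# Fail early if the proprietary tarball is missing.
ls ~/src/radiasoft/rsconf/proprietary/flash-dev.tar.gz || exit 1
cd ~/src/radiasoft/sirepo
rm -rf run   # forces the dev setup to re-copy the FLASH tarball
SIREPO_FEATURE_CONFIG_PROPRIETARY_SIM_TYPES=flash sirepo service http
```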
License: http://www.apache.org/licenses/LICENSE-2.0.html
Copyright © 2015–2020 RadiaSoft LLC. All Rights Reserved.