Jupyter widgets for scheduling processes and visualizing the resulting (live) data. While it is designed for custom, data-specific visualizations (plot widgets) based on ipywidgets, visualizations for time series data (bqplot), animations (matplotlib) and 3D data (ipyvolume) are included.
To start a process, a so-called job has to be set up. A job consists of a Python function and a configuration. The function contains the code to be executed (machine learning, a simulation, …), for example:
```python
import time as tim


def wait_n_times_x_ms(config, process_queue=None, return_dict=None):
    n = config["parameter"]["n"]
    x = config["parameter"]["x"]
    time = list()
    time_series = list()
    for i in range(n):
        time.append(i)
        time_series.append(i / x)
        process_queue.put(dict(progress=int(i / n * 100),
                               time=i,
                               time_series=i / x))
        tim.sleep(x)
```
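For a quick sanity check, such a job function can also be exercised outside the scheduler. In the sketch below a plain `queue.Queue` stands in for the queue the framework would normally pass in (the function is repeated in simplified form so the snippet is self-contained; the real `process_queue` is a multiprocessing queue, so details may differ):

```python
import queue
import time as tim


def wait_n_times_x_ms(config, process_queue=None, return_dict=None):
    n = config["parameter"]["n"]
    x = config["parameter"]["x"]
    for i in range(n):
        # report progress and the newest data point to the consumer
        process_queue.put(dict(progress=int(i / n * 100),
                               time=i,
                               time_series=i / x))
        tim.sleep(x)


q = queue.Queue()  # stand-in for the scheduler-provided queue
wait_n_times_x_ms({"parameter": {"n": 4, "x": 0.01}}, process_queue=q)

updates = []
while not q.empty():
    updates.append(q.get())
print([u["progress"] for u in updates])  # [0, 25, 50, 75]
```

Each `put` delivers one progress/data update, which is exactly what the plot widgets consume while the job is running.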
The configuration(s) can be read from a YAML file like this:
```yaml
wait_40_times_100_ms:
  parameter:
    n: 40
    x: 0.1

wait_10_times_300_ms:
  parameter:
    n: 10
    x: 0.3
```
or can be defined as a Python dictionary.
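Written directly in Python, the same configurations form a nested dictionary mirroring the YAML structure above (how the dictionary is handed to the scheduler depends on the widget API):

```python
configs = {
    "wait_40_times_100_ms": {"parameter": {"n": 40, "x": 0.1}},
    "wait_10_times_300_ms": {"parameter": {"n": 10, "x": 0.3}},
}

# each top-level entry is one job configuration, e.g.:
print(configs["wait_40_times_100_ms"]["parameter"]["n"])  # 40
```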
Screencast.min.example.webm
Live data can also be monitored during script execution:
Screencast.min.example.live.webm
A more practically relevant example is the following simulation of a reaction wheel pendulum under state-feedback control.
Screencast.pendulum.webm
Widgets can be easily connected if required:
Screencast.pendulum.link.webm
Since the simulation runs quite fast, a loop was appended that re-sends the data with a delay. This illustrates that a job could also record live data from an experiment, so the simulation data could be compared with measurements from a real pendulum.
Screencast.pendulum.live.webm
An example plot widget for viewing 3D data is also included, see heat_equation.ipynb; it relies on ipyvolume.