diff --git a/README.md b/README.md
index d676b7f3..8dcca68c 100644
--- a/README.md
+++ b/README.md
@@ -1,83 +1,90 @@
-## Installation
+# StochSS-Compute
 
-#### Docker
-
-The easiest way to get stochss-compute running is with docker. Clone the repository and run the following in the root directory:
+With StochSS-Compute, you can run GillesPy2 simulations on your own server. Results are cached and anonymized, so you
+can easily save and recall previous simulations.
 
+## Example Quick Start
+First, clone the repository.
 ```
-docker-compose up --build
+git clone https://github.com/StochSS/stochss-compute.git
+cd stochss-compute
 ```
-#### Minikube
-- first requires `minikube`, `docker`, and `kubectl` to be installed. Then:
+- If you are unfamiliar with Python virtual environments, read this [documentation](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment) first.
+- Note that you will have to activate your venv every time you run StochSS-Compute, as well as for your dask scheduler and each of its workers.
+- The following will set up the `dask-scheduler`, one `dask-worker`, the backend API server, and launch an example `jupyter` notebook.
+- Each of these must be run in a separate terminal window in the main `stochss-compute` directory.
+- Just copy and paste!
 ```
-minikube start
-cd into kubernetes directory
-kubectl apply -f api_deployment.yaml
-minikube dashboard
+# Terminal 1
+python3 -m venv venv
+source venv/bin/activate
+pip3 install -r requirements.txt
+dask-scheduler
 ```
-- Now, wait for the stochss-compute container to be created.
-
-From here, there are two ways to access the cluster.
-
-##### To set up local access:
-`minikube service --url stochss-compute-service`
-- exposes external IP (on EKS or otherwise this is handled by your cloud provider)
-- use this host and IP when calling ComputeServer()
-- first time will be slow because the dask containers have to start up
-
-##### To use ngrok to set up public access (ngrok.com to sign up for a free account and download/install):
 ```
-url=$(minikube service --url stochss-compute-service)
-ngrok http $url
+# Terminal 2
+source venv/bin/activate
+dask-worker localhost:8786
 ```
-- use this URL when calling ComputeServer()
-
-#### Manually
-
-Ensure that the following dependencies are installed with your package manager of choice:
-
-- `python-poetry`
-- `redis`
-
-Clone the repository and navigate into the new `stochss-compute` directory. Once inside, execute the following command to install the Python dependencies:
-
 ```
-poetry install
+# Terminal 3
+source venv/bin/activate
+python3 app.py
 ```
-
-And to activate the new virtual environment:
+- StochSS-Compute is now running on localhost:1234.
+
 ```
-poetry shell
+# Terminal 4
+source venv/bin/activate
+jupyter notebook --port=9999
 ```
+- This notebook will show you how to use StochSS-Compute.
+- Jupyter should launch automatically; navigate to the examples directory and open StartHere.ipynb.
+- If not, copy and paste the following URL into your web browser:
+`http://localhost:9999/notebooks/examples/StartHere.ipynb`
 
 #### Docker
 
-Once complete, both `celery` and `redis` need to be running.
-
-```
-celery -A stochss_compute.api worker -l INFO
-```
+An alternative to the method above is to use Docker. We host an image on Docker Hub that you can download and run with the following command.
 
-`redis` can be run in several ways. If you prefer a `systemd` daemon:
 ```
-sudo systemctl start redis
+docker run -p 1234:1234 mdip226/stochss-compute:latest
 ```
-Otherwise:
+- The `-p` flag publishes the container's exposed port on the host computer, as in `-p <host port>:<container port>`.
+- StochSS-Compute is now running on localhost:1234.
+
+
+
-## Usage
+
diff --git a/examples/StartHere.ipynb b/examples/StartHere.ipynb
index 18d43e7a..5bd5e2cc 100644
--- a/examples/StartHere.ipynb
+++ b/examples/StartHere.ipynb
@@ -4,16 +4,22 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Introduction \n",
+    "# Introduction to StochSS-Compute\n",
     " \n",
-    "Running a simulation with GillesPy2 requires only 2 components: a model (data), and a solver (algorithm)."
+    "If you have ever used GillesPy2 to run simulations locally, you should not have much difficulty running remote simulations, as the syntax is nearly the same. Running a simulation remotely with GillesPy2 requires 3 components: a `Model()` (your data), a `Solver()` (SSA, TauHybrid, ODE, etc.), and a running instance of a `ComputeServer()`. If you do not wish to explicitly state the solver, one will be chosen automatically."
    ]
   },
   {
-   "cell_type": "markdown",
+   "cell_type": "code",
+   "execution_count": 15,
    "metadata": {},
+   "outputs": [],
    "source": [
-    "## BASIC"
+    "import sys, os\n",
+    "import numpy\n",
+    "sys.path.append(os.path.abspath(os.path.join(os.getcwd(), '../')))\n",
+    "import gillespy2\n",
+    "from stochss_compute import RemoteSimulation, ComputeServer"
    ]
   },
   {
@@ -27,16 +33,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 8,
    "metadata": {},
    "outputs": [],
    "source": [
-    "import sys, os\n",
-    "import matplotlib.pyplot as plt\n",
-    "import numpy\n",
-    "sys.path.append(os.path.abspath(os.path.join(os.getcwd(), '../')))\n",
-    "import gillespy2\n",
-    "from stochss_compute import RemoteSimulation, ComputeServer\n",
     "class MichaelisMenten(gillespy2.Model):\n",
     "    def __init__(self, parameter_values=None):\n",
     "        #initialize Model\n",
@@ -95,39 +95,62 @@
     "        self.timespan(numpy.linspace(0,100,101))"
    ]
   },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "scrolled": true,
-    "tags": []
-   },
-   "outputs": [],
-   "source": [
-    "# Instantiate your model\n",
-    "model = MichaelisMenten()\n",
-    "model2 = MichaelisMenten()\n",
-    "\n",
-    "results = RemoteSimulation.on(ComputeServer(\n",
-    "    \"55c5-75-143-225-24.ngrok.io\")).with_model(model).run()\n",
-    "# results.plot()"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Running Simulations and Plotting"
+    "### Running Simulations and Plotting\n",
+    "\n",
+    "First, instantiate a `Model()` and a `ComputeServer()`. After you call `run()`, which returns your future `RemoteResults`, you will have to wait for the simulation to finish. Calling `wait()`, `status()`, `resolve()`, or `cancel()` allows you to interact with your results. Continue reading for more details on the various parameters that you may set."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 9,
    "metadata": {
+    "scrolled": true,
     "tags": []
    },
    "outputs": [],
    "source": [
+    "myModel = MichaelisMenten()\n",
+    "\n",
+    "myServer = ComputeServer(\"localhost\", port=1234)\n",
+    "\n",
+    "results = RemoteSimulation.on(myServer).with_model(myModel).run()\n",
+    "\n",
+    "results = results.resolve()\n",
+    "\n",
     "results.plot()"
    ]
   },
@@ -135,17 +158,16 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "While the model.run() function can be called without any given arguments, GillesPy2 provides several options for customizing your simulations. The following keyword arguments can be used in the model.run() function to customize your simulations:\n",
+    "While the model.run() function can be called without any given arguments, GillesPy2 provides several options for customizing your simulations. The following keyword arguments can be used in the model.run() function to customize your simulations:\n",
    "\n",
    "### model.run() kwargs\n",
    "**solver=[solver]** \n",
    " manually choose a solver/algorithm one of the following GillesPy2 solvers: \n",
-    " [BasicODESolver()](./BasicExamples/Michaelis-Menten_Basic_ODE.ipynb) \n",
-    " [NumPySSASolver()](./BasicExamples/Michaelis-Menten_NumPy_SSA.ipynb) \n",
-    " [SSACSolver()](./BasicExamples/Michaelis-Menten_SSA_C.ipynb) \n",
-    " [CythonSSASolver()](./BasicExamples/Michaelis-Menten_Cython_SSA.ipynb) \n",
-    " [BasicTauLeapingSolver()](./BasicExamples/Michaelis-Menten_Basic_Tau_Leaping.ipynb) \n",
-    " [BasicTauHybridSolver()](./BasicExamples/Michaelis-Menten_Basic_Tau_Hybrid.ipynb) \n",
+    " [ODESolver()](./StartingModels/Michaelis-Menten_Basic_ODE.ipynb) \n",
+    " [SSASolver()](./StartingModels/Michaelis-Menten_NumPy_SSA.ipynb) \n",
+    " [SSACSolver()](./StartingModels/Michaelis-Menten_SSA_C.ipynb) \n",
+    " [TauLeapingSolver()](./StartingModels/Michaelis-Menten_Basic_Tau_Leaping.ipynb) \n",
+    " [TauHybridSolver()](./StartingModels/Michaelis-Menten_Basic_Tau_Hybrid.ipynb) \n",
    " \n",
    "**number_of_trajectories=1** \n",
    " [int]: Number of times to run the current simulation \n",
@@ -177,20 +199,20 @@
    "GillesPy2 also offers built-in offline plotly plotting and statistical data plotting. [See the documents for more details.](https://gillespy2.readthedocs.io) \n",
    "\n",
    " \n",
-    "### solver specific kwargs\n",
-    "**BasicODESolver, BasicTauHybridSolver: integrator='lsoda'** \n",
+    "### Solver specific kwargs\n",
+    "**ODESolver, TauHybridSolver: integrator='lsoda'** \n",
    " [String]: \n",
    "integrator to be used form scipy.integrate.ode. Options include 'vode', 'zvode', 'lsoda', 'dopri5', and 'dop835'. For more details, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html \n",
    " \n",
-    "***BasicODESolver, BasicTauHybridSolver: integrator_options={}** \n",
+    "**ODESolver, TauHybridSolver: integrator_options={}** \n",
    " [dictionary]: \n",
    "contains options to the scipy integrator. for a list of options, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html \n",
    " \n",
-    "**BasicTauLeapingSolver, BasicTauHybridSolver: tau_tol=0.03** \n",
+    "**TauLeapingSolver, TauHybridSolver: tau_tol=0.03** \n",
    " [float]: \n",
    "Relative error tolerance value for calculating tau_step. value should be between 0.0-1.0 \n",
    " \n",
-    "**BasicTauHybridSolver: switch_tol=0.03** \n",
+    "**TauHybridSolver: switch_tol=0.03** \n",
    " [float]: \n",
    "Relative error tolerance value for switching between deterministic/stochastic. value should be between 0.0-1.0 \n",
    ""
   ]
  },
@@ -209,13 +231,6 @@
    "\n",
    "\n"
   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
  }
 ],
 "metadata": {
diff --git a/stochss_compute/remote_results.py b/stochss_compute/remote_results.py
index ddc1be1e..a4464831 100644
--- a/stochss_compute/remote_results.py
+++ b/stochss_compute/remote_results.py
@@ -156,5 +156,9 @@ def resolve(self) -> Results:
         return results
 
     def cancel(self):
+        """
+        Cancels the remote job.
+        """
+        # TODO
         stop_response = self.server.post(Endpoint.JOB, f"/{self.result_id}/stop")