
Commit: Update jupyterhub.md

mtrahan41 committed Dec 17, 2020
1 parent 8e5e17a commit 36ef421
Showing 1 changed file with 1 addition and 1 deletion: docs/gateways/jupyterhub.md
@@ -17,7 +17,7 @@ To start a notebook server, select one of the available options in the *Select j
* __Blanca (12hr)__ (A 12-hour, 1 core job on your default Blanca partition; only available to Blanca users)
* __Blanca CSDMS (12hr)__ (A 12-hour, 1 core job on the Blanca CSDMS partition; only available to Blanca CSDMS users)

- ___Note__: The "Summit interactive (12hr)" option spawns a 1-core job on an oversubscribed partition on Summit called "shas-interactive". This partition is intended to provide "instant" access to computing resources for JupyterHub users. The caveats are that 1) users may only run one "shas-interactive" job at a time, and 2) "shas-interactive" jobs have only 1 core and 1.2 GB of memory allocated to them. Therefore, this option works well for light work such as interactive code development and small processing tasks, but jobs may crash if large files are ingested or memory-intensive computing is conducted. If this is your case, please consider submitting your workflow via a batch job on Summit, or try the "Summit Haswell (1 node, 12hr)" option (queue waits will be longer for this option). Dask users should either submit their workflows via a batch job on Summit, or use the "Summit Haswell (1 node, 12hr)" option, because it provides 24 cores to the Dask array. Using "shas-interactive" for Dask jobs would provide only one core to the Dask array, negating its utility._
+ ___Note__: The "Summit interactive (12hr)" option spawns a 1-core job on an oversubscribed partition on Summit called "shas-interactive". This partition is intended to provide "instant" access to computing resources for JupyterHub users. The caveats are that 1) users may only run one "shas-interactive" job at a time, and 2) "shas-interactive" jobs have only 1 core and 1.2 GB of memory allocated to them. Therefore, this option works well for light work such as interactive code development and small processing tasks, but jobs may crash if large files are ingested or memory-intensive computing is conducted. If this is your case, please consider running your workflow via a batch job on Summit, or try the "Summit Haswell (1 node, 12hr)" option (queue waits will be longer for this option). Dask users should either run their workflows via a batch job on Summit, or use the "Summit Haswell (1 node, 12hr)" option, because it provides 24 cores to the Dask array. Using "shas-interactive" for Dask jobs would provide only one core to the Dask array, negating its utility._

The server will take a few moments to start. When it does, you will be taken to the Jupyter home screen, which will show the contents of your CURC `/home` directory in the left menu bar. In the main work area on the right hand side you will see the "Launcher" and any other tabs you may have open from previous sessions.

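The note in the diff above turns on how many cores the spawned job gets (1 on "shas-interactive", 24 on a Haswell node). A minimal sketch of why that matters, using only the Python standard library rather than Dask itself (the pool size, chunking, and `chunk_sum` helper are illustrative assumptions, not part of the CURC docs):

```python
# Editor's illustration, not part of the commit: a worker pool can only
# run as many chunks in parallel as the job has cores available.
from concurrent.futures import ProcessPoolExecutor
import os

def chunk_sum(chunk):
    # Hypothetical per-chunk task, standing in for one Dask task.
    return sum(chunk)

data = list(range(1_000_000))
chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

# os.cpu_count() reflects the job's allocation: 1 core on
# "shas-interactive" serializes the chunks; 24 cores on a Haswell
# node lets them run concurrently.
with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
    total = sum(pool.map(chunk_sum, chunks))

print(total)  # 499999500000
```

With a single core the ten chunks simply queue behind one worker, which is the "negating its utility" situation the note describes for Dask on "shas-interactive".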
