4 changes: 4 additions & 0 deletions .github/workflows/mkdocs-release.yml
@@ -4,6 +4,10 @@ on:
push:
branches: [branch-*\.*]

concurrency:
group: ${{ github.workflow }}
cancel-in-progress: false

jobs:
publish-release:
runs-on: ubuntu-latest
36 changes: 22 additions & 14 deletions docs/user_guides/projects/jobs/notebook_job.md
@@ -6,13 +6,18 @@ description: Documentation on how to configure and execute a Jupyter Notebook jo

## Introduction

This guide describes how to configure a job to execute a Jupyter Notebook (.ipynb) and visualize the evaluated notebook.

All members of a project in Hopsworks can launch the following types of applications through a project's Jobs service:

- Python
- Apache Spark
- Ray

Launching a job of any type is a very similar process; what mostly differs between job types is
the various configuration parameters each job type comes with. After following this guide you will be able to create a Jupyter Notebook job.
the various configuration parameters each job type comes with. Hopsworks supports scheduling jobs to run on a regular basis,
e.g. backfilling a Feature Group by running your feature engineering pipeline nightly. Scheduling can be done both through the UI and the Python API;
check out [our Scheduling guide](schedule_job.md).
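
For example, a minimal sketch of scheduling an existing notebook job through the Python API is shown below; the job name is purely illustrative, and the exact `schedule()` parameters are documented in the scheduling guide.

```python
import datetime
import hopsworks

project = hopsworks.login()
jobs_api = project.get_jobs_api()

# Fetch an existing notebook job (the name used here is hypothetical)
job = jobs_api.get_job("nightly_feature_engineering")

# Run the notebook every night at midnight UTC using a Quartz-style cron expression
job.schedule(
    cron_expression="0 0 0 * * ?",
    start_time=datetime.datetime.now(tz=datetime.timezone.utc),
)
```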

## UI

@@ -167,19 +172,22 @@ execution = job.run(args='-p a 2 -p b 5', await_termination=True)
```

## Configuration
The following table describes the JSON payload returned by `jobs_api.get_configuration("PYTHON")`

| Field | Type | Description | Default |
|-------------------------|----------------|------------------------------------------------------|--------------------------|
| `type` | string | Type of the job configuration | `"pythonJobConfiguration"` |
| `appPath` | string | Project path to notebook (e.g `Resources/foo.ipynb`) | `null` |
| `environmentName` | string | Name of the python environment | `"pandas-training-pipeline"` |
| `resourceConfig.cores` | number (float) | Number of CPU cores to be allocated | `1.0` |
| `resourceConfig.memory` | number (int) | Number of MBs to be allocated | `2048` |
| `resourceConfig.gpus` | number (int) | Number of GPUs to be allocated | `0` |
| `logRedirection` | boolean | Whether logs are redirected | `true` |
| `jobType` | string | Type of job | `"PYTHON"` |
| `files` | string | HDFS path(s) to files to be provided to the Notebook Job. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/file1.py,hdfs:///Project/<project_name>/Resources/file2.txt"` | `null` |
The following table describes the job configuration parameters for a PYTHON job.

`conf = jobs_api.get_configuration("PYTHON")`

| Field | Type | Description | Default |
|-------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
| <nobr>`conf['type']`</nobr> | string | Type of the job configuration | `"pythonJobConfiguration"` |
| <nobr>`conf['appPath']`</nobr> | string | Project relative path to notebook (e.g., `Resources/foo.ipynb`) | `null` |
| <nobr>`conf['defaultArgs']`</nobr> | string | Arguments to pass to the notebook.<br>Will be overridden if arguments are passed explicitly via `Job.run(args="...")`.<br>Must conform to Papermill format `-p arg1 val1` | `null` |
| <nobr>`conf['environmentName']`</nobr> | string | Name of the project Python environment to use | `"pandas-training-pipeline"` |
| <nobr>`conf['resourceConfig']['cores']`</nobr> | float | Number of CPU cores to be allocated | `1.0` |
| <nobr>`conf['resourceConfig']['memory']`</nobr> | int | Number of MBs to be allocated | `2048` |
| <nobr>`conf['resourceConfig']['gpus']`</nobr> | int | Number of GPUs to be allocated | `0` |
| <nobr>`conf['logRedirection']`</nobr> | boolean | Whether logs are redirected | `true` |
| <nobr>`conf['jobType']`</nobr> | string | Type of job | `"PYTHON"` |
| <nobr>`conf['files']`</nobr> | string | Comma-separated string of HDFS path(s) to files to be made available to the application. Example: `hdfs:///Project/<project>/Resources/file1.py,...` | `null` |
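
As a sketch of how these fields fit together, the snippet below starts from the default configuration, overrides a few values and creates the job; the notebook path and job name are hypothetical, and `jobs_api` is the handle from the earlier example.

```python
# Start from the default PYTHON job configuration and override selected fields
conf = jobs_api.get_configuration("PYTHON")
conf['appPath'] = "Resources/foo.ipynb"      # notebook to execute (hypothetical path)
conf['defaultArgs'] = "-p a 2 -p b 5"        # Papermill-style default arguments
conf['resourceConfig']['memory'] = 4096      # allocate 4096 MB instead of the default 2048

# Create the job and run it, blocking until the execution finishes
job = jobs_api.create_job("notebook_job_example", conf)
execution = job.run(await_termination=True)
```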


## Accessing project data
48 changes: 27 additions & 21 deletions docs/user_guides/projects/jobs/pyspark_job.md
@@ -6,10 +6,13 @@ description: Documentation on how to configure and execute a PySpark job on Hops

## Introduction

This guide describes how to configure a job to execute a PySpark script inside the cluster.

All members of a project in Hopsworks can launch the following types of applications through a project's Jobs service:

- Python
- Apache Spark
- Ray

Launching a job of any type is a very similar process; what mostly differs between job types is
the various configuration parameters each job type comes with. Hopsworks clusters support scheduling jobs to run on a regular basis,
@@ -212,27 +215,30 @@ print(f_err.read())
```

## Configuration
The following table describes the JSON payload returned by `jobs_api.get_configuration("PYSPARK")`

| Field | Type | Description | Default |
| ------------------------------------------ | -------------- |-----------------------------------------------------| -------------------------- |
| `type` | string | Type of the job configuration | `"sparkJobConfiguration"` |
| `appPath` | string | Project path to script (e.g `Resources/foo.py`) | `null` |
| `environmentName` | string | Name of the project spark environment | `"spark-feature-pipeline"` |
| `spark.driver.cores` | number (float) | Number of CPU cores allocated for the driver | `1.0` |
| `spark.driver.memory` | number (int) | Memory allocated for the driver (in MB) | `2048` |
| `spark.executor.instances` | number (int) | Number of executor instances | `1` |
| `spark.executor.cores` | number (float) | Number of CPU cores per executor | `1.0` |
| `spark.executor.memory` | number (int) | Memory allocated per executor (in MB) | `4096` |
| `spark.dynamicAllocation.enabled` | boolean | Enable dynamic allocation of executors | `true` |
| `spark.dynamicAllocation.minExecutors` | number (int) | Minimum number of executors with dynamic allocation | `1` |
| `spark.dynamicAllocation.maxExecutors` | number (int) | Maximum number of executors with dynamic allocation | `2` |
| `spark.dynamicAllocation.initialExecutors` | number (int) | Initial number of executors with dynamic allocation | `1` |
| `spark.blacklist.enabled` | boolean | Whether executor/node blacklisting is enabled | `false` |
| `files` | string | HDFS path(s) to files to be provided to the Spark application. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/file1.py,hdfs:///Project/<project_name>/Resources/file2.txt"` | `null` |
| `pyFiles` | string | HDFS path(s) to Python files to be provided to the Spark application. These will be added to the `PYTHONPATH` so they can be imported as modules. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/module1.py,hdfs:///Project/<project_name>/Resources/module2.py"` | `null` |
| `jars` | string | HDFS path(s) to JAR files to be provided to the Spark application. These will be added to the classpath. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/lib1.jar,hdfs:///Project/<project_name>/Resources/lib2.jar"` | `null` |
| `archives` | string | HDFS path(s) to archive files to be provided to the Spark application. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/archive1.zip,hdfs:///Project/<project_name>/Resources/archive2.tar.gz"` | `null` |
The following table describes the job configuration parameters for a PYSPARK job.

`conf = jobs_api.get_configuration("PYSPARK")`

| Field | Type | Description | Default |
|----------------------------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|
| <nobr>`conf['type']`</nobr> | string | Type of the job configuration | `"sparkJobConfiguration"` |
| <nobr>`conf['appPath']`</nobr>                      | string  | Project relative path to the Spark program (e.g., `Resources/foo.py`)                                                                                                 | `null`                     |
| <nobr>`conf['defaultArgs']`</nobr> | string | Arguments to pass to the program. Will be overridden if arguments are passed explicitly via `Job.run(args="...")` | `null` |
| <nobr>`conf['environmentName']`</nobr> | string | Name of the project spark environment to use | `"spark-feature-pipeline"` |
| <nobr>`conf['spark.driver.cores']`</nobr> | float | Number of CPU cores allocated for the driver | `1.0` |
| <nobr>`conf['spark.driver.memory']`</nobr> | int | Memory allocated for the driver (in MB) | `2048` |
| <nobr>`conf['spark.executor.instances']`</nobr> | int | Number of executor instances | `1` |
| <nobr>`conf['spark.executor.cores']`</nobr> | float | Number of CPU cores per executor | `1.0` |
| <nobr>`conf['spark.executor.memory']`</nobr> | int | Memory allocated per executor (in MB) | `4096` |
| <nobr>`conf['spark.dynamicAllocation.enabled']`</nobr> | boolean | Enable dynamic allocation of executors | `true` |
| <nobr>`conf['spark.dynamicAllocation.minExecutors']`</nobr> | int | Minimum number of executors with dynamic allocation | `1` |
| <nobr>`conf['spark.dynamicAllocation.maxExecutors']`</nobr> | int | Maximum number of executors with dynamic allocation | `2` |
| <nobr>`conf['spark.dynamicAllocation.initialExecutors']`</nobr> | int | Initial number of executors with dynamic allocation | `1` |
| <nobr>`conf['spark.blacklist.enabled']`</nobr> | boolean | Whether executor/node blacklisting is enabled | `false` |
| <nobr>`conf['files']`</nobr> | string | Comma-separated string of HDFS path(s) to files to be made available to the application. Example: `hdfs:///Project/<project_name>/Resources/file1.py,...` | `null` |
| <nobr>`conf['pyFiles']`</nobr> | string | Comma-separated string of HDFS path(s) to python modules to be made available to the application. Example: `hdfs:///Project/<project_name>/Resources/file1.py,...` | `null` |
| <nobr>`conf['jars']`</nobr> | string | Comma-separated string of HDFS path(s) to jars to be included in CLASSPATH. Example: `hdfs:///Project/<project_name>/Resources/app.jar,...` | `null` |
| <nobr>`conf['archives']`</nobr> | string | Comma-separated string of HDFS path(s) to archives to be made available to the application. Example: `hdfs:///Project/<project_name>/Resources/archive.zip,...` | `null` |
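
As with the Python job above, the returned dictionary can be adjusted before creating the job. The sketch below assumes the `jobs_api` handle from the earlier examples; the script path and job name are hypothetical.

```python
# Start from the default PYSPARK job configuration and tune executor resources
conf = jobs_api.get_configuration("PYSPARK")
conf['appPath'] = "Resources/foo.py"                 # PySpark script to run (hypothetical path)
conf['spark.executor.memory'] = 8192                 # 8192 MB per executor instead of the default 4096
conf['spark.dynamicAllocation.maxExecutors'] = 4     # allow scaling out to four executors

job = jobs_api.create_job("pyspark_job_example", conf)
execution = job.run(await_termination=True)
```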


## Accessing project data