Spelling fixes #183

Merged: 3 commits, May 5, 2023
4 changes: 2 additions & 2 deletions docs/source/release_notes.rst
@@ -4,7 +4,7 @@ Release Notes

2.8.4
-----
-Release date: May 4, 2023
+Release date: May 5, 2023

* Added support for creating ADSDataset from pandas dataframe.
* Added support for multi-model deployment using Triton.
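
As context for the first bullet above, creating an ``ADSDataset`` from a pandas dataframe might look roughly like the sketch below. The ``ADSDataset.from_dataframe`` constructor and its import path are assumptions inferred from the release note, not part of this diff.

.. code-block:: python

    # Hedged sketch of the 2.8.4 feature noted above; assumes the
    # ADSDataset.from_dataframe constructor and its import path.
    import pandas as pd
    from ads.dataset.dataset import ADSDataset

    df = pd.DataFrame({"x": [1, 2, 3], "y": [0, 1, 0]})
    ds = ADSDataset.from_dataframe(df)
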
@@ -272,7 +272,7 @@ Release date: March 3, 2022

Release date: February 4, 2022

-* Fixed bug in DataFlow ``Job`` creation.
+* Fixed bug in Data Flow ``Job`` creation.
* Fixed bug in ADSDataset ``get_recommendations`` raising ``HTML is not defined`` exception.
* Fixed bug in jobs ``ScriptRuntime`` causing the parent artifact folder to be zipped and uploaded instead of the specified folder.
* Fixed bug in ``ModelDeployment`` raising ``TypeError`` exception when updating an existing model deployment.
8 changes: 4 additions & 4 deletions docs/source/user_guide/apachespark/quickstart.rst
@@ -4,13 +4,13 @@ Quick Start

Data Flow is a hosted Apache Spark server. It is quick to start, and can scale to handle large datasets in parallel. ADS provides a convenient API for creating and maintaining workloads on Data Flow.

-Submit a Toy Python Script to DataFlow
-======================================
+Submit a Toy Python Script to Data Flow
+=======================================

From a Python Environment
-------------------------

-Submit a python script to DataFlow entirely from your python environment.
+Submit a python script to Data Flow entirely from your python environment.
The following snippet uses a toy python script that prints "Hello World"
followed by the spark version, 3.2.1.
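
The snippet itself is elided from this diff. As a rough sketch of what such a submission can look like with the ``ads.jobs`` API, where every OCID, bucket, and shape is a placeholder rather than a value from the original example:

.. code-block:: python

    # Minimal sketch of submitting a toy script to Data Flow with ADS.
    # All OCIDs, buckets, and shapes are placeholders.
    from ads.jobs import DataFlow, DataFlowRuntime, Job

    job = (
        Job(name="toy-hello-world")
        .with_infrastructure(
            DataFlow()
            .with_compartment_id("ocid1.compartment.oc1..<unique_id>")
            .with_driver_shape("VM.Standard2.1")
            .with_executor_shape("VM.Standard2.1")
            .with_logs_bucket_uri("oci://<bucket>@<namespace>/dataflow-logs")
            .with_spark_version("3.2.1")
        )
        .with_runtime(
            DataFlowRuntime().with_script_uri(
                "oci://<bucket>@<namespace>/toy_script.py"
            )
        )
    )
    job.create()     # creates the Data Flow application
    run = job.run()  # submits a run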

@@ -111,7 +111,7 @@ Assuming you have the following two files written in your current directory as `
Real Data Flow Example with Conda Environment
=============================================

-From PySpark v3.0.0 and onwards, Data Flow allows a published conda environment as the `Spark runtime environment <https://spark.apache.org/docs/latest/api/python/user_guide/python_packaging.html#using-conda>`_ when built with `ADS`. Data Flow supports published conda environments only. Conda packs are tar'd conda environments. When you publish your own conda packs to object storage, ensure that the DataFlow Resource has access to read the object or bucket.
+From PySpark v3.0.0 and onwards, Data Flow allows a published conda environment as the `Spark runtime environment <https://spark.apache.org/docs/latest/api/python/user_guide/python_packaging.html#using-conda>`_ when built with `ADS`. Data Flow supports published conda environments only. Conda packs are tar'd conda environments. When you publish your own conda packs to object storage, ensure that the Data Flow Resource has access to read the object or bucket.
Below is a more built-out example using conda packs:

From a Python Environment
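
The built-out example is elided from this diff. Relative to the toy script above, the essential addition is attaching the published pack; the sketch below assumes ``DataFlowRuntime.with_custom_conda`` accepts the pack's Object Storage URI, and the URI shown is a placeholder.

.. code-block:: python

    # Hedged sketch: pointing a Data Flow runtime at a published conda pack.
    # The Object Storage URI is a placeholder for your own published pack.
    from ads.jobs import DataFlowRuntime

    runtime = (
        DataFlowRuntime()
        .with_script_uri("oci://<bucket>@<namespace>/my_spark_job.py")
        .with_custom_conda("oci://<bucket>@<namespace>/conda_packs/pyspark32_p38_cpu_v1")
    )
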
2 changes: 0 additions & 2 deletions docs/source/user_guide/jobs/data_science_job.rst
@@ -8,8 +8,6 @@ Quick Start

See :doc:`policies` and `About Data Science Policies <https://docs.oracle.com/en-us/iaas/data-science/using/policies.htm>`_.

-.. include:: ../jobs/toc_local.rst

Define a Job
============
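
The definition itself is elided from this diff. A minimal sketch with the ``ads.jobs`` API, where the OCIDs, shape, and conda slug are placeholders:

.. code-block:: python

    # Hedged sketch of defining a Data Science Job; all IDs are placeholders.
    from ads.jobs import DataScienceJob, Job, ScriptRuntime

    job = (
        Job(name="my-job")
        .with_infrastructure(
            DataScienceJob()
            .with_compartment_id("ocid1.compartment.oc1..<unique_id>")
            .with_project_id("ocid1.datascienceproject.oc1..<unique_id>")
            .with_shape_name("VM.Standard2.1")
        )
        .with_runtime(
            ScriptRuntime()
            .with_source("script.py")
            .with_service_conda("pyspark32_p38_cpu_v1")
        )
    )
    job.create()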

2 changes: 1 addition & 1 deletion docs/source/user_guide/jobs/overview.rst
@@ -28,4 +28,4 @@ Each model can write its results to the Logging service or Object Storage.
Then you can run a final sequential job that uses the best model class, and trains the final model on the entire dataset.

The following sections provides details on running workloads with OCI Data Science Jobs using ADS Jobs APIs.
-You can use similar APIs to :doc:`Run a OCI DataFlow Application <../apachespark/quickstart>`.
+You can use similar APIs to :doc:`Run a OCI Data Flow Application <../apachespark/quickstart>`.
8 changes: 4 additions & 4 deletions docs/source/user_guide/jobs/yaml_schema.rst
@@ -31,8 +31,8 @@ Following is the YAML schema for validating the YAML using `Cerberus <https://do
:linenos:


-DataFlow
-========
+Data Flow
+=========

.. raw:: html
:file: ../../yaml_schema/jobs/dataFlow.html
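
For readers who want to run that validation themselves, a short sketch with Cerberus follows; both file names are hypothetical stand-ins for the schema and the job YAML under test.

.. code-block:: python

    # Hedged sketch: validating a job YAML against a Cerberus schema.
    # "job_schema.yaml" and "my_job.yaml" are hypothetical file names.
    import yaml
    from cerberus import Validator

    with open("job_schema.yaml") as f:
        schema = yaml.safe_load(f)
    with open("my_job.yaml") as f:
        document = yaml.safe_load(f)

    validator = Validator(schema)
    if not validator.validate(document):
        print(validator.errors)  # maps each failing key to its violations
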
@@ -126,8 +126,8 @@ Following is the YAML schema for validating the YAML using `Cerberus <https://do
:linenos:


-DataFlow Runtime
-----------------
+Data Flow Runtime
+-----------------

.. raw:: html
:file: ../../yaml_schema/jobs/dataFlowRuntime.html
2 changes: 1 addition & 1 deletion docs/source/yaml_schema/jobs/job.html
@@ -89,7 +89,7 @@ <h4 id="job.spec"><code>job.spec</code> schema</h4>
<code>dict</code>
</td>
<td>
-See Data Science Job or DataFlow schema.
+See Data Science Job or Data Flow schema.
</td>
</tr>

2 changes: 1 addition & 1 deletion docs/source/yaml_schema/jobs/job.yaml
@@ -15,7 +15,7 @@ spec:
type: string
infrastructure:
type: dict
-meta: See Data Science Job or DataFlow schema.
+meta: See Data Science Job or Data Flow schema.
name:
required: false
type: string
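
To make the fragment above concrete, a job document consistent with it might look like the string below; the spec values are illustrative, and ``Job.from_yaml`` is assumed to accept a YAML string.

.. code-block:: python

    # Hedged sketch: a job spec consistent with the schema fragment above.
    # Values are illustrative; Job.from_yaml is assumed to parse a YAML
    # string into an ads.jobs.Job.
    from ads.jobs import Job

    job = Job.from_yaml(
        """
        kind: job
        spec:
          name: my-job       # optional per the schema (required: false)
          infrastructure:    # dict; see the Data Science Job or Data Flow schema
            kind: infrastructure
            type: dataScienceJob
          runtime:
            kind: runtime
            type: script
        """
    )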