
Commit dd2f265

Updated README and documentation.

1 parent 01fa3fd · commit dd2f265

6 files changed (+397, -42 lines)

README.md

Lines changed: 42 additions & 28 deletions
````diff
@@ -45,6 +45,32 @@ This project is under active development. See the [Documentation](https://codefl
 
 ### Run in your laptop
 
+#### Installing locally
+
+CodeFlare can be installed from PyPI.
+
+Prerequisites:
+* [Python 3.8+](https://www.python.org/downloads/)
+* [Jupyter Lab](https://jupyter.org) *(to run examples)*
+
+We recommend installing Python 3.8.7 using
+[pyenv](https://github.com/pyenv/pyenv).
+
+
+Install from PyPI:
+```bash
+pip3 install --upgrade codeflare
+```
+
+
+Alternatively, you can also build locally with:
+```shell
+git clone https://github.com/project-codeflare/codeflare.git
+pip3 install --upgrade pip
+pip3 install .
+pip3 install -r requirements.txt
+```
+
 #### Using Docker
 
 You can try CodeFlare by running the docker image from [Docker Hub](https://hub.docker.com/r/projectcodeflare/codeflare/tags):
````
````diff
@@ -66,8 +92,6 @@ It should produce an output similar to the one below, where you can then find th
 or http://127.0.0.1:8888/?token=<token>
 ```
 
-Once the notebook is loaded, you can find a selection of examples in `codeflare/notebooks`, which can be executed directly from Jupyter environment. As a first example, we recommend the `sample_pipeline.ipynb` notebook.
-
 #### Using Binder service
 
 You can try out some of CodeFlare features using the My Binder service.
````
````diff
@@ -76,45 +100,35 @@ Click on a link below to try CodeFlare, on a sandbox environment, without having
 
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/project-codeflare/codeflare.git/main)
 
-#### Instaling locally
+## Pipeline execution and scaling
 
-CodeFlare can be installed from PyPI.
+<p align="center">
+<img src="./images/pipelines.svg" width="296" height="180">
+</p>
 
-Prerequisites:
-* [Python 3.8+](https://www.python.org/downloads/)
-* [Jupyter Lab](https://www.python.org/downloads/) *(to run examples)*
+CodeFlare Pipelines reimagines pipelines, providing a more intuitive API for data scientists to create AI/ML pipelines, data workflows, and pre- and post-processing tasks that scale seamlessly from a laptop to a cluster.
 
-We recommend installing Python 3.8.7 using
-[pyenv](https://github.com/pyenv/pyenv).
+The API documentation can be found [here](https://codeflare.readthedocs.io/en/latest/codeflare.pipelines.html), and reference examples [here](https://codeflare.readthedocs.io/en/latest).
 
+Examples are provided as executable notebooks here: [notebooks](./notebooks).
 
-Install from PyPI:
-```bash
-pip3 install --upgrade codeflare
-```
-
-
-Alternatively, you can also build locally with:
+Examples can be run locally with:
 ```shell
-git clone https://github.com/project-codeflare/codeflare.git
-pip3 install --upgrade pip
-pip3 install .
-pip3 install -r requirements.txt
+jupyter-lab notebooks/<example_notebook>
 ```
 
-Run the sample pipelines with:
-```shell
-jupyter-lab notebooks/<example_notabook>
-```
+If running with the container image, examples are found in `codeflare/notebooks` and can be executed directly from the Jupyter environment.
 
-The pipeline will use `ray.init()` to start a local Ray cluster. See [configuring Ray](https://docs.ray.io/en/master/configure.html) to ensure you are able to run a Ray cluster locally.
+As a first example, we recommend the `sample_pipeline.ipynb` notebook.
 
-### Scale in the cloud
+The pipeline will use `ray.init()` to start a local Ray cluster. See [configuring Ray](https://docs.ray.io/en/master/configure.html) to ensure you are able to run a Ray cluster locally.
 
-Unleash the power of pipelines by seamlessly scaling on the cloud. CodeFlare can be deployed with IBM Cloud Code Engine, a fully managed, serverless platform that runs your containerized workloads.
+## Deploy and integrate anywhere
 
-Go to [CodeFlare on IBM Code Engine](./deploy/ibm_cloud_code_engine) for detailed instructions on how to run CodeFlare at scale.
+Unleash the power of pipelines by seamlessly scaling on the cloud. CodeFlare can be deployed on any Kubernetes-based platform, including [IBM Cloud Code Engine](https://www.ibm.com/cloud/code-engine) and [Red Hat OpenShift Container Platform](https://www.openshift.com).
 
+- [IBM Cloud Code Engine](./deploy/ibm_cloud_code_engine) for detailed instructions on how to run CodeFlare on a serverless platform.
+- [Red Hat OpenShift](./deploy/redhat_openshift) for detailed instructions on how to run CodeFlare on OpenShift Container Platform.
 
 ## Contributing
 
````

deploy/ibm_cloud_code_engine/README.md

Lines changed: 4 additions & 2 deletions
````diff
@@ -90,10 +90,12 @@ kubectl -n $NAMESPACE port-forward <ray-cluster-name> 8899:8899
 Set up access to Jupyter notebook:
 < how to obtain the token >
 
-
+<!--
 To access Ray dashboard, do:
 
-
 In your browser, go to:
+-->
+
+Once in a Jupyter environment, refer to [notebooks](../../notebooks) for example pipelines. Documentation for reference use cases can be found in [Examples](https://codeflare.readthedocs.io/en/latest/).
 
 
````
deploy/ibm_cloud_code_engine/example-cluster.yaml

Lines changed: 2 additions & 2 deletions
```diff
@@ -77,7 +77,7 @@ available_node_types:
       containers:
       - name: ray-node
         imagePullPolicy: Always
-        image: rayproject/ray:nightly
+        image: projectcodeflare/codeflare:latest
         command: ["/bin/bash", "-c", "--"]
         args: ["trap : TERM INT; sleep infinity & wait;"]
         # This volume allocates shared memory for Ray to use for its plasma
```
```diff
@@ -134,7 +134,7 @@ available_node_types:
       containers:
       - name: ray-node
         imagePullPolicy: Always
-        image: rayproject/ray:nightly
+        image: projectcodeflare/codeflare:latest
         # Do not change this command - it keeps the pod alive until it is
         # explicitly killed.
         command: ["/bin/bash", "-c", "--"]
```

deploy/redhat_openshift/README.md

Lines changed: 165 additions & 0 deletions
New file contents (165 additions):

````markdown
# CodeFlare on OpenShift Container Platform (OCP)

Several installation and deployment targets are provided below.

- [Ray Cluster Using Operator on Openshift](#Openshift-Ray-Cluster-Operator)
- [Ray Cluster on Openshift](#Openshift-Cluster)
- [Ray Cluster on Openshift for Jupyter](#Jupyter)

## Openshift Ray Cluster Operator

Deploy the [Ray Operator](https://docs.ray.io/en/master/cluster/kubernetes.html?highlight=operator#the-ray-kubernetes-operator).

## Openshift Cluster

### Dispatch Ray Cluster on Openshift

#### Prerequisites
- Access to an OpenShift cluster
- Python 3.8+

We recommend installing Python 3.8.7 using
[pyenv](https://github.com/pyenv/pyenv).

<p>&nbsp;</p>

#### Setup

1. Install CodeFlare

Install from PyPI:
```bash
pip3 install --upgrade codeflare
```

Alternatively, you can also build locally with:
```shell
git clone https://github.com/project-codeflare/codeflare.git
pip3 install --upgrade pip
pip3 install .
pip3 install -r requirements.txt
```

<p>&nbsp;</p>

2. Create a cluster (see the [Ray docs](https://docs.ray.io/en/master/cluster/cloud.html#kubernetes))

Assuming OpenShift cluster access from the prerequisites:

a) Create a namespace

```
$ oc create namespace codeflare
namespace/codeflare created
$
```

b) Bring up the Ray cluster

```
$ ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml
Cluster: default

Checking Kubernetes environment settings
2021-02-09 06:40:09,612 INFO config.py:169 -- KubernetesNodeProvider: using existing namespace 'ray'
2021-02-09 06:40:09,671 INFO config.py:202 -- KubernetesNodeProvider: autoscaler_service_account 'autoscaler' not found, attempting to create it
2021-02-09 06:40:09,738 INFO config.py:204 -- KubernetesNodeProvider: successfully created autoscaler_service_account 'autoscaler'
2021-02-09 06:40:10,196 INFO config.py:228 -- KubernetesNodeProvider: autoscaler_role 'autoscaler' not found, attempting to create it
2021-02-09 06:40:10,265 INFO config.py:230 -- KubernetesNodeProvider: successfully created autoscaler_role 'autoscaler'
2021-02-09 06:40:10,573 INFO config.py:261 -- KubernetesNodeProvider: autoscaler_role_binding 'autoscaler' not found, attempting to create it
2021-02-09 06:40:10,646 INFO config.py:263 -- KubernetesNodeProvider: successfully created autoscaler_role_binding 'autoscaler'
2021-02-09 06:40:10,704 INFO config.py:294 -- KubernetesNodeProvider: service 'ray-head' not found, attempting to create it
2021-02-09 06:40:10,788 INFO config.py:296 -- KubernetesNodeProvider: successfully created service 'ray-head'
2021-02-09 06:40:11,098 INFO config.py:294 -- KubernetesNodeProvider: service 'ray-workers' not found, attempting to create it
2021-02-09 06:40:11,185 INFO config.py:296 -- KubernetesNodeProvider: successfully created service 'ray-workers'
No head node found. Launching a new cluster. Confirm [y/N]: y

Acquiring an up-to-date head node
2021-02-09 06:40:14,396 INFO node_provider.py:113 -- KubernetesNodeProvider: calling create_namespaced_pod (count=1).
Launched a new head node
Fetching the new head node

<1/1> Setting up head node
Prepared bootstrap config
New status: waiting-for-ssh
[1/7] Waiting for SSH to become available
Running `uptime` as a test.
2021-02-09 06:40:15,296 INFO command_runner.py:171 -- NodeUpdater: ray-head-ql46b: Running kubectl -n ray exec -it ray-head-ql46b -- bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (uptime)'
error: unable to upgrade connection: container not found ("ray-node")
SSH still not available (Exit Status 1): kubectl -n ray exec -it ray-head-ql46b -- bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (uptime)', retrying in 5 seconds.
2021-02-09 06:40:22,197 INFO command_runner.py:171 -- NodeUpdater: ray-head-ql46b: Running kubectl -n ray exec -it ray-head-ql46b -- bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (uptime)'

03:41:41 up 81 days, 14:25, 0 users, load average: 1.42, 0.87, 0.63
Success.
Updating cluster configuration. [hash=16487b5e0285fc46d5f1fd6da0370b2f489a6e5f]
New status: syncing-files
[2/7] Processing file mounts
2021-02-09 06:41:42,330 INFO command_runner.py:171 -- NodeUpdater: ray-head-ql46b: Running kubectl -n ray exec -it ray-head-ql46b -- bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (mkdir -p ~)'
[3/7] No worker file mounts to sync
New status: setting-up
[4/7] No initialization commands to run.
[5/7] Initalizing command runner
[6/7] No setup commands to run.
[7/7] Starting the Ray runtime
2021-02-09 06:42:10,643 INFO command_runner.py:171 -- NodeUpdater: ray-head-ql46b: Running kubectl -n ray exec -it ray-head-ql46b -- bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (export RAY_OVERRIDE_RESOURCES='"'"'{"CPU":1,"GPU":0}'"'"';ray stop)'
Did not find any active Ray processes.
2021-02-09 06:42:13,845 INFO command_runner.py:171 -- NodeUpdater: ray-head-ql46b: Running kubectl -n ray exec -it ray-head-ql46b -- bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (export RAY_OVERRIDE_RESOURCES='"'"'{"CPU":1,"GPU":0}'"'"';ulimit -n 65536; ray start --head --num-cpus=$MY_CPU_REQUEST --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml --dashboard-host 0.0.0.0)'
Local node IP: 172.30.236.163
2021-02-09 03:42:17,373 INFO services.py:1195 -- View the Ray dashboard at http://172.30.236.163:8265

--------------------
Ray runtime started.
--------------------

Next steps
  To connect to this Ray runtime from another node, run
    ray start --address='172.30.236.163:6379' --redis-password='5241590000000000'

  Alternatively, use the following Python code:
    import ray
    ray.init(address='auto', _redis_password='5241590000000000')

  If connection fails, check your firewall settings and network configuration.

  To terminate the Ray runtime, run
    ray stop
New status: up-to-date

Useful commands
  Monitor autoscaling with
    ray exec /Users/darroyo/git_workspaces/github.com/ray-project/ray/python/ray/autoscaler/kubernetes/example-full.yaml 'tail -n 100 -f /tmp/ray/session_latest/logs/monitor*'
  Connect to a terminal on the cluster head:
    ray attach /Users/darroyo/git_workspaces/github.com/ray-project/ray/python/ray/autoscaler/kubernetes/example-full.yaml
  Get a remote shell to the cluster manually:
    kubectl -n ray exec -it ray-head-ql46b -- bash
```

<p>&nbsp;</p>

3. Verify

a) Check for the head node

```
$ oc get pods
NAME             READY   STATUS    RESTARTS   AGE
ray-head-ql46b   1/1     Running   0          118m
$
```

b) Run an example test

```
ray submit python/ray/autoscaler/kubernetes/example-full.yaml x.py
Loaded cached provider configuration
If you experience issues with the cloud provider, try re-running the command with --no-config-cache.
2021-02-09 08:50:51,028 INFO command_runner.py:171 -- NodeUpdater: ray-head-ql46b: Running kubectl -n ray exec -it ray-head-ql46b -- bash --login -c -i 'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (python ~/x.py)'
2021-02-09 05:52:10,538 INFO worker.py:655 -- Connecting to existing Ray cluster at address: 172.30.236.163:6379
[0, 1, 4, 9]
```

### Jupyter

Jupyter setup demo: [Reference repository](https://github.com/erikerlandson/ray-odh-demo)

### Running examples

Once in a Jupyter environment, refer to [notebooks](../../notebooks) for example pipelines. Documentation for reference use cases can be found in [Examples](https://codeflare.readthedocs.io/en/latest/).
````

docs/source/getting_started/installation.md

Lines changed: 4 additions & 4 deletions
*(a whitespace-only change; leading indentation is not visible in this view)*

````diff
@@ -63,10 +63,10 @@ We recommend installing Python 3.8.7 using
 [pyenv](https://github.com/pyenv/pyenv).
 
 
-Install from PyPI:
-```bash
-pip3 install --upgrade codeflare
-```
+Install from PyPI:
+```bash
+pip3 install --upgrade codeflare
+```
 
 
 Alternatively, you can also build locally with:
````
