Merged
4 changes: 4 additions & 0 deletions .github/dockerhub-readmes.json
@@ -39,6 +39,10 @@
{
"fname": "./preset/README.md",
"repo-name": "intel/inference-optimization"
},
{
"fname": "./workflows/README.md",
"repo-name": "intel/ai-workflows"
}
]
}
13 changes: 10 additions & 3 deletions README.md
@@ -2,9 +2,9 @@

[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/8270/badge)](https://www.bestpractices.dev/projects/8270)
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/intel/ai-containers/badge)](https://securityscorecards.dev/viewer/?uri=github.com/intel/ai-containers)
[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fintel%2Fai-containers.svg?type=shield&issueType=license)](https://app.fossa.com/projects/git%2Bgithub.com%2Fintel%2Fai-containers?ref=badge_shield&issueType=license)
[![CodeQL](https://github.com/intel/ai-containers/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/intel/ai-containers/actions/workflows/github-code-scanning/codeql)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/intel/ai-containers/main.svg)](https://results.pre-commit.ci/latest/github/intel/ai-containers/main)
[![CodeQL](https://github.com/intel/ai-containers/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/intel/ai-containers/actions/workflows/github-code-scanning/codeql)
[![Lint](https://github.com/intel/ai-containers/actions/workflows/lint.yaml/badge.svg)](https://github.com/intel/ai-containers/actions/workflows/lint.yaml)
[![Test Runner CI](https://github.com/intel/ai-containers/actions/workflows/test-runner-ci.yaml/badge.svg)](https://github.com/intel/ai-containers/actions/workflows/test-runner-ci.yaml)
[![Helm Chart CI](https://github.com/intel/ai-containers/actions/workflows/chart-ci.yaml/badge.svg)](https://github.com/intel/ai-containers/actions/workflows/chart-ci.yaml)
[![Weekly Tests](https://github.com/intel/ai-containers/actions/workflows/weekly-test.yaml/badge.svg)](https://github.com/intel/ai-containers/actions/workflows/weekly-test.yaml)
@@ -17,14 +17,18 @@ Define your project's registry and repository each time you use the project:

```bash
# REGISTRY/REPO:TAG
export CACHE_REGISTRY=<cache_registry_name>
export REGISTRY=<registry_name>
export REPO=<repo_name>
```

The maintainers of Intel® AI Containers use [Harbor](https://github.com/goharbor/harbor) to store containers.

> [!NOTE]
> `REGISTRY` and `REPO` are used to authenticate with the private registry needed to push completed container layers and save them for testing and publication. For example, `REGISTRY=intel && REPO=intel-extension-for-pytorch` would become `intel/intel-extension-for-pytorch` as the container image name, followed by the tag generated from the service found in that project's compose file.
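
As an illustrative sketch (the values here are placeholders, not required settings), the pieces compose into the image name like this:

```shell
# Placeholder values for illustration -- substitute your own registry and repo.
export REGISTRY=intel
export REPO=intel-extension-for-pytorch

# Images are tagged as REGISTRY/REPO:TAG, so the name becomes:
IMAGE="${REGISTRY}/${REPO}"
echo "${IMAGE}"   # intel/intel-extension-for-pytorch
```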

> [!WARNING]
> You can optionally skip this step and use placeholder values; however, some container groups depend on other images and will pull from a registry you have not defined, resulting in an error.

### Set Up Docker Engine

You'll need to install Docker Engine on your development system. Note that while **Docker Engine** is free to use, **Docker Desktop** may require you to purchase a license. See the [Docker Engine Server installation instructions](https://docs.docker.com/engine/install/#server) for details.
@@ -61,6 +65,9 @@ cd pytorch
PACKAGE_OPTION=idp docker compose up --build ipex-base
```

> [!NOTE]
> If you didn't specify `REGISTRY` or `REPO`, you also need to add the `idp` service to the command to build the dependent Python image.

## Test Containers

To test the containers, use the [Test Runner Framework](https://github.com/intel/ai-containers/tree/main/test-runner):
3 changes: 3 additions & 0 deletions docs/README.md
@@ -24,6 +24,9 @@ mkdocs serve --no-livereload

The documentation will be available at [http://localhost:8000](http://localhost:8000).

> [!CAUTION]
> Docker Compose `v2.26.1` is the minimum required version.
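
A quick guard along these lines can check the minimum before building the docs; the `version_ge` helper is a sketch (it assumes GNU `sort -V`), and the literal version stands in for the output of `docker compose version --short`:

```shell
# Sketch: true when version $1 is at least version $2 (assumes GNU sort -V).
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Substitute the output of `docker compose version --short` for this literal.
compose_version="2.27.1"
if version_ge "${compose_version}" "2.26.1"; then
  echo "docker compose is new enough"
else
  echo "upgrade docker compose to v2.26.1 or later"
fi
```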

## Labelling Containers

To customize the tables in the [Support Matrix](./matrix.md), add labels to the services found in each container group's `docker-compose.yaml` file. The `docker compose config` command is run to gather all of the metadata from the container group. Labels specify the public metadata for each container, and the tables are then generated based on the `.actions.json` file found in the same directory.
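
For illustration, a labels stanza in a hypothetical service might look like the following (the service name and values are invented for the example; the label keys mirror ones used elsewhere in this repository):

```yaml
services:
  example-base:
    build:
      context: .
    labels:
      # Public metadata picked up by `docker compose config` for the Support Matrix
      dependency.python: "3.10"
      docs: example
      org.opencontainers.image.title: "Example Container"
```

Running `docker compose config` in that group's directory then surfaces these labels to the table generator.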
18 changes: 8 additions & 10 deletions docs/matrix.md
@@ -1,13 +1,13 @@
# Support Matrix

=== "Framework Containers"
=== "Python"
=== "[Python](https://hub.docker.com/r/intel/python)"
{{ read_csv('assets/python.csv') }}
=== "Classical ML"
=== "[Classical ML](https://hub.docker.com/r/intel/intel-optimized-ml)"
{{ read_csv('assets/classical-ml.csv') }}
=== "PyTorch"
=== "[PyTorch](https://hub.docker.com/r/intel/intel-optimized-pytorch)"
{{ read_csv('assets/pytorch.csv') }}
=== "TensorFlow"
=== "[TensorFlow](https://hub.docker.com/r/intel/intel-optimized-tensorflow)"
{{ read_csv('assets/tensorflow.csv') }}

=== "Model Containers"
@@ -22,7 +22,7 @@
=== "Max"
{{ read_csv('assets/max-pvc.csv') }}

=== "Preset Containers"
=== "[Preset Containers](https://github.com/intel/ai-containers/blob/main/preset/README.md)"
=== "Data Analytics"
{{ read_csv('assets/data_analytics.csv') }}
=== "Classical ML"
@@ -32,10 +32,8 @@
=== "Inference Optimization"
{{ read_csv('assets/inference_optimization.csv') }}

=== "Other"
=== "Serving"
=== "[Workflows](https://hub.docker.com/r/intel/ai-workflows)"
=== "[TorchServe](https://github.com/intel/ai-containers/tree/main/workflows/charts/torchserve)"
{{ read_csv('assets/serving.csv') }}
=== "Transformers"
{{ read_csv('assets/transformers.csv') }}
=== "GenAI"
=== "[Huggingface LLM](https://github.com/intel/ai-containers/tree/main/workflows/charts/huggingface-llm)"
{{ read_csv('assets/genai.csv') }}
3 changes: 1 addition & 2 deletions docs/scripts/hook.py
@@ -40,8 +40,7 @@ def create_support_matrix():
# compose_to_csv("preset/classical-ml", "classical_ml")
# compose_to_csv("preset/deep-learning", "deep_learning")
# compose_to_csv("preset/inference-optimization", "inference_optimization")
# get_repo(transformers)
# get_repo(genai)
compose_to_csv("workflows/charts/huggingface-llm", "genai")


def on_pre_build(*args, **kwargs):
3 changes: 2 additions & 1 deletion docs/scripts/matrix.py
@@ -174,7 +174,8 @@ def make_table(setting: str = None, compose_metadata: dict = None):
metadata = extract_labels(setting)
df = pd.concat([df, make_table(setting, metadata)], axis=1)
else:
df = make_table()
metadata = docker.compose.config(return_json=True)
df = make_table("Name=Version", metadata)

os.chdir(root)
df.loc[:, ~df.columns.duplicated()].to_csv(f"docs/assets/{path}.csv", index=False)
2 changes: 1 addition & 1 deletion docs/scripts/readmes.py
@@ -20,8 +20,8 @@
"preset/README.md",
"python/README.md",
"pytorch/README.md",
"pytorch/serving/README.md",
"tensorflow/README.md",
"workflows/README.md",
]


8 changes: 1 addition & 7 deletions mkdocs.yml
@@ -13,7 +13,6 @@
# limitations under the License.

copyright: Copyright &copy; 2024 Intel Corporation
edit_uri: edit/main/docs/
extra:
generator: false
extra_javascript:
@@ -37,15 +36,11 @@ nav:
- Python Base: 'python/README.md'
- PyTorch Base: 'pytorch/README.md'
- TensorFlow Base: 'tensorflow/README.md'
- TorchServe: 'pytorch/serving/README.md'
- Workflows: 'workflows/README.md'
- Support Matrix: 'matrix.md'
- Roadmap: 'roadmap.md'
plugins:
- callouts
- git-authors
- git-revision-date-localized:
enable_creation_date: true
type: date
# - optimize
- search
- table-reader:
@@ -73,5 +68,4 @@ theme:
- navigation.prune
- toc.follow
- toc.integrate
- content.action.edit
name: material
13 changes: 13 additions & 0 deletions test-runner/README.md
@@ -277,3 +277,16 @@ See an [Example](../.github/workflows/test-runner-ci.yaml#L94) Implementation of

> [!TIP]
> When writing Tests for use with a CI platform like GitHub Actions, write your tests in such a way that they would be executed from the root directory of your repository.

## Testing

To test the [tests.yaml](tests.yaml) file, set the following variables:

```bash
export CACHE_REGISTRY=<harbor cache_registry_name>
export REGISTRY=<harbor registry>
export REPO=<harbor project/repo>
# optional
export PERF_REPO=<internal perf repo>
python test-runner/test_runner.py -f test-runner/tests.yaml -a test-runner/.actions.json
```
34 changes: 30 additions & 4 deletions workflows/README.md
@@ -1,10 +1,36 @@
# Workflows
# Intel® AI Workflows

This directory contains workflows demonstrating how the Intel Optimized base containers can be used for
different use cases:
Demonstrates how the [Intel® AI Containers] can be used for different use cases:

## PyTorch Workflows

| Base Container | Device Type | Example | Description |
|----------------|-------------|---------|-------------|
| `intel/intel-optimized-pytorch:2.3.0-pip-multinode` | CPU | [Distributed LLM Fine Tuning with Kubernetes](charts/huggingface-llm) | Demonstrates using Hugging Face Transformers with Intel® Xeon® Scalable Processors to fine tune LLMs with multiple nodes from a Kubernetes cluster. The example includes a LLM fine tuning script, Dockerfile, and Helm chart. |
| `intel/intel-optimized-pytorch:2.3.0-pip-multinode` | CPU | [Distributed LLM Fine Tuning with Kubernetes] | Demonstrates using Hugging Face Transformers with Intel® Xeon® Scalable Processors to fine tune LLMs with multiple nodes from a Kubernetes cluster. The example includes a LLM fine tuning script, Dockerfile, and Helm chart. |
| `intel/intel-optimized-pytorch:2.3.0-serving-cpu` | CPU | [TorchServe* with Kubernetes] | Demonstrates using TorchServe* with Intel® Xeon® Scalable Processors to serve models on multiple nodes from a Kubernetes cluster. The example includes a Helm chart. |

## Build from Source

To build the images from source, clone the [Intel® AI Containers] repository, follow the main `README.md` file to set up your environment, and run the following commands:

```bash
cd workflows/charts/huggingface-llm
docker compose build huggingface-llm
docker compose run huggingface-llm sh -c "python /workspace/scripts/finetune.py --help"
```

## License

View the [License](https://github.com/intel/ai-containers/blob/main/LICENSE) for the [Intel® AI Containers].

The images below also contain other software which may be under other licenses (such as PyTorch*, Jupyter*, Bash, etc. from the base image).

It is the image user's responsibility to ensure that any use of the images below complies with all relevant licenses for the software contained within.

\* Other names and brands may be claimed as the property of others.

<!--Below are links used in this document. They are not rendered: -->

[Intel® AI Containers]: https://github.com/intel/ai-containers
[Distributed LLM Fine Tuning with Kubernetes]: https://github.com/intel/ai-containers/tree/main/workflows/charts/huggingface-llm
[TorchServe* with Kubernetes]: https://github.com/intel/ai-containers/tree/main/workflows/charts/torchserve
7 changes: 6 additions & 1 deletion workflows/charts/huggingface-llm/docker-compose.yaml
@@ -23,8 +23,13 @@ services:
BASE_IMAGE_TAG: ${BASE_IMAGE_TAG:-2.3.0-pip-multinode}
context: .
labels:
dependency.apt.google-perftools: true
dependency.apt.libjemalloc: true
dependency.apt.libomp-dev: true
dependency.apt.numactl: true
dependency.python: ${PYTHON_VERSION:-3.10}
dependency.python.pip: requirements.txt
docs: pytorch
docs: genai
org.opencontainers.base.name: "intel/intel-optimized-pytorch:${IPEX_VERSION:-2.3.0}-pip-multinode"
org.opencontainers.image.name: "intel/ai-workflows"
org.opencontainers.image.title: "Intel® Extension for PyTorch with Hugging Face LLM fine tuning"
23 changes: 0 additions & 23 deletions workflows/charts/test/.helmignore

This file was deleted.

42 changes: 0 additions & 42 deletions workflows/charts/test/Chart.yaml

This file was deleted.

30 changes: 0 additions & 30 deletions workflows/charts/test/README.md

This file was deleted.

16 changes: 0 additions & 16 deletions workflows/charts/test/templates/NOTES.txt

This file was deleted.
