Commit

added trademarks to MLCommons, MLPerf and MLCube
gfursin committed May 14, 2021
1 parent 6e7b03c commit 267f136
Showing 47 changed files with 100 additions and 100 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -314,7 +314,7 @@ W:
N: Gavin Simpson
E:
O:
-C: MLPerf workflows
+C: MLPerf™ workflows
W:
```
12 changes: 6 additions & 6 deletions README.md
@@ -18,7 +18,7 @@ Windows: [![Windows Build status](https://ci.appveyor.com/api/projects/status/iw
## News

* [Project website](https://cKnowledge.org)
-* [CK-powered MLPerf benchmark automation](https://github.com/ctuning/ck/blob/master/docs/mlperf-automation/README.md)
+* [CK-powered MLPerf™ benchmark automation](https://github.com/ctuning/ck/blob/master/docs/mlperf-automation/README.md)

## Overview

@@ -32,11 +32,11 @@ Our goal is to help researchers and practitioners share, reuse and extend their
in the form of portable workflows, automation actions and reusable artifacts with a common API, CLI,
and meta description. See how CK helps to automate benchmarking, optimization and design space
exploration of [AI/ML/software/hardware stacks](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all/),
-simplifies [MLPerf](https://mlperf.org) submissions and supports collaborative, reproducible and reusable ML Systems research:
+simplifies [MLPerf™](https://mlperf.org) submissions and supports collaborative, reproducible and reusable ML Systems research:

* [ACM TechTalk](https://www.youtube.com/watch?v=7zpeIVwICa4)
-* [AI/ML/MLPerf automation workflows and components from the community](https://github.com/ctuning/ck-ml);
-* [Real-world use cases](https://cKnowledge.org/partners.html) from MLPerf, Arm, General Motors, IBM, the Raspberry Pi foundation, ACM and other great partners;
+* [AI/ML/MLPerf™ automation workflows and components from the community](https://github.com/ctuning/ck-ml);
+* [Real-world use cases](https://cKnowledge.org/partners.html) from MLPerf™, Arm, General Motors, IBM, the Raspberry Pi foundation, ACM and other great partners;
* [Reddit discussion about reproducing 150 papers](https://www.reddit.com/r/MachineLearning/comments/ioq8do/n_reproducing_150_research_papers_the_problems);
* Our reproducibility initiatives: [methodology](https://cTuning.org/ae), [checklist](https://ctuning.org/ae/submission_extra.html), [events](https://cKnowledge.io/events).

@@ -107,7 +107,7 @@ You can now view this image with detected corners.

Check [CK docs](https://ck.readthedocs.io/en/latest/src/introduction.html) for further details.

-### MLPerf benchmark workflows
+### MLPerf™ benchmark workflows

* [Image classification](https://github.com/ctuning/ck/blob/master/docs/mlperf-automation/tasks/task-image-classification.md)
* [Object detection](https://github.com/ctuning/ck/blob/master/docs/mlperf-automation/tasks/task-object-detection.md)
@@ -129,7 +129,7 @@ ck run docker:ck-template-mlperf-x8664-ubuntu-20.04
### Portable workflow example with virtual CK environments

You can create multiple [virtual CK environments](https://github.com/octoml/venv) with templates
-to automatically install different CK packages and workflows, for example for MLPerf inference:
+to automatically install different CK packages and workflows, for example for MLPerf™ inference:

```
ck pull repo:octoml@venv
2 changes: 1 addition & 1 deletion ck/repo/module/mlperf.inference/.cm/meta.json
@@ -291,7 +291,7 @@
}
]
},
"desc": "Raw data (JSON) access for MLPerf Inference visualization widget",
"desc": "Raw data (JSON) access for MLPerf(tm) Inference visualization widget",
"prefilter_config": {
"image_classification_multistream": {
"score_columns": [
2 changes: 1 addition & 1 deletion ck/repo/module/mlperf.mobilenets/.cm/meta.json
@@ -333,7 +333,7 @@
}
]
},
"desc": "Raw data (JSON) access for MLPerf MobilNets visualization widget",
"desc": "Raw data (JSON) access for MLPerf(tm) MobilNets visualization widget",
"platform_config": {
"BLA-L09": {
"gpu": "Mali-G72 MP12",
4 changes: 2 additions & 2 deletions ck/repo/module/mlperf/.cm/meta.json
@@ -4,14 +4,14 @@
"desc": "Compare two experiments"
},
"list_experiments": {
"desc": "List all MLPerf-related experiments (search by tags)"
"desc": "List all MLPerf(tm)-related experiments (search by tags)"
},
"pick_an_experiment": {
"desc": "Select an experiment from the list (an interactive helper function)"
}
},
"copyright": "See CK COPYRIGHT.txt for copyright details",
"desc": "Validate MLPerf submission against reference",
"desc": "Validate MLPerf(tm) submission against reference",
"developer": "Leo Gordon; Anton Lokhmotov",
"developer_email": "leo@dividiti.com; anton@dividiti.com",
"developer_webpage": "http://dividiti.com",
2 changes: 1 addition & 1 deletion ck/repo/module/mlperf/module.py
@@ -1,5 +1,5 @@
#
-# Collective Knowledge - common functionality for MLPerf benchmarking.
+# Collective Knowledge - common functionality for MLPerf(tm) benchmarking.
#
# See CK LICENSE.txt for licensing details.
# See CK COPYRIGHT.txt for copyright details.
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -21,7 +21,7 @@ in the form of reusable artifacts and portable workflows with a common API, CLI,
and JSON meta description.

See how CK helps to support collaborative and reproducible AI, ML, and systems R&D
-in some real-world use cases from Arm, General Motors, IBM, MLPerf, the Raspberry Pi foundation,
+in some real-world use cases from Arm, General Motors, IBM, MLPerf(tm), the Raspberry Pi foundation,
and ACM: https://cKnowledge.org/partners.

.. toctree::
14 changes: 7 additions & 7 deletions docs/mlperf-automation/README.md
@@ -1,29 +1,29 @@
# MLPerf inference benchmark automation guide

-This document is prepared by the [MLCommons community](https://mlcommons.org)
-to make it easier to reproduce MLPerf benchmark results and automate new submissions
+This document is prepared by the [MLCommons™ community](https://mlcommons.org)
+to make it easier to reproduce MLPerf™ benchmark results and automate new submissions
using the open-source [CK workflow framework](https://github.com/ctuning/ck).

* [Prepare your platform](platform/README.md)
* [Install CK framework](tools/ck.md)
* [Install CK virtual environment (optional)](tools/ck-venv.md)
* [Use adaptive CK container](tools/ck-docker.md)
-* [**Prepare and run native MLPerf**](tasks/README.md)
+* [**Prepare and run native MLPerf™**](tasks/README.md)
* [Analyze MLPerf inference results](results/README.md)
* [Example of CK dashboards for ML Systems DSE](results/ck-dashboard.md)
-* [Reproduce MLPerf results and DSE](reproduce/README.md)
+* [Reproduce MLPerf™ results and DSE](reproduce/README.md)
* [Test models with a webcam](reproduce/demo-webcam-object-detection-x86-64.md)
* [Explore ML Systems designs](dse/README.md)
-* [Submit to MLPerf](submit/README.md)
+* [Submit to MLPerf™](submit/README.md)
* [Related tools](tools/README.md)
* Further improvements:
-* [Standardization of MLPerf workflows](tbd/standardization.md)
+* [Standardization of MLPerf™ workflows](tbd/standardization.md)
* [More automation](tbd/automation.md)
* [CK2 ideas](tbd/ck2.md)

*Please feel free to contribute to this collaborative doc by sending your PR [here]( https://github.com/ctuning/ck/pulls )! Thank you!*

# Feedback

-Contact [Grigori Fursin](https://cKnowledge.io/@gfursin) ([OctoML.ai](https://octoml.ai), [MLCommons member](https://mlcommons.org))
+Contact [Grigori Fursin](https://cKnowledge.io/@gfursin) ([OctoML.ai](https://octoml.ai), [MLCommons™ member](https://mlcommons.org))

2 changes: 1 addition & 1 deletion docs/mlperf-automation/datasets/imagenet2012.md
@@ -42,7 +42,7 @@ ck detect soft:compiler.gcc --full_path=`which gcc`

### Install common CK packages

-You will need cmake to build MLPerf loadgen. First, attempt to detect if you already have it installed:
+You will need cmake to build MLPerf™ loadgen. First, attempt to detect if you already have it installed:
```
ck detect soft --tags=tool,cmake
```
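
If cmake is not detected, it can usually be installed as a CK package before retrying the detection step. This is a hedged sketch: the exact package tags below are an assumption and may differ between CK repositories.
```
# Install cmake via a CK package (tags are an assumption)
ck install package --tags=tool,cmake

# Re-run detection so the freshly installed tool is registered with CK
ck detect soft --tags=tool,cmake
```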
2 changes: 1 addition & 1 deletion docs/mlperf-automation/inference/notes.md
@@ -1,4 +1,4 @@
-# Misc MLPerf inference notes
+# Misc MLPerf™ inference notes

## [20210430]

@@ -6,7 +6,7 @@ I am trying to reproduce the resnet50 benchmark result shows on [mlperf.org](htt
psyhtest commented:


-To obtain the TensorRT plans, we followed NVIDIA's [instructions](https://github.com/mlcommons/inference_results_v0.5/tree/master/closed/NVIDIA) from their MLPerf Inference v0.5 submission. This happened between the v0.5 and v0.7 submission rounds, i.e. between October 2019 - September 2020. we submitted the results to v0.7 (without DLA support).
+To obtain the TensorRT plans, we followed NVIDIA's [instructions](https://github.com/mlcommons/inference_results_v0.5/tree/master/closed/NVIDIA) from their MLPerf™ Inference v0.5 submission. This happened between the v0.5 and v0.7 submission rounds, i.e. between October 2019 - September 2020. we submitted the results to v0.7 (without DLA support).

After v0.7, we [reproduced](https://github.com/ctuning/ck-ml/blob/master/jnotebook/mlperf-inference-v0.7-reproduce-xavier/) some of the results with JetPack 4.5, while [resolving](https://github.com/mlcommons/inference_results_v0.7/issues/15) a few issues along the way. We did not create CK packages for the new plans though.

2 changes: 1 addition & 1 deletion docs/mlperf-automation/platform/README.md
@@ -1,4 +1,4 @@
-Prepare your platform for MLPerf automation using the following guidelines:
+Prepare your platform for MLPerf™ automation using the following guidelines:

* [x8664 (Ubuntu)](x8664-ubuntu.md)
* [Raspberry Pi4 (Ubuntu)](rpi4-ubuntu.md)
2 changes: 1 addition & 1 deletion docs/mlperf-automation/platform/nvidia-generic.md
@@ -10,7 +10,7 @@
and build several python versions - it worked fine.

* [20210421] Grigori managed to build the latest loadgen
-from [MLCommons inference](https://github.com/mlcommons/inference/tree/master/loadgen)
+from [MLCommons™ inference](https://github.com/mlcommons/inference/tree/master/loadgen)
(both python version and static library).


2 changes: 1 addition & 1 deletion docs/mlperf-automation/platform/nvidia-jetson-nano.md
@@ -15,7 +15,7 @@
and build several python versions - it worked fine.

* [20210420] Grigori managed to build the latest loadgen
-from [MLCommons inference](https://github.com/mlcommons/inference/tree/master/loadgen)
+from [MLCommons™ inference](https://github.com/mlcommons/inference/tree/master/loadgen)
(both python version and static library).


4 changes: 2 additions & 2 deletions docs/mlperf-automation/platform/rpi4-coral-ubuntu.md
@@ -23,7 +23,7 @@ sudo apt-get install libedgetpu1-std

3. Disconnect and reconnect the Coral Edge TPU.

-4. To get maximum performance for MLPerf, install a package with maximum operational frequency:
+4. To get maximum performance for MLPerf™, install a package with maximum operational frequency:
```
sudo apt-get install libedgetpu1-max
@@ -52,7 +52,7 @@ ck detect soft:compiler.python --full_path=$CK_VENV_PYTHON_BIN
ck run program:test-coral-edge-tpu-installation
```

-* [20210428] Grigori tested MLPerf inference with libedgetpu1-max v15.0:
+* [20210428] Grigori tested MLPerf™ inference with libedgetpu1-max v15.0:


## Misc
2 changes: 1 addition & 1 deletion docs/mlperf-automation/platform/rpi4-debian.md
@@ -9,7 +9,7 @@
and build several python versions - it worked fine.

* [20210423] Grigori attempted to build loadgen
-from [MLCommons inference](https://github.com/mlcommons/inference/tree/master/loadgen)
+from [MLCommons™ inference](https://github.com/mlcommons/inference/tree/master/loadgen)
but it fails during C++ compilation.

The earlier revision of loadgen (r0.5?) works - this means that it's possible
2 changes: 1 addition & 1 deletion docs/mlperf-automation/platform/rpi4-ubuntu.md
@@ -11,7 +11,7 @@
and build several python versions - it worked fine.

* [20210423] Grigori managed to build the latest loadgen
-from [MLCommons inference](https://github.com/mlcommons/inference/tree/master/loadgen)
+from [MLCommons™ inference](https://github.com/mlcommons/inference/tree/master/loadgen)
(both python version and static library).


2 changes: 1 addition & 1 deletion docs/mlperf-automation/platform/x8664-ubuntu.md
@@ -10,7 +10,7 @@
and build several python versions - it worked fine.

* [20210421] Grigori managed to build the latest loadgen
-from [MLCommons inference](https://github.com/mlcommons/inference/tree/master/loadgen)
+from [MLCommons™ inference](https://github.com/mlcommons/inference/tree/master/loadgen)
(both python version and static library).


10 changes: 5 additions & 5 deletions docs/mlperf-automation/reproduce/README.md
@@ -1,18 +1,18 @@
-# Reproduce MLPerf benchmark
+# Reproduce MLPerf™ benchmark

-## Using ad-hoc MLCommons scripts
+## Using ad-hoc MLCommons™ scripts

* [Dell EMC Systems inference v0.7](https://infohub.delltechnologies.com/p/running-the-mlperf-inference-v0-7-benchmark-on-dell-emc-systems)
* [NVidia Jetson Xavier](reproduce/image-classification-nvidia-jetson-xavier-mlperf.md)

## Using CK workflows

-* [Official MLCommons notes for image classification (a bit outdated - more automation exists)](https://github.com/mlcommons/inference/tree/master/vision/classification_and_detection/optional_harness_ck/classification)
-* [Official MLCommons notes for object detection (a bit outdated - more automations exists)](https://github.com/mlcommons/inference/tree/master/vision/classification_and_detection/optional_harness_ck/detection)
+* [Official MLCommons™ notes for image classification (a bit outdated - more automation exists)](https://github.com/mlcommons/inference/tree/master/vision/classification_and_detection/optional_harness_ck/classification)
+* [Official MLCommons™ notes for object detection (a bit outdated - more automations exists)](https://github.com/mlcommons/inference/tree/master/vision/classification_and_detection/optional_harness_ck/detection)

## Using CK adaptive containers (to be tested!)

-* [MLPerf workflows](https://cknowledge.io/?q=module_uoa%3A%22docker%22+AND+%22mlperf%22)
+* [MLPerf™ workflows](https://cknowledge.io/?q=module_uoa%3A%22docker%22+AND+%22mlperf%22)

* [CK image classification](https://cknowledge.io/?q=module_uoa%3A%22docker%22+AND+%22image-classification%22)
* [CK object detection](https://cknowledge.io/?q=module_uoa%3A%22docker%22+AND+%22object-detection%22)
@@ -7,7 +7,7 @@

Install system packages for [Nvidia jetson Nano](../platform/nvidia-jetson-nano.md).

-# MLPerf Inference v1.0 - Image Classification - TFLite 2.4.1
+# MLPerf™ Inference v1.0 - Image Classification - TFLite 2.4.1

We currently run TFLite only on Arm-based CPU.
Please use the same instructions as for [RPi4](ck-image-classification-rpi4-tflite.md).
@@ -3,7 +3,7 @@
* Platform: Raspberry Pi 4
* OS: Ubuntu 20.04 64-bit

-# MLPerf Inference v1.0 - Image Classification - TFLite 2.4.1
+# MLPerf™ Inference v1.0 - Image Classification - TFLite 2.4.1

## System packages

@@ -43,7 +43,7 @@ ck detect soft:compiler.gcc --full_path=`which gcc`
## Install common CK packages
-You will need cmake to build MLPerf loadgen. First, attempt to detect if you already have it installed:
+You will need cmake to build MLPerf™ loadgen. First, attempt to detect if you already have it installed:
```
ck detect soft --tags=tool,cmake
```
@@ -178,7 +178,7 @@ time ck benchmark program:image-classification-tflite \
```
-## Run MLPerf benchmark
+## Run MLPerf™ benchmark
### Accuracy: Single Stream (50 samples)
@@ -1,4 +1,4 @@
-# Adaptive CK container for MLPerf Inference v1.0 - Image Classification - TFLite 2.4.1 with RUY
+# Adaptive CK container for MLPerf™ Inference v1.0 - Image Classification - TFLite 2.4.1 with RUY

## Install Collective Knowledge (CK) with CK MLOps repo

@@ -7,7 +7,7 @@ python3 -m pip install ck
ck pull repo:octoml@mlops
```
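
As a quick sanity check (a hedged sketch, assuming a standard CK setup), you can confirm that CK is installed and that the pulled repository is registered:
```
# Confirm that the ck Python package is installed
python3 -m pip show ck

# List registered CK repositories; octoml@mlops should appear in this list
ck ls repo
```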

-## Build Docker container with CK components and a small ImageNet (500 images) to test MLPerf workflows
+## Build Docker container with CK components and a small ImageNet (500 images) to test MLPerf™ workflows

```
ck build docker:ck-mlperf-inference-v1.0-image-classification-small-imagenet-fcbc9a7708491791-x8664-ubuntu-20.04
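# Hypothetical follow-up (not part of the original instructions): once the build
# finishes, the same image can presumably be started via the "ck run docker:..."
# pattern used elsewhere in these docs:
ck run docker:ck-mlperf-inference-v1.0-image-classification-small-imagenet-fcbc9a7708491791-x8664-ubuntu-20.04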
@@ -55,7 +55,7 @@ ck add repo:ck-experiment --quiet
```

You can then run the script "ck-object-detection-x86-64-docker-start.sh" from this directory
-to pass the path to this repo to the container, run it, benchmark MLPerf model, and record
+to pass the path to this repo to the container, run it, benchmark MLPerf™ model, and record
experiments to the above ck-experiment repository:

```
@@ -3,7 +3,7 @@
* Platform: x86 64
* OS: Ubuntu 18.04 64-bit

-# MLPerf Inference v1.0 - Image Classification - TFLite 2.4.1 (x86)
+# MLPerf™ Inference v1.0 - Image Classification - TFLite 2.4.1 (x86)

## System packages

@@ -43,7 +43,7 @@ ck detect soft:compiler.gcc --full_path=`which gcc`

## Install common CK packages

-You will need cmake to build MLPerf loadgen. First, attempt to detect if you already have it installed:
+You will need cmake to build MLPerf™ loadgen. First, attempt to detect if you already have it installed:
```
ck detect soft --tags=tool,cmake
```
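
If detection succeeds, cmake is registered as a CK environment entry; a hedged way to double-check the registration (assuming the same tool,cmake tags) is:
```
# List CK environment entries for cmake
ck show env --tags=tool,cmake
```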
@@ -176,7 +176,7 @@ time ck benchmark program:image-classification-tflite \
```

-## Run MLPerf benchmark
+## Run MLPerf™ benchmark

### Accuracy: Single Stream (50 samples)

@@ -278,7 +278,7 @@ time ck benchmark program:image-classification-tflite-loadgen \

See [CK-ML repo docs](https://github.com/ctuning/ck-ml/blob/main/program/image-classification-tflite-loadgen/README.md)

-You can list available CK MLPerf model packages as follows:
+You can list available CK MLPerf™ model packages as follows:
```
ck ls ck-ml:package:model-*mlperf* | sort
```
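
Any package from that list can then be installed by name; the placeholder below is illustrative rather than a specific package name from this repository:
```
# Replace the placeholder with one of the package names printed above
ck install package:<model-package-name-from-the-list>
```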
@@ -1,6 +1,6 @@
***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210501***

-# MLPerf Inference v1.0 - Object Detection - TFLite (with Coral EdgeTPU support)
+# MLPerf™ Inference v1.0 - Object Detection - TFLite (with Coral EdgeTPU support)

* Platform: RPi4 with Coral EdgeTPU
* OS: Ubuntu 20.04 64-bit
@@ -1,6 +1,6 @@
***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210428***

-# MLPerf Inference v1.0 - Object Detection - TFLite
+# MLPerf™ Inference v1.0 - Object Detection - TFLite

* Platform: RPi4
* OS: Ubuntu 20.04 64-bit