fea(examples): Updated the READMEs with requirements.txt installation #1000

Merged: 1 commit, May 24, 2024

6 changes: 6 additions & 0 deletions examples/audio-classification/README.md
@@ -20,6 +20,12 @@ The following examples showcase how to fine-tune `Wav2Vec2` for audio classifica

Speech recognition models that have been pretrained in an unsupervised fashion on audio data alone, *e.g.* [Wav2Vec2](https://huggingface.co/transformers/main/model_doc/wav2vec2.html), have been shown to require very little annotated data to yield good performance on speech classification datasets.
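
As a rough illustration, a single-HPU fine-tuning call could look like the sketch below. The core arguments follow the standard Transformers audio-classification example; the Habana-specific flags (`--use_habana`, `--use_lazy_mode`, `--gaudi_config_name`) and the chosen dataset are assumptions and may differ from the commands given further down in this README.

```bash
# Hypothetical sketch: fine-tune Wav2Vec2 for keyword spotting on SUPERB "ks".
# The Habana-specific flags below are assumed and may need adjusting.
python run_audio_classification.py \
    --model_name_or_path facebook/wav2vec2-base \
    --dataset_name superb \
    --dataset_config_name ks \
    --do_train \
    --do_eval \
    --output_dir /tmp/wav2vec2-ks \
    --use_habana \
    --use_lazy_mode \
    --gaudi_config_name Habana/wav2vec2
```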

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Single-HPU

7 changes: 7 additions & 0 deletions examples/contrastive-image-text/README.md
@@ -23,6 +23,13 @@ This folder contains two examples:

Such models can be used for natural language image search and potentially zero-shot image classification.

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Download COCO dataset (2017)
This example uses COCO dataset (2017) through a custom dataset script, which requires users to manually download the
COCO dataset before training.
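
For reference, a manual download along these lines should work; the archive URLs are the official COCO 2017 ones, while the `data/` destination directory is an assumption:

```bash
# Download the COCO 2017 images and annotations into a local data/ folder.
mkdir -p data && cd data
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip -q train2017.zip && unzip -q val2017.zip && unzip -q annotations_trainval2017.zip
cd ..
```
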
7 changes: 7 additions & 0 deletions examples/image-classification/README.md
@@ -19,6 +19,13 @@ limitations under the License.
This directory contains a script that showcases how to fine-tune any model supported by the [`AutoModelForImageClassification` API](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForImageClassification) (such as [ViT](https://huggingface.co/docs/transformers/main/en/model_doc/vit) or [Swin Transformer](https://huggingface.co/docs/transformers/main/en/model_doc/swin)) on HPUs. They can be used to fine-tune models on both [datasets from the hub](#using-datasets-from-hub) as well as on [your own custom data](#using-your-own-data).
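
For instance, a minimal fine-tuning call might look like the sketch below. The model, dataset, and Habana-specific flags are assumptions, meant only to show the overall shape of a run.

```bash
# Hypothetical sketch: fine-tune ViT on the "beans" dataset from the Hub.
# Habana-specific flags are assumed and may need adjusting.
python run_image_classification.py \
    --model_name_or_path google/vit-base-patch16-224-in21k \
    --dataset_name beans \
    --remove_unused_columns False \
    --do_train \
    --do_eval \
    --output_dir /tmp/vit-beans \
    --use_habana \
    --use_lazy_mode \
    --gaudi_config_name Habana/vit
```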


## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Single-HPU training

### Using datasets from Hub
7 changes: 7 additions & 0 deletions examples/language-modeling/README.md
@@ -22,6 +22,13 @@ GPT-2 is trained or fine-tuned using a causal language modeling (CLM) loss while
The following examples will run on datasets hosted on our [hub](https://huggingface.co/datasets) or with your own
text files for training and validation. We give examples of both below.

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## GPT2/GPT-J/GPT-NeoX and causal language modeling

The following examples fine-tune GPT-2, GPT-J-6B and GPT-NeoX-20B on WikiText-2. We're using the raw WikiText-2 (no tokens were replaced before the tokenization). The loss here is that of causal language modeling.
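
To give an idea of the shape of such a run, here is a minimal sketch for GPT-2; the dataset arguments follow the standard `run_clm.py` interface, and the Habana-specific flags are assumptions:

```bash
# Hypothetical sketch: causal language modeling with GPT-2 on raw WikiText-2.
# Habana-specific flags are assumed and may need adjusting.
python run_clm.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir /tmp/clm-gpt2 \
    --use_habana \
    --use_lazy_mode \
    --gaudi_config_name Habana/gpt2
```
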
7 changes: 7 additions & 0 deletions examples/protein-folding/README.md
@@ -34,6 +34,13 @@ The predicted protein structure will be stored in save-hpu.pdb file. We can use

# Mila-Intel protST example

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Single-HPU inference for zero shot evaluation
Here we show how to run zero-shot evaluation of the ProtST model on HPU:

7 changes: 7 additions & 0 deletions examples/question-answering/README.md
@@ -26,6 +26,13 @@ uses special features of those tokenizers. You can check if your favorite model

Note that if your dataset contains samples with no possible answers (like SQuAD version 2), you need to pass along the flag `--version_2_with_negative`.
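
For example, a SQuAD v2 run would add that flag roughly as in the sketch below; the core arguments are the standard `run_qa.py` ones, and the Habana-specific flags are assumptions:

```bash
# Hypothetical sketch: fine-tune BERT on SQuAD v2, which contains unanswerable questions.
# Habana-specific flags are assumed and may need adjusting.
python run_qa.py \
    --model_name_or_path bert-base-uncased \
    --dataset_name squad_v2 \
    --version_2_with_negative \
    --do_train \
    --do_eval \
    --output_dir /tmp/squad_v2 \
    --use_habana \
    --use_lazy_mode \
    --gaudi_config_name Habana/bert-base-uncased
```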

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Fine-tuning BERT on SQuAD1.1

For the following cases, an example of a Gaudi configuration file is given
7 changes: 7 additions & 0 deletions examples/speech-recognition/README.md
@@ -27,6 +27,13 @@ limitations under the License.
- [Inference](#single-hpu-seq2seq-inference)


## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Connectionist Temporal Classification

The script [`run_speech_recognition_ctc.py`](https://github.com/huggingface/optimum-habana/tree/main/examples/speech-recognition/run_speech_recognition_ctc.py) can be used to fine-tune any pretrained [Connectionist Temporal Classification Model](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForCTC) for automatic speech recognition on one of the [official speech recognition datasets](https://huggingface.co/datasets?task_ids=task_ids:automatic-speech-recognition) or a custom dataset.
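
As a rough sketch, such a run could look like the command below; the model, dataset, and Habana-specific flags are assumptions chosen only to illustrate the interface:

```bash
# Hypothetical sketch: CTC fine-tuning of Wav2Vec2 on a Common Voice subset.
# Dataset choice and Habana-specific flags are assumed and may need adjusting.
python run_speech_recognition_ctc.py \
    --model_name_or_path facebook/wav2vec2-large-xlsr-53 \
    --dataset_name mozilla-foundation/common_voice_11_0 \
    --dataset_config_name tr \
    --do_train \
    --do_eval \
    --output_dir /tmp/wav2vec2-ctc \
    --use_habana \
    --use_lazy_mode \
    --gaudi_config_name Habana/wav2vec2
```
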
7 changes: 7 additions & 0 deletions examples/summarization/README.md
@@ -23,6 +23,13 @@ This directory contains examples for finetuning and evaluating transformers on s
For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets#json-files.
You will also find examples of these below.
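
If it helps, a custom `jsonlines` file is simply one JSON object per line; the column names used here (`text`, `summary`) are an assumption and must match the `--text_column`/`--summary_column` arguments you pass to the script:

```bash
# Hypothetical custom dataset in jsonlines format: one record per line.
cat > train.json <<'EOF'
{"text": "The full article goes here ...", "summary": "A short summary."}
{"text": "Another article ...", "summary": "Another short summary."}
EOF
```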

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Single-card Training

Here is an example of a summarization task with T5:
6 changes: 6 additions & 0 deletions examples/text-classification/README.md
@@ -27,6 +27,12 @@ and can also be used for a dataset hosted on our [hub](https://huggingface.co/da

GLUE is made up of a total of 9 different tasks where the task name can be cola, sst2, mrpc, stsb, qqp, mnli, qnli, rte or wnli.

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Fine-tuning BERT on MRPC

7 changes: 7 additions & 0 deletions examples/text-to-speech/README.md
@@ -18,6 +18,13 @@ limitations under the License.

This directory contains a script that showcases how to use the Transformers pipeline API to run text-to-speech tasks on HPUs.

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Single-HPU inference

```bash
6 changes: 6 additions & 0 deletions examples/translation/README.md
@@ -21,6 +21,12 @@ limitations under the License.
For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets#json-files.
You will also find examples of these below.
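
For a custom translation dataset in `jsonlines` format, each line is a JSON object with a `translation` field mapping language codes to sentences; the language pair below is an assumption:

```bash
# Hypothetical custom dataset in jsonlines format for English-to-Romanian translation.
cat > train.json <<'EOF'
{"translation": {"en": "The weather is nice today.", "ro": "Vremea este frumoasă astăzi."}}
{"translation": {"en": "I like reading books.", "ro": "Îmi place să citesc cărți."}}
EOF
```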

## Requirements

First, you should install the requirements:
```bash
pip install -r requirements.txt
```

## Single-card Training

7 changes: 3 additions & 4 deletions examples/trl/README.md
@@ -1,10 +1,9 @@
# Examples


## Prerequisites

Install all the dependencies in the `requirements.txt`:
## Requirements

First, you should install the requirements:
```
$ pip install -U -r requirements.txt
```
@@ -266,4 +265,4 @@ results = pipeline(prompts)

for prompt, image in zip(prompts, results.images):
image.save(f"{prompt}.png")
```