@@ -154,7 +154,7 @@ Versioning:
#### Entity space

Required:
-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- model_max_length: Maximum sequence length. Sequences will be right padded (and possibly truncated)
- number_gpus: The effective number of GPUs (to be evenly distributed to `number_nodes` machines)
- batch_size: the effective batch_size (will be evenly distributed to max(1, number_gpus) devices)
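
To make the "effective" wording concrete, here is a minimal sketch (a hypothetical helper, not the actuator's actual code) of how these values divide across nodes and devices:

```python
# Hypothetical helper: how the effective GPU count and batch size divide up,
# per the parameter descriptions above.
def distribute(batch_size: int, number_gpus: int, number_nodes: int) -> dict:
    return {
        "gpus_per_node": number_gpus // max(1, number_nodes),
        "per_device_batch_size": batch_size // max(1, number_gpus),
    }

# e.g. batch_size=128 on 8 GPUs over 2 nodes -> 4 GPUs per node, batch of 16 per device
print(distribute(batch_size=128, number_gpus=8, number_nodes=2))
```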
@@ -376,7 +376,7 @@ Versioning:

#### Entity space

-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- dataset_id: One of
- `news-chars-512-entries-4096`: 4096 entries with samples of 512 + 127 (prompt) + 512 characters
- `news-chars-1024-entries-4096`: 4096 entries with samples of 1024 + 127 (prompt) + 1024 characters
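
The `dataset_id` values follow a fixed naming convention; here is a small sketch (a hypothetical helper) that decodes it:

```python
import re

# Hypothetical decoder for the "news-chars-<N>-entries-<M>" convention above:
# M entries, each of N + 127 (prompt) + N characters.
def describe_dataset(dataset_id: str) -> str:
    match = re.fullmatch(r"news-chars-(\d+)-entries-(\d+)", dataset_id)
    if match is None:
        raise ValueError(f"unrecognised dataset_id: {dataset_id}")
    chars, entries = match.groups()
    return f"{entries} entries of {chars} + 127 (prompt) + {chars} characters"

print(describe_dataset("news-chars-512-entries-4096"))
# -> 4096 entries of 512 + 127 (prompt) + 512 characters
```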
@@ -567,7 +567,7 @@ Versioning:
#### Entity space

Required:
-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- model_max_length: Maximum sequence length. Sequences will be right padded (and possibly truncated)
- number_gpus: The effective number of GPUs (to be evenly distributed to `number_nodes` machines)
- batch_size: the effective batch_size (will be evenly distributed to max(1, number_gpus) devices)
@@ -643,6 +643,7 @@ Sets the `--target_modules` layer names based on the `model_name`:
- `granite-3-8b`: `["q_proj", "v_proj"]`
- `granite-3.1-2b`: `["q_proj", "v_proj"]`
- `granite-3.1-8b-instruct`: `["q_proj", "v_proj"]`
+- `llava-v1.6-mistral-7b`: `["q_proj", "v_proj"]`

> **NOTE**: Because running `accelerate` with a single gpu is unsupported, when setting `number_gpus` to 1 this experiment actually runs the `tuning.sft_trainer` script directly (i.e. a DataParallel (DP) run).
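
As a minimal sketch of that fallback (illustrative command construction only; the actuator's real invocation and flag spelling may differ):

```python
# Illustrative only: with one GPU the trainer module is launched directly,
# otherwise via `accelerate launch` (the accelerate flags here are an assumption).
def build_launch_command(number_gpus: int) -> list[str]:
    if number_gpus <= 1:
        return ["python", "-m", "tuning.sft_trainer"]
    return ["accelerate", "launch", "--num_processes", str(number_gpus), "--module", "tuning.sft_trainer"]

print(build_launch_command(1))  # ['python', '-m', 'tuning.sft_trainer']
print(build_launch_command(8))  # ['accelerate', 'launch', '--num_processes', '8', '--module', 'tuning.sft_trainer']
```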

@@ -1103,7 +1104,7 @@ Versioning:
#### Entity space

Required:
-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- model_max_length: Maximum sequence length. Sequences will be right padded (and possibly truncated)
- number_gpus: The effective number of GPUs (to be evenly distributed to `number_nodes` machines)
- batch_size: the effective batch_size (will be evenly distributed to max(1, number_gpus) devices)
@@ -67,3 +67,5 @@ mixtral-8x7b-instruct-v0.1:
  Vanilla: /hf-models-pvc/Mixtral-8x7B-Instruct-v0.1/
smollm2-135m:
  Vanilla: HuggingFaceTB/SmolLM2-135M
+llava-v1.6-mistral-7b:
+  Vanilla: llava-hf/llava-v1.6-mistral-7b-hf
@@ -39,6 +39,7 @@
    "granite-3.1-8b-instruct": ["q_proj", "v_proj"],
    "granite-3.1-3b-a800m-instruct": ["q_proj", "v_proj"],
    "granite-vision-3.2-2b": ["q_proj", "v_proj"],
+    "llava-v1.6-mistral-7b": ["q_proj", "v_proj"],
}
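
For context, a minimal sketch (hypothetical helper and variable names, not the plugin's actual code) of how such an entry could be turned into the `--target_modules` arguments mentioned in the docs above:

```python
# Hypothetical lookup mirroring the mapping above; the real plugin code may differ.
TARGET_MODULES = {
    "granite-vision-3.2-2b": ["q_proj", "v_proj"],
    "llava-v1.6-mistral-7b": ["q_proj", "v_proj"],
}

def target_modules_args(model_name: str) -> list[str]:
    # Build the CLI fragment passed to the trainer, e.g. --target_modules q_proj v_proj
    return ["--target_modules", *TARGET_MODULES[model_name]]

print(target_modules_args("llava-v1.6-mistral-7b"))
# -> ['--target_modules', 'q_proj', 'v_proj']
```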


@@ -82,8 +82,8 @@ def get_linear_layers(path_model: str):
# (the version of the fms-hf-tuning Orchestrator Plugin on the ray cluster won't have the changes you just made)

ModelMap: typing.Dict[str, typing.Dict[str, str]] = {
-    "granite-3.1-8b-instruct": {
-        "Vanilla": "ibm-granite/granite-3.1-8b-instruct",
+    "llava-v1.6-mistral-7b": {
+        "Vanilla": "llava-hf/llava-v1.6-mistral-7b-hf",
    }
}
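
The map resolves a short `model_name` to a checkpoint location; a self-contained sketch of the lookup follows (reproducing just the new entry, with an illustrative helper):

```python
# Reproduces only the new entry; "Vanilla" is the variant key used throughout the map.
ModelMap = {"llava-v1.6-mistral-7b": {"Vanilla": "llava-hf/llava-v1.6-mistral-7b-hf"}}

def resolve_model_path(model_name: str, variant: str = "Vanilla") -> str:
    # Hypothetical helper: look up the checkpoint path or HF repo id for a model.
    return ModelMap[model_name][variant]

print(resolve_model_path("llava-v1.6-mistral-7b"))  # llava-hf/llava-v1.6-mistral-7b-hf
```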

website/docs/actuators/sft-trainer.md (9 changes: 5 additions & 4 deletions)
@@ -156,7 +156,7 @@ Performs full fine-tuning of all model parameters. This experiment is ideal for

Required:

-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- model_max_length: Maximum sequence length. Sequences will be right padded (and possibly truncated)
- number_gpus: The effective number of GPUs (to be evenly distributed to `number_nodes` machines)
- batch_size: the effective batch_size (will be evenly distributed to max(1, number_gpus) devices)
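
Putting the required fields together, a hypothetical entity for this experiment might look like the following (values are illustrative placeholders, not recommendations):

```python
# Illustrative entity using the newly added model; field names follow the
# entity-space description above, values are placeholders.
entity = {
    "model_name": "llava-v1.6-mistral-7b",
    "model_max_length": 2048,  # sequences right-padded and possibly truncated
    "number_gpus": 8,          # distributed evenly across number_nodes machines
    "batch_size": 128,         # distributed evenly across max(1, number_gpus) devices
}
```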
@@ -319,7 +319,7 @@ Runs full fine-tuning five times and reports the proportion of tasks that fail d

Required:

-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- model_max_length: Maximum sequence length. Sequences will be right padded (and possibly truncated)
- number_gpus: The effective number of GPUs (to be evenly distributed to `number_nodes` machines)
- batch_size: the effective batch_size (will be evenly distributed to max(1, number_gpus) devices)
@@ -508,7 +508,7 @@ Executes LoRA-based fine-tuning, a parameter-efficient method that adapts only a

Required:

-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- model_max_length: Maximum sequence length. Sequences will be right padded (and possibly truncated)
- number_gpus: The effective number of GPUs (to be evenly distributed to `number_nodes` machines)
- batch_size: the effective batch_size (will be evenly distributed to max(1, number_gpus) devices)
@@ -585,6 +585,7 @@ Executes LoRA-based fine-tuning, a parameter-efficient method that adapts only a
- `granite-3-8b`: `["q_proj", "v_proj"]`
- `granite-3.1-2b`: `["q_proj", "v_proj"]`
- `granite-3.1-8b-instruct`: `["q_proj", "v_proj"]`
+- `llava-v1.6-mistral-7b`: `["q_proj", "v_proj"]`

!!! info end

@@ -722,7 +723,7 @@ Executes LoRA-based fine-tuning, a parameter-efficient method that adapts only a

Required:

-- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m"]`
+- model_name: Supported models: `["granite-3b-1.5", "hf-tiny-model-private/tiny-random-BloomForCausalLM", "llama-7b", "granite-13b-v2", "llama-13b", "granite-20b-v2", "granite-7b-base", "granite-8b-japanese", "granite-8b-code-base", "granite-34b-code-base", "mistral-7b-v0.1", "llama3-8b", "llama3-70b", "mixtral-8x7b-instruct-v0.1", "llama2-70b", "llama3.1-8b", "llama3.1-70b", "llama3.1-405b", "granite-3b-code-base-128k", "granite-8b-code-base-128k", "allam-1-13b", "granite-3-8b", "granite-3.1-2b", "granite-3.1-8b-instruct", "mistral-123b-v2", "granite-3.1-3b-a800m-instruct", "granite-vision-3.2-2b", "smollm2-135m", "llava-v1.6-mistral-7b"]`
- model_max_length: Maximum sequence length. Sequences will be right padded (and possibly truncated)
- number_gpus: The effective number of GPUs (to be evenly distributed to `number_nodes` machines)
- batch_size: the effective batch_size (will be evenly distributed to max(1, number_gpus) devices)