diff --git a/docs/source/tasks/asr.mdx b/docs/source/tasks/asr.mdx
index 862c2cd44781..ce9db3c9dd08 100644
--- a/docs/source/tasks/asr.mdx
+++ b/docs/source/tasks/asr.mdx
@@ -171,7 +171,7 @@ Load Wav2Vec2 with [`AutoModelForCTC`]. For `ctc_loss_reduction`, it is often be
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
diff --git a/docs/source/tasks/audio_classification.mdx b/docs/source/tasks/audio_classification.mdx
index fbdc2b36932c..63c3c7bd6b66 100644
--- a/docs/source/tasks/audio_classification.mdx
+++ b/docs/source/tasks/audio_classification.mdx
@@ -106,7 +106,7 @@ Load Wav2Vec2 with [`AutoModelForAudioClassification`]. Specify the number of la
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
diff --git a/docs/source/tasks/image_classification.mdx b/docs/source/tasks/image_classification.mdx
index 5be72780896b..ae85493c0150 100644
--- a/docs/source/tasks/image_classification.mdx
+++ b/docs/source/tasks/image_classification.mdx
@@ -126,7 +126,7 @@ Load ViT with [`AutoModelForImageClassification`]. Specify the number of labels,
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
diff --git a/docs/source/tasks/language_modeling.mdx b/docs/source/tasks/language_modeling.mdx
index 9f6813ac051b..458b4cb3d36e 100644
--- a/docs/source/tasks/language_modeling.mdx
+++ b/docs/source/tasks/language_modeling.mdx
@@ -212,7 +212,7 @@ Load DistilGPT2 with [`AutoModelForCausalLM`]:
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -247,7 +247,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
@@ -317,7 +317,7 @@ Load DistilRoBERTa with [`AutoModelForMaskedlM`]:
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -353,7 +353,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
diff --git a/docs/source/tasks/multiple_choice.mdx b/docs/source/tasks/multiple_choice.mdx
index 3e7101ab2922..6b2d08be531b 100644
--- a/docs/source/tasks/multiple_choice.mdx
+++ b/docs/source/tasks/multiple_choice.mdx
@@ -188,7 +188,7 @@ Load BERT with [`AutoModelForMultipleChoice`]:
-If you aren't familiar with fine-tuning a model with Trainer, take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with Trainer, take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -227,7 +227,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
diff --git a/docs/source/tasks/question_answering.mdx b/docs/source/tasks/question_answering.mdx
index 4b9ce42efede..1c2160db0e40 100644
--- a/docs/source/tasks/question_answering.mdx
+++ b/docs/source/tasks/question_answering.mdx
@@ -163,7 +163,7 @@ Load DistilBERT with [`AutoModelForQuestionAnswering`]:
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -202,7 +202,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
diff --git a/docs/source/tasks/sequence_classification.mdx b/docs/source/tasks/sequence_classification.mdx
index 6062a233f2df..63db0d7f6107 100644
--- a/docs/source/tasks/sequence_classification.mdx
+++ b/docs/source/tasks/sequence_classification.mdx
@@ -103,7 +103,7 @@ Load DistilBERT with [`AutoModelForSequenceClassification`] along with the numbe
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -147,21 +147,21 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 Convert your datasets to the `tf.data.Dataset` format with [`to_tf_dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset). Specify inputs and labels in `columns`, whether to shuffle the dataset order, batch size, and the data collator:
 ```py
->>> tf_train_dataset = tokenized_imdb["train"].to_tf_dataset(
+>>> tf_train_set = tokenized_imdb["train"].to_tf_dataset(
 ...     columns=["attention_mask", "input_ids", "label"],
 ...     shuffle=True,
 ...     batch_size=16,
 ...     collate_fn=data_collator,
 ... )
->>> tf_validation_dataset = tokenized_imdb["train"].to_tf_dataset(
+>>> tf_validation_set = tokenized_imdb["test"].to_tf_dataset(
 ...     columns=["attention_mask", "input_ids", "label"],
 ...     shuffle=False,
 ...     batch_size=16,
diff --git a/docs/source/tasks/summarization.mdx b/docs/source/tasks/summarization.mdx
index 0c5bbbad3d95..a5e1bc4e0acc 100644
--- a/docs/source/tasks/summarization.mdx
+++ b/docs/source/tasks/summarization.mdx
@@ -122,7 +122,7 @@ Load T5 with [`AutoModelForSeq2SeqLM`]:
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -163,7 +163,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
diff --git a/docs/source/tasks/token_classification.mdx b/docs/source/tasks/token_classification.mdx
index 033c52853f5c..37b316e6529c 100644
--- a/docs/source/tasks/token_classification.mdx
+++ b/docs/source/tasks/token_classification.mdx
@@ -163,7 +163,7 @@ Load DistilBERT with [`AutoModelForTokenClassification`] along with the number o
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -202,7 +202,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
diff --git a/docs/source/tasks/translation.mdx b/docs/source/tasks/translation.mdx
index ee3af67dda4d..d4a2eae424c1 100644
--- a/docs/source/tasks/translation.mdx
+++ b/docs/source/tasks/translation.mdx
@@ -124,7 +124,7 @@ Load T5 with [`AutoModelForSeq2SeqLM`]:
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
@@ -165,7 +165,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
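The link fixes above all follow one rule: a task page lives one directory below `training.mdx`, so a relative link from it must step up with `../` before the page name. A minimal sketch of why, using standard relative-URL resolution (the `page` URL below is a hypothetical rendered-page address chosen only to illustrate; the exact published URLs are an assumption, not taken from this patch):

```python
from urllib.parse import urljoin

# Hypothetical rendered URL of a task page, one level below the training page.
page = "https://huggingface.co/docs/transformers/tasks/asr"

# Without "../", the link resolves inside tasks/, where no training page exists.
broken = urljoin(page, "training#finetune-with-trainer")

# With "../", resolution steps out of tasks/ to the sibling training page.
fixed = urljoin(page, "../training#finetune-with-trainer")

print(broken)  # ends in /tasks/training#finetune-with-trainer
print(fixed)   # ends in /training#finetune-with-trainer
```

This is generic RFC 3986 relative-reference resolution, so the same reasoning applies to every hunk in the patch regardless of which task page it touches.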