docs: made suggested changes
lmmilliken committed Dec 2, 2022
1 parent e599545 commit f4ef9d7
Showing 8 changed files with 32 additions and 32 deletions.
6 changes: 3 additions & 3 deletions docs/notebooks/image_to_image.ipynb
@@ -315,13 +315,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before and After\n",
"We can directly compare the results of our fine-tuned model with a its zero-shot counterpart to getter a better idea of how finetuning affects the results of a search. While the differences between the two models may be subtle for some queries, the examples below show that the model after fine-tuning is able to better match images look similar (like the first query), as well as match images that represent the same (or similar) things, despite not looking similar (like the second query):\n",
"## Before and after\n",
"We can directly compare the results of our fine-tuned model with its zero-shot counterpart to get a better idea of how finetuning affects the results of a search. While the differences between the two models may be subtle for some queries, some of the examples the examples below (such as the the second example) show that the model after fine-tuning is able to better match similar images.\n",
"\n",
"```python\n",
"import copy\n",
"from PIL import Image\n",
"from io import BytesIO\n",
"from PIL import Image\n",
"\n",
"query_pt = copy.deepcopy(query_data)\n",
"index_pt = copy.deepcopy(index_data)\n",
6 changes: 3 additions & 3 deletions docs/notebooks/image_to_image.md
@@ -209,13 +209,13 @@ query.match(index_data, limit=10, metric='cosine')
```

<!-- #region -->
## Before and After
We can directly compare the results of our fine-tuned model with a its zero-shot counterpart to getter a better idea of how finetuning affects the results of a search. While the differences between the two models may be subtle for some queries, the examples below show that the model after fine-tuning is able to better match images look similar (like the first query), as well as match images that represent the same (or similar) things, despite not looking similar (like the second query):
## Before and after
We can directly compare the results of our fine-tuned model with its zero-shot counterpart to get a better idea of how finetuning affects the results of a search. While the differences between the two models may be subtle for some queries, some of the examples below (such as the second one) show that the model after fine-tuning is better able to match similar images.

```python
import copy
from PIL import Image
from io import BytesIO
from PIL import Image

query_pt = copy.deepcopy(query_data)
index_pt = copy.deepcopy(index_data)
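The collapsed lines in both image-to-image hunks carry on with this comparison. As a rough sketch of the pattern (not the notebook's exact code: the zero-shot backbone name and the `artifact` ID from the fine-tuning run are assumptions):

```python
import finetuner

# Zero-shot baseline; 'resnet50' is a placeholder, the notebook's actual
# backbone may differ.
zero_shot_model = finetuner.build_model(name='resnet50')

# Fine-tuned model, loaded from the fine-tuning run's artifact ID (assumed
# to be available as `artifact` from an earlier cell).
fine_tuned_model = finetuner.get_model(artifact)

# Encode the deep copies with the zero-shot model and the originals with
# the fine-tuned one, then run the same nearest-neighbour search on both.
finetuner.encode(model=zero_shot_model, data=query_pt)
finetuner.encode(model=zero_shot_model, data=index_pt)
finetuner.encode(model=fine_tuned_model, data=query_data)
finetuner.encode(model=fine_tuned_model, data=index_data)

query_pt.match(index_pt, limit=10, metric='cosine')
query_data.match(index_data, limit=10, metric='cosine')
```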
10 changes: 5 additions & 5 deletions docs/notebooks/multilingual_text_to_image.ipynb
@@ -304,7 +304,7 @@
"id": "38bc9069-0f0e-47c6-8560-bf77ad200774",
"metadata": {},
"source": [
"## Before and After\n",
"## Before and after\n",
"We can directly compare the results of our fine-tuned model with an untrained multilingual clip model by displaying the matches each model has for the same query, while the differences between the results of the two models are quite subtle for some queries, the examples below clearly show that finetuning increses the quality of the search results:"
]
},
@@ -323,12 +323,12 @@
"ft_index = copy.deepcopy(index_data)\n",
"\n",
"zero_shot_text_encoder = build_model(\n",
" name = 'xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',\n",
" select_model = 'clip-text',\n",
" name='xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',\n",
" select_model='clip-text',\n",
")\n",
"zero_shot_image_encoder = build_model(\n",
" name = 'xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',\n",
" select_model = 'clip-vision',\n",
" name='xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',\n",
" select_model='clip-vision',\n",
")\n",
"\n",
"finetuner.encode(model=zero_shot_text_encoder, data=pt_query)\n",
10 changes: 5 additions & 5 deletions docs/notebooks/multilingual_text_to_image.md
@@ -191,7 +191,7 @@ please use `model = finetuner.get_model(artifact, is_onnx=True)`
```
<!-- #endregion -->

## Before and After
## Before and after
We can directly compare the results of our fine-tuned model with a pre-trained multilingual clip model by displaying the matches each model has for the same query. While the differences between the results of the two models are quite subtle for some queries, the examples below clearly show that finetuning increases the quality of the search results:

<!-- #region -->
@@ -205,12 +205,12 @@ ft_query = copy.deepcopy(query_data)
ft_index = copy.deepcopy(index_data)

zero_shot_text_encoder = build_model(
name = 'xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',
select_model = 'clip-text',
name='xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',
select_model='clip-text',
)
zero_shot_image_encoder = build_model(
name = 'xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',
select_model = 'clip-vision',
name='xlm-roberta-base-ViT-B-32::laion5b_s13b_b90k',
select_model='clip-vision',
)

finetuner.encode(model=zero_shot_text_encoder, data=pt_query)
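The hunks above only build the zero-shot towers; encoding and matching presumably follow in the collapsed lines. A minimal sketch of that step, assuming the run's `artifact` ID is in scope and that `finetuner.get_model` accepts the same `select_model` values as `build_model`:

```python
import finetuner

# Zero-shot side: the text tower encodes the queries, the vision tower
# encodes the image index.
finetuner.encode(model=zero_shot_text_encoder, data=pt_query)
finetuner.encode(model=zero_shot_image_encoder, data=pt_index)

# Fine-tuned side: load both towers from the run artifact (assumed ID).
ft_text_encoder = finetuner.get_model(artifact, select_model='clip-text')
ft_image_encoder = finetuner.get_model(artifact, select_model='clip-vision')
finetuner.encode(model=ft_text_encoder, data=ft_query)
finetuner.encode(model=ft_image_encoder, data=ft_index)

# Match text queries against the image index for both variants.
pt_query.match(pt_index, limit=10, metric='cosine')
ft_query.match(ft_index, limit=10, metric='cosine')
```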
12 changes: 6 additions & 6 deletions docs/notebooks/text_to_image.ipynb
@@ -305,8 +305,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before and After\n",
"We can directly compare the results of our fine-tuned model with an untrained clip model by displaying the matches each model has for the same query, while the differences between the results of the two models are quite subtle for some queries, the examples below clearly show that finetuning increses the quality of the search results:"
"## Before and after\n",
"We can directly compare the results of our fine-tuned model with a pre-trained clip model by displaying the matches each model has for the same query. While the differences between the results of the two models are quite subtle for some queries, the examples below clearly show that finetuning increases the quality of the search results:"
]
},
{
@@ -324,12 +324,12 @@
"ft_index = copy.deepcopy(index_data)\n",
"\n",
"zero_shot_text_encoder = build_model(\n",
" name = 'openai/clip-vit-base-patch32',\n",
" select_model = 'clip-text',\n",
" name='openai/clip-vit-base-patch32',\n",
" select_model='clip-text',\n",
")\n",
"zero_shot_image_encoder = build_model(\n",
" name = 'openai/clip-vit-base-patch32',\n",
" select_model = 'clip-vision',\n",
" name='openai/clip-vit-base-patch32',\n",
" select_model='clip-vision',\n",
")\n",
"\n",
"finetuner.encode(model=zero_shot_text_encoder, data=pt_query)\n",
12 changes: 6 additions & 6 deletions docs/notebooks/text_to_image.md
@@ -206,8 +206,8 @@ please use `model = finetuner.get_model(artifact, is_onnx=True)`
```
<!-- #endregion -->

## Before and After
We can directly compare the results of our fine-tuned model with an untrained clip model by displaying the matches each model has for the same query, while the differences between the results of the two models are quite subtle for some queries, the examples below clearly show that finetuning increses the quality of the search results:
## Before and after
We can directly compare the results of our fine-tuned model with a pre-trained clip model by displaying the matches each model has for the same query. While the differences between the results of the two models are quite subtle for some queries, the examples below clearly show that finetuning increases the quality of the search results:

<!-- #region -->
```python
@@ -221,12 +221,12 @@
ft_index = copy.deepcopy(index_data)

zero_shot_text_encoder = build_model(
name = 'openai/clip-vit-base-patch32',
select_model = 'clip-text',
name='openai/clip-vit-base-patch32',
select_model='clip-text',
)
zero_shot_image_encoder = build_model(
name = 'openai/clip-vit-base-patch32',
select_model = 'clip-vision',
name='openai/clip-vit-base-patch32',
select_model='clip-vision',
)

finetuner.encode(model=zero_shot_text_encoder, data=pt_query)
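Once both query sets carry matches, the before/after comparison is a side-by-side read of each model's top hits. A sketch of how that inspection might look, assuming DocArray-style Documents whose matches carry a `cosine` score and an image `uri`:

```python
def print_top_matches(query_doc, k=3):
    # Each match carries the cosine distance assigned by `.match(...)`
    # and the URI of the image it was loaded from.
    for m in query_doc.matches[:k]:
        print(f"{m.scores['cosine'].value:.4f}  {m.uri}")

print('zero-shot:')
print_top_matches(pt_query[0])
print('fine-tuned:')
print_top_matches(ft_query[0])
```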
4 changes: 2 additions & 2 deletions docs/notebooks/text_to_text.ipynb
@@ -338,8 +338,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before and After\n",
"We can directly compare the results of our fine-tuned model with a its zero-shot counterpart to getter a better idea of how finetuning affects the results of a search. While the zero-shot model is able to produce results that are very similar to the initial query, it is common for the topic of the question to change, with the structure staying the same. After fine-tuning, the returned questions are consistently relevant to the initial query, even in cases where the structure of the sentence is different.\n",
"## Before and after\n",
"We can directly compare the results of our fine-tuned model with its zero-shot counterpart to get a better idea of how finetuning affects the results of a search. While the zero-shot model is able to produce results that are very similar to the initial query, it is common for the topic of the question to change, with the structure staying the same. After fine-tuning, the returned questions are consistently relevant to the initial query, even in cases where the structure of the sentence is different.\n",
"\n",
"```python\n",
"import copy\n",
4 changes: 2 additions & 2 deletions docs/notebooks/text_to_text.md
@@ -227,8 +227,8 @@ query.match(index_data, limit=10, metric='cosine')
```

<!-- #region -->
## Before and After
We can directly compare the results of our fine-tuned model with a its zero-shot counterpart to getter a better idea of how finetuning affects the results of a search. While the zero-shot model is able to produce results that are very similar to the initial query, it is common for the topic of the question to change, with the structure staying the same. After fine-tuning, the returned questions are consistently relevant to the initial query, even in cases where the structure of the sentence is different.
## Before and after
We can directly compare the results of our fine-tuned model with its zero-shot counterpart to get a better idea of how finetuning affects the results of a search. While the zero-shot model is able to produce results that are very similar to the initial query, it is common for the topic of the question to change, with the structure staying the same. After fine-tuning, the returned questions are consistently relevant to the initial query, even in cases where the structure of the sentence is different.

```python
import copy
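For the text-to-text case the whole comparison fits in a few lines. A sketch of the end-to-end pattern (the zero-shot backbone name is a placeholder, and `query_data` is assumed to already hold the fine-tuned encodings and matches from earlier cells):

```python
import copy

import finetuner

pt_query = copy.deepcopy(query_data)
pt_index = copy.deepcopy(index_data)

# Zero-shot counterpart; 'bert-base-cased' is a placeholder backbone name.
zero_shot_model = finetuner.build_model(name='bert-base-cased')
finetuner.encode(model=zero_shot_model, data=pt_query)
finetuner.encode(model=zero_shot_model, data=pt_index)
pt_query.match(pt_index, limit=10, metric='cosine')

# Compare the top returned question for each query under both models.
for pt_doc, ft_doc in zip(pt_query, query_data):
    print('query     :', ft_doc.text)
    print('zero-shot :', pt_doc.matches[0].text)
    print('fine-tuned:', ft_doc.matches[0].text)
```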
