
Update colab examples to point to version/2 of universal sentence encoders.

PiperOrigin-RevId: 199840656
TensorFlow Hub Authors authored and andresusanopinto committed Jun 8, 2018
1 parent 6860621 commit 2f776d3
Showing 4 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion docs/tutorials/text_classification_with_tf_hub.ipynb
@@ -470,7 +470,7 @@
"1. **Regression on sentiment**: we used a classifier to assign each example into a polarity class. But we actually have another categorical feature at our disposal - sentiment. Here classes actually represent a scale and the underlying value (positive/negative) could be well mapped into a continuous range. We could make use of this property by computing a regression ([DNN Regressor](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/DNNRegressor)) instead of a classification ([DNN Classifier](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/DNNClassifier)).\n",
"2. **Larger module**: for the purposes of this tutorial we used a small module to restrict the memory use. There are modules with larger vocabularies and larger embedding space that could give additional accuracy points.\n",
"3. **Parameter tuning**: we can improve the accuracy by tuning the meta-parameters like the learning rate or the number of steps, especially if we use a different module. A validation set is very important if we want to get any reasonable results, because it is very easy to set-up a model that learns to predict the training data without generalizing well to the test set.\n",
"4. **More complex model**: we used a module that computes a sentence embedding by embedding each individual word and then combining them with average. One could also use a sequential module (e.g. [Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/1) module) to better capture the nature of sentences. Or an ensemble of two or more TF-Hub modules.\n",
"4. **More complex model**: we used a module that computes a sentence embedding by embedding each individual word and then combining them with average. One could also use a sequential module (e.g. [Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/2) module) to better capture the nature of sentences. Or an ensemble of two or more TF-Hub modules.\n",
"5. **Regularization**: to prevent overfitting, we could try to use an optimizer that does some sort of regularization, for example [Proximal Adagrad Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/ProximalAdagradOptimizer).\n"
]
},
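As a rough illustration of points 1 and 5 above, here is a minimal sketch assuming the tutorial's TF 1.x estimator setup and an input feature named `"sentence"`; the module handle and hyperparameter values are placeholders, not the tutorial's exact choices:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Embed raw sentence strings with a TF-Hub text-embedding module.
embedded_text_feature_column = hub.text_embedding_column(
    key="sentence",
    module_spec="https://tfhub.dev/google/universal-sentence-encoder/2")

# Point 1: regress onto a continuous sentiment scale instead of
# classifying into polarity classes.
# Point 5: ProximalAdagradOptimizer applies L1/L2 regularization
# during training to discourage overfitting.
estimator = tf.estimator.DNNRegressor(
    hidden_units=[500, 100],
    feature_columns=[embedded_text_feature_column],
    optimizer=tf.train.ProximalAdagradOptimizer(
        learning_rate=0.003,
        l2_regularization_strength=0.001))
```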
2 changes: 1 addition & 1 deletion examples/README.md
@@ -9,7 +9,7 @@ Shows how to solve a problem on Kaggle with TF-Hub.

#### [`colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb`](colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb)

- Explores text semantic similarity with the [Universal Encoder Module](https://tfhub.dev/google/universal-sentence-encoder/1).
+ Explores text semantic similarity with the [Universal Encoder Module](https://tfhub.dev/google/universal-sentence-encoder/2).


#### [`colab/tf_hub_generative_image_module.ipynb`](colab/tf_hub_generative_image_module.ipynb)
@@ -160,7 +160,7 @@
},
"outputs": [],
"source": [
"module_url = \"https://tfhub.dev/google/universal-sentence-encoder/1\" #@param [\"https://tfhub.dev/google/universal-sentence-encoder/1\", \"https://tfhub.dev/google/universal-sentence-encoder-large/1\"]"
"module_url = \"https://tfhub.dev/google/universal-sentence-encoder/2\" #@param [\"https://tfhub.dev/google/universal-sentence-encoder/2\", \"https://tfhub.dev/google/universal-sentence-encoder-large/2\"]"
]
},
{
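For reference, a minimal sketch of how a `module_url` selected this way is typically consumed in the TF 1.x colab; the sample sentences here are illustrative:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

module_url = "https://tfhub.dev/google/universal-sentence-encoder/2"
embed = hub.Module(module_url)
messages = ["How old are you?", "What is your age?"]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    embeddings = sess.run(embed(messages))

# The embeddings are approximately unit-norm, so the inner product is
# a reasonable semantic-similarity score.
similarity = np.inner(embeddings, embeddings)
```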
@@ -69,7 +69,7 @@
"id": "j0HuiScHQ3OK"
},
"source": [
"This Colab illustrates how to use the Universal Sentence Encoder-Lite for sentence similarity task. This module is very similar to [Universal Sentence Encoder](https://www.tensorflow.org/hub/modules/google/universal-sentence-encoder/1) with the only difference that you need to run [SentencePiece](https://github.com/google/sentencepiece) processing on your input sentences.\n",
"This Colab illustrates how to use the Universal Sentence Encoder-Lite for sentence similarity task. This module is very similar to [Universal Sentence Encoder](https://www.tensorflow.org/hub/modules/google/universal-sentence-encoder/2) with the only difference that you need to run [SentencePiece](https://github.com/google/sentencepiece) processing on your input sentences.\n",
"\n",
"The Universal Sentence Encoder makes getting sentence level embeddings as easy as it has historically been to lookup the embeddings for individual words. The sentence embeddings can then be trivially used to compute sentence level meaning similarity as well as to enable better performance on downstream classification tasks using less supervised training data."
]
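As a sketch of the SentencePiece step mentioned above (assuming TF 1.x and the `sentencepiece` pip package; the colab obtains the module's own SentencePiece model through its `spm_path` signature):

```python
import sentencepiece as spm
import tensorflow as tf
import tensorflow_hub as hub

module = hub.Module(
    "https://tfhub.dev/google/universal-sentence-encoder-lite/2")

# The lite module ships with its own SentencePiece model; fetch its
# path via the module's "spm_path" signature and load a processor.
with tf.Session() as sess:
    spm_path = sess.run(module(signature="spm_path"))

sp = spm.SentencePieceProcessor()
sp.Load(spm_path)

# Input sentences must be converted to SentencePiece IDs before
# they are fed to the module.
ids = [sp.EncodeAsIds(s) for s in ["Hello world.", "How are you?"]]
```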
@@ -172,7 +172,7 @@
},
"outputs": [],
"source": [
"module = hub.Module(\"https://tfhub.dev/google/universal-sentence-encoder-lite/1\")"
"module = hub.Module(\"https://tfhub.dev/google/universal-sentence-encoder-lite/2\")"
]
},
{
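Unlike the regular encoder, the lite module takes token IDs packed as the components of a `tf.SparseTensor` rather than raw strings. A sketch of wiring that up, continuing from the module and SentencePiece processor above (placeholder names are illustrative):

```python
# The lite module expects token IDs as sparse-tensor components:
# feed values, indices, and dense_shape built from EncodeAsIds output.
input_placeholder = tf.sparse_placeholder(tf.int64, shape=[None, None])
encodings = module(
    inputs=dict(
        values=input_placeholder.values,
        indices=input_placeholder.indices,
        dense_shape=input_placeholder.dense_shape))
```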
