From a3b33590f72213c9106c5e8938ccfb7d497aff6a Mon Sep 17 00:00:00 2001
From: Thierno Ibrahima DIOP
Date: Thu, 11 Apr 2019 18:52:56 +0000
Subject: [PATCH 1/3] Remove ambiguity in a sentence that may lead readers to
 think that, with integer encoding, different words can be represented by
 similar numbers

---
 site/en/r2/tutorials/text/word_embeddings.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/site/en/r2/tutorials/text/word_embeddings.ipynb b/site/en/r2/tutorials/text/word_embeddings.ipynb
index 311d7f206bf..68f87ef3860 100644
--- a/site/en/r2/tutorials/text/word_embeddings.ipynb
+++ b/site/en/r2/tutorials/text/word_embeddings.ipynb
@@ -103,7 +103,7 @@
  "\n",
  "* The integer-encoding is arbitrary (it does not capture any relationship between words).\n",
  "\n",
- "* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because different words may have a similar encoding, this feature-weight combination is not meaningful.\n",
+ "* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because different words having similarity may have totally different encoding, this feature-weight combination is not meaningful.\n",
  "\n",
  "### Word embeddings\n",
  "\n",

From 94024fa7d232d271d8b965dcc897ec7d138a4894 Mon Sep 17 00:00:00 2001
From: Thierno Ibrahima DIOP
Date: Thu, 11 Apr 2019 20:15:26 +0000
Subject: [PATCH 2/3] Corrections after @adammichaelwood's suggestions

---
 site/en/r2/tutorials/text/word_embeddings.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/site/en/r2/tutorials/text/word_embeddings.ipynb b/site/en/r2/tutorials/text/word_embeddings.ipynb
index 68f87ef3860..7952c53c79e 100644
--- a/site/en/r2/tutorials/text/word_embeddings.ipynb
+++ b/site/en/r2/tutorials/text/word_embeddings.ipynb
@@ -103,7 +103,7 @@
  "\n",
  "* The integer-encoding is arbitrary (it does not capture any relationship between words).\n",
  "\n",
- "* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because different words having similarity may have totally different encoding, this feature-weight combination is not meaningful.\n",
+ "* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because There is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.\n",
  "\n",
  "### Word embeddings\n",
  "\n",

From ad3c7d476dea799fcc7cbb7524934743b8dd4abe Mon Sep 17 00:00:00 2001
From: Thierno Ibrahima DIOP
Date: Mon, 15 Apr 2019 17:21:38 +0000
Subject: [PATCH 3/3] Fix typo

---
 site/en/r2/tutorials/text/word_embeddings.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/site/en/r2/tutorials/text/word_embeddings.ipynb b/site/en/r2/tutorials/text/word_embeddings.ipynb
index 7952c53c79e..c84129e311e 100644
--- a/site/en/r2/tutorials/text/word_embeddings.ipynb
+++ b/site/en/r2/tutorials/text/word_embeddings.ipynb
@@ -103,7 +103,7 @@
  "\n",
  "* The integer-encoding is arbitrary (it does not capture any relationship between words).\n",
  "\n",
- "* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because There is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.\n",
+ "* An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.\n",
  "\n",
  "### Word embeddings\n",
  "\n",
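For reference, the sentence these patches converge on is the tutorial's motivation for replacing arbitrary integer ids with a learned embedding. Below is a minimal sketch of that idea, assuming TensorFlow 2.x; the toy vocabulary and its ids are hypothetical, not taken from the tutorial notebook.

```python
import tensorflow as tf

# Hypothetical toy vocabulary: the integer ids are arbitrary, so numeric
# closeness of ids says nothing about word similarity ("hot" and "cold"
# receive adjacent ids here purely by accident).
vocab = {"<pad>": 0, "the": 1, "hot": 2, "cold": 3, "warm": 4}

# An Embedding layer learns one dense vector per id; after training, the
# geometry of these vectors (unlike the raw ids) can reflect similarity.
embedding = tf.keras.layers.Embedding(input_dim=len(vocab), output_dim=8)

# Look up vectors for a batch of integer-encoded words.
word_ids = tf.constant([[vocab["hot"], vocab["cold"], vocab["warm"]]])
vectors = embedding(word_ids)
print(vectors.shape)  # (1, 3, 8): one 8-dimensional vector per word
```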