Commit: Rearrange Folder
khanhnamle1994 committed Jul 18, 2018
1 parent 50573f9 commit dcf3d29
Showing 9 changed files with 14 additions and 14 deletions.
12 changes: 6 additions & 6 deletions .ipynb_checkpoints/TensorBoard-Visualization-checkpoint.ipynb
@@ -89,7 +89,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's consider around 5000 images as part of the embedding."
"Let's consider around 2500 images as part of the embedding."
]
},
{
@@ -100,7 +100,7 @@
},
"outputs": [],
"source": [
"embed_count = 5000"
"embed_count = 2500"
]
},
{
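A note on what this smaller `embed_count` feeds into: the selected images are flattened and checkpointed so the TensorBoard Embedding Projector can read them. A minimal TF 1.x sketch of that wiring, assuming the Fashion-MNIST images are already loaded as a NumPy array `x_test` of shape (N, 28, 28); `log_dir`, the variable name, and `metadata.tsv` are illustrative choices, not taken from this commit:

```python
import os
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

embed_count = 2500
log_dir = "logs/embedding"  # hypothetical output directory

# Flatten the first embed_count 28x28 grayscale images into vectors.
features = x_test[:embed_count].reshape(embed_count, 28 * 28).astype("float32")

# The projector reads embeddings from a checkpointed variable.
embedding_var = tf.Variable(features, name="fashion_mnist_embedding")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver([embedding_var]).save(sess, os.path.join(log_dir, "embedding.ckpt"))

# Tell TensorBoard which tensor to visualize and where its labels live.
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = embedding_var.name
embedding.metadata_path = "metadata.tsv"  # one label per image, written separately
projector.visualize_embeddings(tf.summary.FileWriter(log_dir), config)
```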
@@ -310,7 +310,7 @@
"source": [
"The above figure represents the grayscale fashion products representation in lower dimension.Let us look more closer by zooming the picture as below:\n",
"\n",
"![](zoom-in-pic.png)"
"![zoom-in](images/zoom-in-pic.png)"
]
},
{
@@ -349,16 +349,16 @@
"**Principal Component Analysis** \n",
"A straightforward technique for reducing dimensions is Principal Component Analysis (PCA). The Embedding Projector computes the top 10 principal components. The menu lets you project those components onto any combination of two or three. PCA is a linear projection, often effective at examining global geometry.\n",
"\n",
"![PCA](PCA-Vis.gif)\n",
"![PCA](images/PCA-Vis.gif)\n",
"\n",
"**t-SNE** \n",
"A popular non-linear dimensionality reduction technique is t-SNE. The Embedding Projector offers both two- and three-dimensional t-SNE views. Layout is performed client-side animating every step of the algorithm. Because t-SNE often preserves some local structure, it is useful for exploring local neighborhoods and finding clusters.\n",
"\n",
"![T-SNE](Tsne-Vis.png)\n",
"![T-SNE](images/Tsne-Vis.png)\n",
"\n",
"**Custom** You can also construct specialized linear projections based on text searches for finding meaningful directions in space. To define a projection axis, enter two search strings or regular expressions. The program computes the centroids of the sets of points whose labels match these searches, and uses the difference vector between centroids as a projection axis.\n",
"\n",
"![Custom](Custom-Vis.png)\n",
"![Custom](images/Custom-Vis.png)\n",
"\n",
"## Further Exploration\n",
"You can explore visually by zooming, rotating, and panning using natural click-and-drag gestures. Hovering your mouse over a point will show any metadata for that point. You can also inspect nearest-neighbor subsets. Clicking on a point causes the right pane to list the nearest neighbors, along with distances to the current point. The nearest-neighbor points are also highlighted in the projection. It is sometimes useful to restrict the view to a subset of points and perform projections only on those points. To do so, you can select points in multiple ways:\n",
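An aside on the PCA view described in the hunk above: the projector's top-component projection can be approximated offline. A sketch with scikit-learn, reusing the flattened `features` array from the earlier sketch (the three-component choice mirrors the projector's 3-D view; nothing here comes from the commit itself):

```python
from sklearn.decomposition import PCA

# Project onto the top 3 principal components, roughly what the
# Embedding Projector's PCA view displays in three dimensions.
pca = PCA(n_components=3)
projected = pca.fit_transform(features)   # shape: (embed_count, 3)
print(pca.explained_variance_ratio_)      # variance captured per axis
```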
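The t-SNE view can be approximated the same way. A hedged sketch, again with scikit-learn (the perplexity and random_state values are illustrative, not from the notebook):

```python
from sklearn.manifold import TSNE

# 2-D t-SNE layout; unlike PCA this is non-linear and stochastic,
# so different random_state values yield different layouts.
layout = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
```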
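The custom projection reduces to a centroid difference. A small NumPy sketch of the idea (the label strings and the `labels` array of per-point label strings are assumptions for illustration):

```python
import numpy as np

# Centroids of the two label-matched point sets, e.g. "sneaker" vs "coat".
left = features[labels == "sneaker"].mean(axis=0)
right = features[labels == "coat"].mean(axis=0)

# The difference vector between the centroids becomes the projection axis.
axis = right - left
axis = axis / np.linalg.norm(axis)
coords = features @ axis   # each point's 1-D coordinate along the custom axis
```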
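Finally, the nearest-neighbor listing mentioned under "Further Exploration" is a plain k-NN query over the embedding vectors. A sketch with scikit-learn (the query index is an arbitrary example):

```python
from sklearn.neighbors import NearestNeighbors

# What the right pane lists when a point is clicked: its nearest
# neighbors in the embedding space, together with their distances.
nn = NearestNeighbors(n_neighbors=5).fit(features)
distances, indices = nn.kneighbors(features[42:43])   # hypothetical query point
```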
2 changes: 1 addition & 1 deletion .ipynb_checkpoints/VGG19-checkpoint.ipynb
@@ -313,7 +313,7 @@
"\n",
"CNNs used for image classification comprise two parts: they start with a series of pooling and convolution layers, and they end with a densely-connected classifier. The first part is called the **\"convolutional base\"** of the model. In the case of convnets, **\"feature extraction\"** will simply consist of taking the convolutional base of a previously-trained network, running the new data through it, and training a new classifier on top of the output.\n",
"\n",
"![feature-extraction](swapping_fc_classifier.png)\n",
"![feature-extraction](images/swapping_fc_classifier.png)\n",
"\n",
"Why only reuse the convolutional base? Could we reuse the densely-connected classifier as well? In general, it should be avoided. The reason is simply that the representations learned by the convolutional base are likely to be more generic and therefore more reusable: the feature maps of a convnet are presence maps of generic concepts over a picture, which is likely to be useful regardless of the computer vision problem at hand. On the other end, the representations learned by the classifier will necessarily be very specific to the set of classes that the model was trained on -- they will only contain information about the presence probability of this or that class in the entire picture. Additionally, representations found in densely-connected layers no longer contain any information about where objects are located in the input image: these layers get rid of the notion of space, whereas the object location is still described by convolutional feature maps. For problems where object location matters, densely-connected features would be largely useless.\n",
"\n",
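A minimal Keras sketch of the feature-extraction pattern this cell describes: keep the pretrained VGG19 convolutional base, freeze it, and train a fresh densely-connected classifier on top (the 150x150 input shape and the binary head are illustrative assumptions, not the notebook's exact configuration):

```python
from keras.applications import VGG19
from keras import layers, models

# Pretrained convolutional base only; include_top=False drops the FC classifier.
conv_base = VGG19(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
conv_base.trainable = False   # freeze so the generic pretrained features are kept

# New densely-connected classifier trained on top of the base's output.
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # hypothetical binary task
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["acc"])
```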
12 changes: 6 additions & 6 deletions TensorBoard-Visualization.ipynb
@@ -89,7 +89,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's consider around 5000 images as part of the embedding."
"Let's consider around 2500 images as part of the embedding."
]
},
{
@@ -100,7 +100,7 @@
},
"outputs": [],
"source": [
"embed_count = 5000"
"embed_count = 2500"
]
},
{
@@ -310,7 +310,7 @@
"source": [
"The above figure represents the grayscale fashion products representation in lower dimension.Let us look more closer by zooming the picture as below:\n",
"\n",
"![](zoom-in-pic.png)"
"![zoom-in](images/zoom-in-pic.png)"
]
},
{
@@ -349,16 +349,16 @@
"**Principal Component Analysis** \n",
"A straightforward technique for reducing dimensions is Principal Component Analysis (PCA). The Embedding Projector computes the top 10 principal components. The menu lets you project those components onto any combination of two or three. PCA is a linear projection, often effective at examining global geometry.\n",
"\n",
"![PCA](PCA-Vis.gif)\n",
"![PCA](images/PCA-Vis.gif)\n",
"\n",
"**t-SNE** \n",
"A popular non-linear dimensionality reduction technique is t-SNE. The Embedding Projector offers both two- and three-dimensional t-SNE views. Layout is performed client-side animating every step of the algorithm. Because t-SNE often preserves some local structure, it is useful for exploring local neighborhoods and finding clusters.\n",
"\n",
"![T-SNE](Tsne-Vis.png)\n",
"![T-SNE](images/Tsne-Vis.png)\n",
"\n",
"**Custom** You can also construct specialized linear projections based on text searches for finding meaningful directions in space. To define a projection axis, enter two search strings or regular expressions. The program computes the centroids of the sets of points whose labels match these searches, and uses the difference vector between centroids as a projection axis.\n",
"\n",
"![Custom](Custom-Vis.png)\n",
"![Custom](images/Custom-Vis.png)\n",
"\n",
"## Further Exploration\n",
"You can explore visually by zooming, rotating, and panning using natural click-and-drag gestures. Hovering your mouse over a point will show any metadata for that point. You can also inspect nearest-neighbor subsets. Clicking on a point causes the right pane to list the nearest neighbors, along with distances to the current point. The nearest-neighbor points are also highlighted in the projection. It is sometimes useful to restrict the view to a subset of points and perform projections only on those points. To do so, you can select points in multiple ways:\n",
2 changes: 1 addition & 1 deletion VGG19.ipynb
@@ -313,7 +313,7 @@
"\n",
"CNNs used for image classification comprise two parts: they start with a series of pooling and convolution layers, and they end with a densely-connected classifier. The first part is called the **\"convolutional base\"** of the model. In the case of convnets, **\"feature extraction\"** will simply consist of taking the convolutional base of a previously-trained network, running the new data through it, and training a new classifier on top of the output.\n",
"\n",
"![feature-extraction](swapping_fc_classifier.png)\n",
"![feature-extraction](images/swapping_fc_classifier.png)\n",
"\n",
"Why only reuse the convolutional base? Could we reuse the densely-connected classifier as well? In general, it should be avoided. The reason is simply that the representations learned by the convolutional base are likely to be more generic and therefore more reusable: the feature maps of a convnet are presence maps of generic concepts over a picture, which is likely to be useful regardless of the computer vision problem at hand. On the other end, the representations learned by the classifier will necessarily be very specific to the set of classes that the model was trained on -- they will only contain information about the presence probability of this or that class in the entire picture. Additionally, representations found in densely-connected layers no longer contain any information about where objects are located in the input image: these layers get rid of the notion of space, whereas the object location is still described by convolutional feature maps. For problems where object location matters, densely-connected features would be largely useless.\n",
"\n",
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
