Update 4_Neural_Style_Transfer_with_Eager_Execution.ipynb
Fixed grammar errors and typos.
fuzzythecat committed Nov 5, 2018
1 parent 3a6c5f1 commit d99c916
Showing 1 changed file with 3 additions and 3 deletions.
@@ -341,7 +341,7 @@
},
"cell_type": "markdown",
"source": [
"In order toview the outputs of our optimization, we are required to perform the inverse preprocessing step. Furthermore, since our optimized image may take its values anywhere between $- \\infty$ and $\\infty$, we must clip to maintain our values from within the 0-255 range. "
"In order to view the outputs of our optimization, we are required to perform the inverse preprocessing step. Furthermore, since our optimized image may take its values anywhere between $- \\infty$ and $\\infty$, we must clip to maintain our values from within the 0-255 range. "
]
},
{
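For context, the inverse preprocessing the cell above describes might look roughly like the sketch below, assuming the standard ImageNet channel means and BGR ordering used by `tf.keras.applications.vgg19.preprocess_input`; the notebook's actual helper may differ in detail.

```python
import numpy as np

def deprocess_img(processed_img):
    # Undo VGG19-style preprocessing: add back the ImageNet channel
    # means (BGR order), flip BGR -> RGB, then clip into 0-255.
    x = processed_img.copy()
    if x.ndim == 4:
        x = np.squeeze(x, 0)  # drop the batch dimension
    x[:, :, 0] += 103.939     # B channel mean
    x[:, :, 1] += 116.779     # G channel mean
    x[:, :, 2] += 123.68      # R channel mean
    x = x[:, :, ::-1]         # BGR -> RGB
    return np.clip(x, 0, 255).astype('uint8')
```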
@@ -380,7 +380,7 @@
},
"cell_type": "markdown",
"source": [
"### Define content and style representationst\n",
"### Define content and style representations\n",
"In order to get both the content and style representations of our image, we will look at some intermediate layers within our model. As we go deeper into the model, these intermediate layers represent higher and higher order features. In this case, we are using the network architecture VGG19, a pretrained image classification network. These intermediate layers are necessary to define the representation of content and style from our images. For an input image, we will try to match the corresponding style and content target representations at these intermediate layers. \n",
"\n",
"#### Why intermediate layers?\n",
@@ -1183,7 +1183,7 @@
"### What we covered:\n",
"\n",
"* We built several different loss functions and used backpropagation to transform our input image in order to minimize these losses\n",
" * In order to do this we had to load in an a **pretrained model** and used its learned feature maps to describe the content and style representation of our images.\n",
" * In order to do this we had to load in a **pretrained model** and use its learned feature maps to describe the content and style representation of our images.\n",
" * Our main loss functions were primarily computing the distance in terms of these different representations\n",
"* We implemented this with a custom model and **eager execution**\n",
" * We built our custom model with the Functional API \n",
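Under eager execution, the optimization loop summarized above reduces to gradient descent on the image itself. A rough sketch follows, with hypothetical names: `compute_loss` stands in for the combined style-plus-content loss and `preprocessed_content` for the preprocessed starting image; the notebook's own loop differs in detail (e.g., its 2018-era optimizer API).

```python
import tensorflow as tf

# The image is the variable being trained, not the network.
init_image = tf.Variable(preprocessed_content, dtype=tf.float32)
opt = tf.keras.optimizers.Adam(learning_rate=5.0)

for step in range(1000):
    with tf.GradientTape() as tape:
        loss = compute_loss(feature_model, init_image)  # hypothetical helper
    grads = tape.gradient(loss, init_image)        # d(loss)/d(image)
    opt.apply_gradients([(grads, init_image)])     # update the image pixels
```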
