Edit README and project notebook for new FloydHub dataset
mcleonard committed Aug 24, 2017
1 parent 907e8a2 commit cf86fcb
Showing 2 changed files with 26 additions and 73 deletions.
29 changes: 7 additions & 22 deletions image-classification/README.md
@@ -16,29 +16,24 @@ You are not required to use FloydHub for this project, but we've provided instructions

(a page with an authentication token will open; copy the token into your terminal)

2. Clone this repository:

git clone https://github.com/ludwiktrammer/deep-learning.git

Note: There are a couple of minor differences between this repository and the original Udacity repository. You can read about them [in the README](https://github.com/ludwiktrammer/deep-learning/tree/master/image-classification#how-is-this-repository-different-from-the-original). To follow these instructions, you need to use this repository.

3. Enter the folder for the image classification project:
4. Enter the folder for the image classification project:

cd image-classification

4. Initiate a Floyd project:
5. Initiate a Floyd project:

floyd init dlnd_image_classification

5. Run the project:
6. Run the project:

floyd run --gpu --env tensorflow --mode jupyter --data diSgciLH4WA7HpcHNasP9j
floyd run --data mat_udacity/datasets/udacity-cifar-10/1:cifar --mode jupyter --gpu --env tensorflow-1.2

It will run on a machine with a GPU (`--gpu`), using a TensorFlow environment (`--env tensorflow`), as a Jupyter notebook (`--mode jupyter`), with Floyd's built-in cifar-10 dataset available (`--data diSgciLH4WA7HpcHNasP9j`).
It will run on a machine with a GPU (`--gpu`), using a TensorFlow environment (`--env tensorflow-1.2`), as a Jupyter notebook (`--mode jupyter`), with the cifar-10 dataset available (`--data mat_udacity/datasets/udacity-cifar-10/1:cifar`).

6. Wait for the Jupyter notebook to become available and then access the URL displayed in the terminal (described as "path to jupyter notebook"). You will see the notebook.
7. Wait for the Jupyter notebook to become available and then access the URL displayed in the terminal (described as "path to jupyter notebook"). You will see the notebook.

7. Remember to explicitly stop the experiment when you are not using the notebook. As long as it runs (even in the background) it will cost GPU hours. You can stop an experiment in the ["Experiments" section on floyd.com](https://www.floydhub.com/experiments) or using the `floyd stop` command:
8. Remember to explicitly stop the experiment when you are not using the notebook. As long as it runs (even in the background) it will cost GPU hours. You can stop an experiment in the ["Experiments" section on floyd.com](https://www.floydhub.com/experiments) or using the `floyd stop` command:

floyd stop ID

@@ -53,13 +48,3 @@ Alternatively, if you already stopped the experiment, you can still download the
(where ID is the "RUN ID" displayed in the terminal when you run the project; if you lost it you can also find it in the ["Experiments" section on floyd.com](https://www.floydhub.com/experiments))

Just run the command above, download `dlnd_image_classification.ipynb` and replace your local version with the newly downloaded one.

## How is this repository different from [the original](https://github.com/udacity/deep-learning)?

1. I added support for Floyd's built-in cifar-10 dataset. If its presence is detected, it will be used, so there is no need to download anything. ([see the commit](https://github.com/ludwiktrammer/deep-learning/commit/2e84ff7852905f154f1692f67ca15da28ac43149), [learn more about datasets provided by Floyd](http://docs.floydhub.com/guides/datasets/))

2. I added a `floyd_requirements.txt` file, so an additional dependency is automatically taken care of. ([see the commit](https://github.com/ludwiktrammer/deep-learning/commit/80b459411d4395dacf8f46be0b028c81858bd97a), [learn more about `floyd_requirements.txt` files](http://docs.floydhub.com/home/installing_dependencies/))

3. I added a `.floydignore` file to stop local data from being uploaded to Floyd, which wastes time and may even result in a timeout ([see the commit](https://github.com/ludwiktrammer/deep-learning/commit/30d4b536b67366feef38425ce1406e969452717e), [learn more about `.floydignore` files](http://docs.floydhub.com/home/floyd_ignore/))

4. I added this README
70 changes: 19 additions & 51 deletions image-classification/dlnd_image_classification.ipynb
@@ -15,9 +15,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
@@ -32,7 +30,7 @@
"cifar10_dataset_folder_path = 'cifar-10-batches-py'\n",
"\n",
"# Use Floyd's cifar-10 dataset if present\n",
"floyd_cifar10_location = '/input/cifar-10/python.tar.gz'\n",
"floyd_cifar10_location = '/cifar/cifar-10-python.tar.gz'\n",
"if isfile(floyd_cifar10_location):\n",
" tar_gz_path = floyd_cifar10_location\n",
"else:\n",
@@ -87,9 +85,7 @@
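
The dataset cell above is truncated at its `else:` branch. As a rough sketch of the full pattern (the download URL and the fallback filename are assumptions, not taken from this commit), it might read:

    from os.path import isfile
    from urllib.request import urlretrieve

    cifar10_dataset_folder_path = 'cifar-10-batches-py'

    # Use the dataset FloydHub mounts at /cifar when the project is run with
    # --data mat_udacity/datasets/udacity-cifar-10/1:cifar ...
    floyd_cifar10_location = '/cifar/cifar-10-python.tar.gz'
    if isfile(floyd_cifar10_location):
        tar_gz_path = floyd_cifar10_location
    else:
        # ... otherwise fall back to downloading the archive locally (assumed URL).
        tar_gz_path = 'cifar-10-python.tar.gz'
        if not isfile(tar_gz_path):
            urlretrieve('https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path)
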
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
@@ -116,9 +112,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def normalize(x):\n",
@@ -150,9 +144,7 @@
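
Only the `def normalize(x):` line of the cell above survives the diff. A minimal sketch, assuming the function rescales 8-bit pixel values into the [0, 1] range:

    import numpy as np

    def normalize(x):
        # x: array of image data with values in 0..255 (assumed);
        # return the same data scaled to 0..1.
        return np.array(x) / 255.0
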
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def one_hot_encode(x):\n",
@@ -190,9 +182,7 @@
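
The `one_hot_encode` cell above is likewise cut off. A sketch, assuming integer labels for the ten CIFAR-10 classes:

    import numpy as np

    def one_hot_encode(x):
        # Map a list of labels in 0..9 to an array of one-hot rows.
        labels = np.array(x)
        one_hot = np.zeros((labels.size, 10))
        one_hot[np.arange(labels.size), labels] = 1
        return one_hot
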
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
@@ -264,9 +254,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
@@ -329,9 +317,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n",
@@ -366,9 +352,7 @@
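
Only the signature of `conv2d_maxpool` is visible above. One possible TensorFlow 1.x implementation, sketched on the assumption that the ksize and stride arguments are (height, width) tuples and that `tf` is the TensorFlow module imported earlier in the notebook:

    def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
        # Convolution + ReLU followed by max pooling (illustrative, not the notebook's exact code).
        depth = x_tensor.get_shape().as_list()[-1]
        weights = tf.Variable(tf.truncated_normal(
            [conv_ksize[0], conv_ksize[1], depth, conv_num_outputs], stddev=0.05))
        bias = tf.Variable(tf.zeros([conv_num_outputs]))
        conv = tf.nn.conv2d(x_tensor, weights,
                            strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
        conv = tf.nn.relu(tf.nn.bias_add(conv, bias))
        return tf.nn.max_pool(conv,
                              ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                              strides=[1, pool_strides[0], pool_strides[1], 1],
                              padding='SAME')
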
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def flatten(x_tensor):\n",
@@ -398,9 +382,7 @@
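
A sketch of the truncated `flatten` cell above, assuming a (batch, height, width, depth) input tensor:

    def flatten(x_tensor):
        # Collapse the spatial dimensions into a single feature dimension.
        shape = x_tensor.get_shape().as_list()
        return tf.reshape(x_tensor, [-1, shape[1] * shape[2] * shape[3]])
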
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def fully_conn(x_tensor, num_outputs):\n",
@@ -433,9 +415,7 @@
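
A sketch of `fully_conn` from the cell above, assuming a ReLU-activated dense layer:

    def fully_conn(x_tensor, num_outputs):
        # Dense layer with ReLU activation.
        num_inputs = x_tensor.get_shape().as_list()[-1]
        weights = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], stddev=0.05))
        bias = tf.Variable(tf.zeros([num_outputs]))
        return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))
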
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def output(x_tensor, num_outputs):\n",
@@ -473,9 +453,7 @@
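
A sketch of the truncated `output` cell above; the layer is assumed to be linear so its logits can feed a softmax cross-entropy loss:

    def output(x_tensor, num_outputs):
        # Linear (no activation) output layer producing the class logits.
        num_inputs = x_tensor.get_shape().as_list()[-1]
        weights = tf.Variable(tf.truncated_normal([num_inputs, num_outputs], stddev=0.05))
        bias = tf.Variable(tf.zeros([num_outputs]))
        return tf.add(tf.matmul(x_tensor, weights), bias)
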
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def conv_net(x, keep_prob):\n",
@@ -564,9 +542,7 @@
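
Only `def conv_net(x, keep_prob):` is shown above. One way the helper layers could be assembled; the layer count and sizes here are illustrative choices, not the notebook's:

    def conv_net(x, keep_prob):
        # Two conv/max-pool blocks, flatten, one dense layer with dropout, then logits.
        conv = conv2d_maxpool(x, 32, (3, 3), (1, 1), (2, 2), (2, 2))
        conv = conv2d_maxpool(conv, 64, (3, 3), (1, 1), (2, 2), (2, 2))
        flat = flatten(conv)
        fc = fully_conn(flat, 512)
        fc = tf.nn.dropout(fc, keep_prob)
        return output(fc, 10)  # 10 CIFAR-10 classes
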
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n",
@@ -599,9 +575,7 @@
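
A sketch of the truncated `train_neural_network` cell above, assuming the placeholders `x`, `y` and `keep_prob` are defined earlier in the notebook:

    def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
        # Run a single optimization step on one batch, applying dropout during training.
        session.run(optimizer, feed_dict={x: feature_batch,
                                          y: label_batch,
                                          keep_prob: keep_probability})
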
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n",
@@ -657,9 +631,7 @@
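
A sketch of `print_stats` from the cell above, again assuming the `x`, `y`, `keep_prob` placeholders plus `valid_features` and `valid_labels` loaded in earlier cells:

    def print_stats(session, feature_batch, label_batch, cost, accuracy):
        # Loss on the current training batch; accuracy on the validation set with dropout disabled.
        loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
        valid_acc = session.run(accuracy,
                                feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
        print('Loss: {:.4f}  Validation Accuracy: {:.4f}'.format(loss, valid_acc))
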
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
@@ -690,9 +662,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
@@ -733,9 +703,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
@@ -830,9 +798,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
"version": "3.6.0"
}
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat_minor": 1
}
