
Commit

clean
jph00 committed Apr 25, 2022
1 parent 2f153dd commit 111e6c5
Showing 10 changed files with 21 additions and 39 deletions.
2 changes: 1 addition & 1 deletion clean/01_intro.ipynb
@@ -564,7 +564,7 @@
 "split_at_heading": true
 },
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 }
3 changes: 1 addition & 2 deletions clean/02_production.ipynb
@@ -121,7 +121,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import os\n",
 "key = os.environ.get('AZURE_SEARCH_KEY', 'XXX')"
 ]
 },
@@ -701,4 +700,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}
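
The surviving cell still uses `os` even though its `import os` line was deleted, presumably because an earlier cell's `from fastbook import *` already brings `os` into scope (an assumption; the diff doesn't show that cell). A minimal standalone sketch of the fallback pattern the cell relies on:

```python
import os

# Read an API key from the environment, defaulting to a placeholder
# string ('XXX') when the variable is unset.
key = os.environ.get('AZURE_SEARCH_KEY', 'XXX')
```

The default means the notebook still runs for readers without an Azure key; they just need to replace `'XXX'` with their own.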
4 changes: 2 additions & 2 deletions clean/04_mnist_basics.ipynb
@@ -620,7 +620,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"def mse(preds, targets): return ((preds-targets)**2).mean().sqrt()"
+"def mse(preds, targets): return ((preds-targets)**2).mean()"
 ]
 },
 {
@@ -975,7 +975,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"weights[0] *= 1.0001"
+"with torch.no_grad(): weights[0] *= 1.0001"
 ]
 },
 {
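
Two fixes land here: `mse` drops the trailing `.sqrt()` (with it, the function computed RMSE, not the MSE its name promises), and the manual weight tweak is wrapped in `torch.no_grad()` so autograd doesn't object to an in-place update of a tensor that requires gradients. A small sketch of the corrected loss, with NumPy standing in for PyTorch:

```python
import numpy as np

def mse(preds, targets):
    # Mean of squared differences -- no final sqrt, matching the fix above.
    return ((preds - targets) ** 2).mean()

preds = np.array([2.0, 0.0])
targets = np.array([0.0, 0.0])
mse(preds, targets)   # (4.0 + 0.0) / 2 = 2.0
```

The distinction matters because the book's later gradient arithmetic assumes the loss is a plain mean of squares.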
12 changes: 7 additions & 5 deletions clean/09_tabular.ipynb
@@ -20,7 +20,6 @@
 "source": [
 "#hide\n",
 "from fastbook import *\n",
-"from kaggle import api\n",
 "from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype\n",
 "from fastai.tabular.all import *\n",
 "from sklearn.ensemble import RandomForestRegressor\n",
@@ -95,7 +94,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"path = URLs.path('bluebook')\n",
+"comp = 'bluebook-for-bulldozers'\n",
+"path = URLs.path(comp)\n",
 "path"
 ]
 },
@@ -115,10 +115,12 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"from kaggle import api\n",
+"\n",
 "if not path.exists():\n",
 " path.mkdir(parents=true)\n",
-" api.competition_download_cli('bluebook-for-bulldozers', path=path)\n",
-" file_extract(path/'bluebook-for-bulldozers.zip')\n",
+" api.competition_download_cli(comp, path=path)\n",
+" shutil.unpack_archive(str(path/f'{comp}.zip'), str(path))\n",
 "\n",
 "path.ls(file_type='text')"
 ]
@@ -1398,7 +1400,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 }
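
The download cell swaps fastai's `file_extract` for the standard library's `shutil.unpack_archive`, which infers the archive format from the filename but expects plain string paths (hence the `str(...)` wrappers around the `Path` objects). A self-contained sketch with a throwaway zip in a temp directory:

```python
import os
import shutil
import tempfile

# Build a tiny zip archive, then extract it the way the updated cell does.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'src')
os.makedirs(src)
with open(os.path.join(src, 'train.csv'), 'w') as f:
    f.write('id,price\n1,10\n')

archive = shutil.make_archive(os.path.join(tmp, 'comp'), 'zip', src)
dest = os.path.join(tmp, 'out')
os.makedirs(dest)
shutil.unpack_archive(archive, dest)   # format inferred from '.zip'
```

Using the stdlib here removes one fastai-specific helper from a cell that new readers often run outside a fastai environment.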
2 changes: 1 addition & 1 deletion clean/10_nlp.ipynb
@@ -668,7 +668,7 @@
 "split_at_heading": true
 },
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 }
10 changes: 5 additions & 5 deletions clean/12_nlp_dive.ipynb
@@ -701,7 +701,7 @@
 "source": [
 "1. If the dataset for your project is so big and complicated that working with it takes a significant amount of time, what should you do?\n",
 "1. Why do we concatenate the documents in our dataset before creating a language model?\n",
-"1. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make to ou model?\n",
+"1. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make to our model?\n",
 "1. How can we share a weight matrix across multiple layers in PyTorch?\n",
 "1. Write a module that predicts the third word given the previous two words of a sentence, without peeking.\n",
 "1. What is a recurrent neural network?\n",
@@ -725,13 +725,13 @@
 "1. Why does it help to have two hidden states in the LSTM architecture? What is the purpose of each one?\n",
 "1. What are these two states called in an LSTM?\n",
 "1. What is tanh, and how is it related to sigmoid?\n",
-"1. What is the purpose of this code in `LSTMCell`: `h = torch.stack([h, input], dim=1)`\n",
+"1. What is the purpose of this code in `LSTMCell`: `h = torch.cat([h, input], dim=1)`\n",
 "1. What does `chunk` do in PyTorch?\n",
 "1. Study the refactored version of `LSTMCell` carefully to ensure you understand how and why it does the same thing as the non-refactored version.\n",
 "1. Why can we use a higher learning rate for `LMModel6`?\n",
 "1. What are the three regularization techniques used in an AWD-LSTM model?\n",
 "1. What is \"dropout\"?\n",
-"1. Why do we scale the weights with dropout? Is this applied during training, inference, or both?\n",
+"1. Why do we scale the activations with dropout? Is this applied during training, inference, or both?\n",
 "1. What is the purpose of this line from `Dropout`: `if not self.training: return x`\n",
 "1. Experiment with `bernoulli_` to understand how it works.\n",
 "1. How do you set your model in training mode in PyTorch? In evaluation mode?\n",
@@ -753,7 +753,7 @@
 "source": [
 "1. In `LMModel2`, why can `forward` start with `h=0`? Why don't we need to say `h=torch.zeros(...)`?\n",
 "1. Write the code for an LSTM from scratch (you may refer to <<lstm>>).\n",
-"1. Search the internet for the GRU architecture and implement it from scratch, and try training a model. See if you can get results similar to those we saw in this chapter. Compare you results to the results of PyTorch's built in `GRU` module.\n",
+"1. Search the internet for the GRU architecture and implement it from scratch, and try training a model. See if you can get results similar to those we saw in this chapter. Compare your results to the results of PyTorch's built in `GRU` module.\n",
 "1. Take a look at the source code for AWD-LSTM in fastai, and try to map each of the lines of code to the concepts shown in this chapter."
 ]
 },
@@ -770,7 +770,7 @@
 "split_at_heading": true
 },
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 }
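
The corrected questionnaire item asks about scaling *activations*, not weights: with inverted dropout, surviving activations are scaled by 1/(1-p) during training so their expected value is unchanged, while inference applies no dropout at all (the `if not self.training: return x` short-circuit the next question refers to). A torch-free sketch of the idea in plain Python (a hypothetical helper, not fastai's `Dropout`):

```python
import random

def dropout(xs, p=0.5, training=True):
    # Inference: identity, mirroring `if not self.training: return x`.
    if not training:
        return xs
    # Training: zero each activation with probability p, and scale the
    # survivors by 1/(1-p) so the expected value matches inference.
    return [0.0 if random.random() < p else x / (1 - p) for x in xs]

acts = [1.0, 2.0, 3.0, 4.0]
dropout(acts, training=False)   # unchanged at inference
dropout(acts, p=0.5)            # each element is either 0.0 or doubled
```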
11 changes: 1 addition & 10 deletions clean/13_convolutions.ipynb
@@ -757,8 +757,8 @@
 "source": [
 "def conv(ni, nf, ks=3, act=True):\n",
 " layers = [nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)]\n",
-" layers.append(nn.BatchNorm2d(nf))\n",
 " if act: layers.append(nn.ReLU())\n",
+" layers.append(nn.BatchNorm2d(nf))\n",
 " return nn.Sequential(*layers)"
 ]
 },
@@ -789,15 +789,6 @@
 "learn = fit(5, lr=0.1)"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"learn = fit(5, lr=0.1)"
-]
-},
 {
 "cell_type": "markdown",
 "metadata": {},
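
Besides deleting a duplicated `fit` cell, this change reorders the `conv` builder so BatchNorm is appended *after* the optional ReLU, giving a conv → activation → normalization block. A torch-free sketch of just the ordering logic, with layer names standing in for `nn` modules:

```python
def conv_block_order(act=True):
    # Mirrors the reordered builder above: normalization is appended
    # after the optional activation, never between conv and ReLU.
    layers = ["Conv2d"]
    if act:
        layers.append("ReLU")
    layers.append("BatchNorm2d")
    return layers

conv_block_order()           # ['Conv2d', 'ReLU', 'BatchNorm2d']
conv_block_order(act=False)  # ['Conv2d', 'BatchNorm2d']
```

Whether norm goes before or after the activation is a design choice the chapter discusses; this commit settles the example on activation-first.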
2 changes: 1 addition & 1 deletion clean/15_arch_details.ipynb
@@ -156,7 +156,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"head = create_head(512*4, 2, ps=0.5)"
+"head = create_head(512*2, 2, ps=0.5)"
 ]
 },
 {
4 changes: 2 additions & 2 deletions clean/16_accel_sgd.ipynb
@@ -409,7 +409,7 @@
 "1. How can you get the list of events available to you when writing a callback?\n",
 "1. Write the `ModelResetter` callback (without peeking).\n",
 "1. How can you access the necessary attributes of the training loop inside a callback? When can you use or not use the shortcuts that go with them?\n",
-"1. How can a callback influence the control flow of the training loop.\n",
+"1. How can a callback influence the control flow of the training loop?\n",
 "1. Write the `TerminateOnNaN` callback (without peeking, if possible).\n",
 "1. How do you make sure your callback runs after or before another callback?"
 ]
@@ -427,7 +427,7 @@
 "source": [
 "1. Look up the \"Rectified Adam\" paper, implement it using the general optimizer framework, and try it out. Search for other recent optimizers that work well in practice, and pick one to implement.\n",
 "1. Look at the mixed-precision callback with the documentation. Try to understand what each event and line of code does.\n",
-"1. Implement your own version of ther learning rate finder from scratch. Compare it with fastai's version.\n",
+"1. Implement your own version of the learning rate finder from scratch. Compare it with fastai's version.\n",
 "1. Look at the source code of the callbacks that ship with fastai. See if you can find one that's similar to what you're looking to do, to get some inspiration."
 ]
 },
10 changes: 0 additions & 10 deletions clean/17_foundations.ipynb
@@ -12,16 +12,6 @@
 "fastbook.setup_book()"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"#hide\n",
-"from fastai.gen_doc.nbdoc import *"
-]
-},
 {
 "cell_type": "markdown",
 "metadata": {},
