Merge pull request #2409 from d2l-ai/master
Release 1.0.0-beta0
astonzhang committed Dec 15, 2022
2 parents ca706d1 + 1d37244 commit 7047d10
Showing 212 changed files with 75,262 additions and 3,950 deletions.
9 changes: 5 additions & 4 deletions .gitignore
@@ -15,18 +15,19 @@ build/_build
build/img
build/data
build/d2l
/chapter_attention-mechanisms/fra-eng.zip
/chapter_attention-mechanisms-and-transformers/fra-eng.zip
/chapter_recurrent-neural-networks/fra-eng.zip
img/*.pdf
aclImdb*
/_build/
graffle/*/*.svg
graffle/*/*.pdf
build/
/chapter_deep-learning-computation/mydict
/chapter_deep-learning-computation/x-file
/chapter_deep-learning-computation/x-files
/chapter_builders-guide/mydict
/chapter_builders-guide/x-file
/chapter_builders-guide/x-files
.idea
.vscode
.pytest_cache
static/latex_style/PT1*
/chapter_hyperparameter_optimization/std.out
10 changes: 10 additions & 0 deletions Jenkinsfile
@@ -41,6 +41,16 @@ stage("Build and Publish") {
./static/cache.sh store _build/eval_mxnet/data
"""

sh label: "Execute Notebooks [Jax]", script: """set -ex
conda activate ${ENV_NAME}
./static/cache.sh restore _build/eval_jax/data
export XLA_PYTHON_CLIENT_MEM_FRACTION=.70
export TF_CPP_MIN_LOG_LEVEL=3
export TF_FORCE_GPU_ALLOW_GROWTH=true
d2lbook build eval --tab jax
./static/cache.sh store _build/eval_jax/data
"""

sh label: "Execute Notebooks [TensorFlow]", script: """set -ex
conda activate ${ENV_NAME}
./static/cache.sh restore _build/eval_tensorflow/data
1 change: 0 additions & 1 deletion README.md
@@ -97,4 +97,3 @@ This open source book is made available under the Creative Commons Attribution-S
The sample and reference code within this open source book is made available under a modified MIT license. See the [LICENSE-SAMPLECODE](LICENSE-SAMPLECODE) file.

[Chinese version](https://github.com/d2l-ai/d2l-zh) | [Discuss and report issues](https://discuss.d2l.ai/) | [Code of conduct](CODE_OF_CONDUCT.md) | [Other Information](INFO.md)

@@ -128,15 +128,15 @@ This can be written into code, and freely optimized even for billions of coin fl
#@tab mxnet
# Set up our data
n_H = 8675309
n_T = 25624
n_T = 256245
# Initialize our parameters
theta = np.array(0.5)
theta.attach_grad()
# Perform gradient descent
lr = 0.00000000001
for iter in range(10):
lr = 1e-9
for iter in range(100):
with autograd.record():
loss = -(n_H * np.log(theta) + n_T * np.log(1 - theta))
loss.backward()
@@ -150,14 +150,14 @@ theta, n_H / (n_H + n_T)
#@tab pytorch
# Set up our data
n_H = 8675309
n_T = 25624
n_T = 256245
# Initialize our parameters
theta = torch.tensor(0.5, requires_grad=True)
# Perform gradient descent
lr = 0.00000000001
for iter in range(10):
lr = 1e-9
for iter in range(100):
loss = -(n_H * torch.log(theta) + n_T * torch.log(1 - theta))
loss.backward()
with torch.no_grad():
@@ -172,14 +172,14 @@ theta, n_H / (n_H + n_T)
#@tab tensorflow
# Set up our data
n_H = 8675309
n_T = 25624
n_T = 256245
# Initialize our parameters
theta = tf.Variable(tf.constant(0.5))
# Perform gradient descent
lr = 0.00000000001
for iter in range(10):
lr = 1e-9
for iter in range(100):
with tf.GradientTape() as t:
loss = -(n_H * tf.math.log(theta) + n_T * tf.math.log(1 - theta))
theta.assign_sub(lr * t.gradient(loss, theta))
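
Since this release also adds a JAX build stage to the Jenkinsfile above, a corresponding JAX tab for this example is a natural companion. The sketch below is not part of this diff; the imports and the use of `jax.grad` in place of an autograd tape are assumptions, while the data and hyperparameters mirror the tabs above.

```
#@tab jax
import jax
from jax import numpy as jnp

# Set up our data
n_H = 8675309
n_T = 256245

# Negative log-likelihood of the observed coin flips
def loss_fn(theta):
    return -(n_H * jnp.log(theta) + n_T * jnp.log(1 - theta))

# Initialize our parameters and perform gradient descent
theta = jnp.array(0.5)
lr = 1e-9
for iter in range(100):
    theta = theta - lr * jax.grad(loss_fn)(theta)

theta, n_H / (n_H + n_T)
```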
41 changes: 20 additions & 21 deletions chapter_appendix-tools-for-deep-learning/aws.md
@@ -12,15 +12,15 @@ This process applies to other instances (and other clouds), too, albeit with som

## Creating and Running an EC2 Instance

After logging into your AWS account, click "EC2" (marked by the red box in :numref:`fig_aws`) to go to the EC2 panel.
After logging into your AWS account, click "EC2" (:numref:`fig_aws`) to go to the EC2 panel.

![Open the EC2 console.](../img/aws.png)
:width:`400px`
:label:`fig_aws`

:numref:`fig_ec2` shows the EC2 panel with sensitive account information greyed out.
:numref:`fig_ec2` shows the EC2 panel.

![EC2 panel.](../img/ec2.png)
![The EC2 panel.](../img/ec2.png)
:width:`700px`
:label:`fig_ec2`

@@ -51,7 +51,7 @@ process an application.

Next, click the "Launch Instance" button marked by the red box in :numref:`fig_ec2` to launch your instance.

We begin by selecting a suitable Amazon Machine Image (AMI). Enter "Ubuntu" in the search box (marked by the red box in :numref:`fig_ubuntu`).
We begin by selecting a suitable Amazon Machine Image (AMI). Select an Ubuntu instance (:numref:`fig_ubuntu`).


![Choose an AMI.](../img/ubuntu-new.png)
@@ -68,6 +68,7 @@ EC2 provides many different instance configurations to choose from. This can som
| p2 | Kepler K80 | old but often cheap as spot |
| g3 | Maxwell M60 | good trade-off |
| p3 | Volta V100 | high performance for FP16 |
| p4 | Ampere A100 | high performance for large-scale training |
| g4 | Turing T4 | inference optimized FP16/INT8 |
:label:`tab_ec2`

@@ -79,28 +80,26 @@ All these servers come in multiple flavors indicating the number of GPUs used. F

Note that you should use a GPU-enabled instance with suitable drivers and a GPU-enabled deep learning framework. Otherwise you will not see any benefit from using GPUs.
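
A quick way to verify this once the instance is running is to ask the framework whether it actually sees a GPU. The snippet below is a minimal sketch assuming the PyTorch flavor of the environment; the other frameworks have equivalent calls.

```
import torch

# True only if the NVIDIA driver, CUDA runtime, and framework build all match
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g., a T4 on a g4 instance
```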

So far, we have finished the first two of seven steps for launching an EC2 instance, as shown on the top of :numref:`fig_disk`. In this example, we keep the default configurations for the steps "3. Configure Instance", "5. Add Tags", and "6. Configure Security Group". Tap on "4. Add Storage" and increase the default hard disk size to 64 GB (marked in the red box of :numref:`fig_disk`). Note that CUDA by itself already takes up 4 GB.
We go on to select the key pair used to access
the instance. If you do not have a key pair, click "Create new key pair" in :numref:`fig_keypair` to generate a key pair. Subsequently,
you can select the
previously generated key pair.
Make sure that you download the key pair and store it in a safe location if you
generated a new one. This is your only way to SSH into the server.

![Select a key pair.](../img/keypair.png)
:width:`500px`
:label:`fig_keypair`

In this example, we will keep the default configurations for "Network settings" (click the "Edit" button to configure items such as the subnet and security groups). We just increase the default hard disk size to 64 GB (:numref:`fig_disk`). Note that CUDA by itself already takes up 4 GB.

![Modify the hard disk size.](../img/disk.png)
:width:`700px`
:label:`fig_disk`



Finally, go to "7. Review" and click "Launch" to launch the configured
instance. The system will now prompt you to select the key pair used to access
the instance. If you do not have a key pair, select "Create a new key pair" in
the first drop-down menu in :numref:`fig_keypair` to generate a key pair. Subsequently,
you can select "Choose an existing key pair" for this menu and then select the
previously generated key pair. Click "Launch Instances" to launch the created
instance.

![Select a key pair.](../img/keypair.png)
:width:`500px`
:label:`fig_keypair`

Make sure that you download the key pair and store it in a safe location if you
generated a new one. This is your only way to SSH into the server. Click the
Click "Launch Instance" to launch the created
instance. Click the
instance ID shown in :numref:`fig_launching` to view the status of this instance.

![Click the instance ID.](../img/launching.png)
@@ -111,7 +110,7 @@ instance ID shown in :numref:`fig_launching` to view the status of this instance

As shown in :numref:`fig_connect`, after the instance state turns green, right-click the instance and select `Connect` to view the instance access method.

![View instance access method.](../img/connect.png)
![View the instance access method.](../img/connect.png)
:width:`700px`
:label:`fig_connect`

13 changes: 7 additions & 6 deletions chapter_appendix-tools-for-deep-learning/contributing.md
@@ -20,7 +20,8 @@ If you plan to update a large portion of text or code, then you need to know a l
If you would like to change the code, we recommend using the Jupyter Notebook to open these markdown files as described in :numref:`sec_jupyter`, so that you can run and test your changes. Please remember to clear all outputs before submitting your changes; our CI system will execute the sections you updated to generate outputs.

Some sections may support multiple framework implementations.
If you add a new code block not for the default implementation, which is MXNet, please use `#@tab` to mark this block on the beginning line. For example, `#@tab pytorch` for a PyTorch code block, `#@tab tensorflow` for a TensorFlow code block, or `#@tab all` a shared code block for all implementations. You may refer to the [`d2lbook`](http://book.d2l.ai/user/code_tabs.html) package for more information.
If you add a new code block, please use `%%tab` to mark it on its first line. For example,
`%%tab pytorch` marks a PyTorch code block, `%%tab tensorflow` a TensorFlow code block, and `%%tab all` a code block shared by all implementations. You may refer to the [`d2lbook`](http://book.d2l.ai/user/code_tabs.html) package for more information.
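
For instance, a cell implemented only for PyTorch would start with the marker as its first line; the sketch below is purely illustrative and not taken from the book.

```
%%tab pytorch
import torch

def clip_values(x, low=0.0, high=1.0):  # illustrative helper only
    return torch.clamp(x, min=low, max=high)
```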

## Submitting Major Changes

@@ -66,10 +67,10 @@ git clone https://github.com/your_github_username/d2l-en.git

### Editing and Pushing

Now it is time to edit the book. It is best to edit it in the Jupyter Notebook following instructions in :numref:`sec_jupyter`. Make the changes and check that they are OK. Assume that we have modified a typo in the file `~/d2l-en/chapter_appendix_tools/how-to-contribute.md`.
Now it is time to edit the book. It is best to edit it in the Jupyter Notebook following the instructions in :numref:`sec_jupyter`. Make the changes and check that they are OK. Assume that we have fixed a typo in the file `~/d2l-en/chapter_appendix-tools-for-deep-learning/contributing.md`.
You can then check which files you have changed.

At this point Git will prompt that the `chapter_appendix_tools/how-to-contribute.md` file has been modified.
At this point Git will prompt that the `chapter_appendix-tools-for-deep-learning/contributing.md` file has been modified.

```
mylaptop:d2l-en me$ git status
@@ -80,15 +81,15 @@ Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: chapter_appendix_tools/how-to-contribute.md
modified: chapter_appendix-tools-for-deep-learning/contributing.md
```


After confirming that this is what you want, execute the following command:

```
git add chapter_appendix_tools/how-to-contribute.md
git commit -m 'fix typo in git documentation'
git add chapter_appendix-tools-for-deep-learning/contributing.md
git commit -m 'Fix a typo in git documentation'
git push
```
