[Hotfix] R0.17 (#1999)
* fix requirement lib versions

* bump to 0.17.0.post0

* Update index.md

* 300 univ on map and names (#1960)

* 300 univ on map and names

* rm The

* logo reorder

* rename 204 logos, add 30 logo names

* add 30 logos

* reorder logo names

* add missing logos

* add univ logos

* univ num

* Add SageMaker Studio Lab buttons and announcements on frontpage (#1974)

* Add SageMaker Studio Lab buttons and announcements on frontpage

* fix html path

* Fix: Epoch instead of batch (#1975)

* Fix: Epoch instead of batch

* Epoch -> epoch

* [PyTorch] Fix data synchronization (#1978)

* BUG: Fix  in bert_dataset (#1979)

* revise bert pretraining lr

* [PyTorch] Fix semantic segmentation normalization (#1980)

* Update semantic-segmentation-and-dataset.md

* BUG: Fix BLEU calculation (#1981)

* BUG: Fix BLEU calculation

* Save BLEU in d2l.tensorflow

* sync lib

* Bug fixes for 0.17post1 release (#1984)

* Fix loss computation in train_epoch_ch3

* Fix loss reduction=none and l2loss in underfit-overfit

* Fix loss reduction=none and l2loss in sequence models

* Fix loss reduction=none in minibatch-sgd

* Fix loss reduction=none and l2loss in weight decay

* drop 1/2 factor mseloss

* drop 1/2 factor mseloss underfit-overfit

* drop 1/2 factor mseloss weight-decay

* drop 1/2 factor minibatch-sgd

* drop 1/2 factor sequence

* sync d2l lib 1/2 drop

* sync d2l tf lib 1/2 drop

* sync d2l torch lib 1/2 drop

* add text in linear regression about 1/2 factor

* drop 1/2 factor attention

* tf drop 1/2 l2 loss factor attention

* fix broken paragraph

* bump ver to 0.17.1

* update affi

* Link SMStudio lab to getting started page

* Update README.md

* Update Jenkinsfile

* Update Jenkinsfile

* Update ndarray.md
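
Several bullets above concern dropping the 1/2 factor from the squared loss and handling `reduction='none'`. A minimal sketch of what that convention change means numerically (hypothetical tensors; the book's actual training loops differ):

```python
import torch
from torch import nn

y_hat = torch.tensor([2.0, 3.0])
y = torch.tensor([1.0, 1.0])

# Old convention: squared loss with a 1/2 factor, l = (y_hat - y)^2 / 2
old_loss = ((y_hat - y) ** 2 / 2).mean()

# New convention: plain mean squared error, matching nn.MSELoss
loss_fn = nn.MSELoss(reduction='none')  # keep per-element losses
per_element = loss_fn(y_hat, y)
new_loss = per_element.mean()

assert torch.isclose(new_loss, 2 * old_loss)  # dropping the 1/2 doubles the loss
```

With the 1/2 factor gone, reported loss values double, but the gradients change only by a constant scale that the learning rate can absorb.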

Co-authored-by: Anirudh Dagar <anirudhdagar6@gmail.com>
astonzhang and AnirudhDagar committed Dec 28, 2021
1 parent a652a72 commit 14c68f7
Showing 3 changed files with 49 additions and 16 deletions.
4 changes: 2 additions & 2 deletions Jenkinsfile
@@ -70,7 +70,7 @@ stage("Build and Publish") {
sh label:"Release", script:"""set -ex
conda activate ${ENV_NAME}
d2lbook build pkg
-d2lbook deploy html pdf pkg colab sagemaker slides --s3 s3://en.d2l.ai/
+d2lbook deploy html pdf pkg colab sagemaker slides --s3 s3://${LANG}.d2l.ai/
"""

sh label:"Release d2l", script:"""set -ex
@@ -82,7 +82,7 @@ stage("Build and Publish") {
} else {
sh label:"Publish", script:"""set -ex
conda activate ${ENV_NAME}
-d2lbook deploy html pdf slides --s3 s3://preview.d2l.ai/${JOB_NAME}/
+d2lbook deploy html pdf --s3 s3://preview.d2l.ai/${JOB_NAME}/
"""
if (env.BRANCH_NAME.startsWith("PR-")) {
pullRequest.comment("Job ${JOB_NAME}/${BUILD_NUMBER} is complete. \nCheck the results at http://preview.d2l.ai/${JOB_NAME}/")
2 changes: 1 addition & 1 deletion README.md
@@ -6,7 +6,7 @@

[![Build Status](http://ci.d2l.ai/job/d2l-en/job/master/badge/icon)](http://ci.d2l.ai/job/d2l-en/job/master/)

-[Book website](https://d2l.ai/) | [STAT 157 Course at UC Berkeley, Spring 2019](http://courses.d2l.ai/berkeley-stat-157/index.html) | Latest version: v0.16.6
+[Book website](https://d2l.ai/) | [STAT 157 Course at UC Berkeley, Spring 2019](http://courses.d2l.ai/berkeley-stat-157/index.html)

<h5 align="center"><i>The best way to understand deep learning is learning by doing.</i></h5>

59 changes: 46 additions & 13 deletions chapter_preliminaries/ndarray.md
@@ -71,19 +71,52 @@ import tensorflow as tf
```

[**A tensor represents a (possibly multi-dimensional) array of numerical values.**]
-With one axis, a tensor corresponds (in math) to a *vector*.
-With two axes, a tensor corresponds to a *matrix*.
-Tensors with more than two axes do not have special
-mathematical names.
+With one axis, a tensor is called a *vector*.
+With two axes, a tensor is called a *matrix*.
+With $k > 2$ axes, we drop the specialized names
+and just refer to the object as a $k^\mathrm{th}$ *order tensor*.

To start, we can use `arange` to create a row vector `x`
containing the first 12 integers starting with 0,
though they are created as floats by default.
Each of the values in a tensor is called an *element* of the tensor.
For instance, there are 12 elements in the tensor `x`.
Unless otherwise specified, a new tensor
will be stored in main memory and designated for CPU-based computation.
:begin_tab:`mxnet`
MXNet provides a variety of functions
for creating new tensors
prepopulated with values.
For example, by invoking `arange(n)`,
we can create a vector of evenly spaced values,
starting at 0 (included)
and ending at `n` (not included).
By default, the interval size is $1$.
Unless otherwise specified,
new tensors are stored in main memory
and designated for CPU-based computation.
:end_tab:

:begin_tab:`pytorch`
PyTorch provides a variety of functions
for creating new tensors
prepopulated with values.
For example, by invoking `arange(n)`,
we can create a vector of evenly spaced values,
starting at 0 (included)
and ending at `n` (not included).
By default, the interval size is $1$.
Unless otherwise specified,
new tensors are stored in main memory
and designated for CPU-based computation.
:end_tab:

:begin_tab:`tensorflow`
TensorFlow provides a variety of functions
for creating new tensors
prepopulated with values.
For example, by invoking `range(n)`,
we can create a vector of evenly spaced values,
starting at 0 (included)
and ending at `n` (not included).
By default, the interval size is $1$.
Unless otherwise specified,
new tensors are stored in main memory
and designated for CPU-based computation.
:end_tab:

```{.python .input}
x = np.arange(12)
@@ -92,13 +125,13 @@ x

```{.python .input}
#@tab pytorch
-x = torch.arange(12)
+x = torch.arange(12, dtype=torch.float32)
x
```

```{.python .input}
#@tab tensorflow
-x = tf.range(12)
+x = tf.range(12, dtype=tf.float32)
x
```
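
The `dtype=torch.float32` (and `dtype=tf.float32`) additions in the diff above matter because `arange`/`range` with integer arguments default to an integer dtype, while the book's later computations assume floating point. A quick sketch of the default behavior (PyTorch shown; `tf.range(12)` analogously yields `tf.int32`):

```python
import torch

# Integer endpoints make arange default to an integer dtype.
x_int = torch.arange(12)
print(x_int.dtype)   # torch.int64

# Requesting float32 explicitly gives the dtype later chapters assume.
x = torch.arange(12, dtype=torch.float32)
print(x.dtype)       # torch.float32
print(x.numel())     # 12 elements
```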

