fix some typo en/docs/basics/ #530

Open · wants to merge 2 commits into base: master
2 changes: 1 addition & 1 deletion en/docs/basics/01_quickstart.md
@@ -12,7 +12,7 @@ from flowvision import datasets
```
[FlowVision](https://github.com/Oneflow-Inc/vision) is a tool library matching with OneFlow, specific to computer vision tasks. It contains a number of models, data augmentation methods, data transformation operations and datasets. Here we import and use the data transformation module `transforms` and datasets module `datasets` provided by FlowVision.

- Settting batch size and device:
+ Setting batch size and device:

```python
BATCH_SIZE=64
```
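
The rest of that snippet is collapsed in the diff; for context, it follows roughly this pattern (a minimal sketch assuming the standard FashionMNIST quickstart; the `DEVICE` name and dataset root are illustrative):

```python
import oneflow as flow
from flowvision import datasets, transforms

BATCH_SIZE = 64
# Pick the GPU when one is available, otherwise fall back to CPU.
DEVICE = "cuda" if flow.cuda.is_available() else "cpu"

# Download FashionMNIST and convert each image to a tensor.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=transforms.ToTensor(),
)
train_dataloader = flow.utils.data.DataLoader(
    training_data, batch_size=BATCH_SIZE, shuffle=True
)
```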
2 changes: 1 addition & 1 deletion en/docs/basics/04_build_network.md
@@ -131,7 +131,7 @@ Comparing the similarities and differences between the `NeuralNetwork` and `Func

`nn.Sequential` is a special container. Any class inherited from `nn.Module` can be placed in it.

- Its specialty is that when Sequential propagates forward, Sequential automatically "concatenates" the layers contained in the container. Specifically, the output of the previous layer will be automatically transferred as the input of the next layer according to the sequence of Sequential added to each layer until the output of the last layer of the whole Moudle is obtained.
+ Its specialty is that when Sequential propagates forward, Sequential automatically "concatenates" the layers contained in the container. Specifically, the output of the previous layer will be automatically transferred as the input of the next layer according to the sequence of Sequential added to each layer until the output of the last layer of the whole Module is obtained.

The following is an example of building a network without Sequential (not recommended):

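The collapsed example itself is not shown in the diff; for contrast, the recommended `nn.Sequential` style chains the layers automatically, roughly like this (a minimal sketch with illustrative layer sizes, not the tutorial's exact network):

```python
import oneflow as flow
import oneflow.nn as nn

# Each module's output is passed as the next module's input,
# in the order the modules were added to the container.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

x = flow.randn(1, 28, 28)
logits = model(x)
print(logits.shape)  # oneflow.Size([1, 10])
```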
2 changes: 1 addition & 1 deletion en/docs/basics/05_autograd.md
@@ -139,7 +139,7 @@ tensor(20., dtype=oneflow.float32)

### Disabled Gradient Calculation

- By default, OneFlow will trace and calculate gradients of Tensors with `requires_grad = Ture`.
+ By default, OneFlow will trace and calculate gradients of Tensors with `requires_grad = True`.
However, in some cases, we don't need OneFlow to keep tracing gradients such as just wanting the forward pass for inference. Then we can use [oneflow.no_grad](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.no_grad.html) or [oneflow.Tensor.detach](https://oneflow.readthedocs.io/en/master/generated/oneflow.Tensor.detach.html#oneflow.Tensor.detach) to set.

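The code block that followed is collapsed in the diff; a minimal sketch of both approaches (toy tensors, not the tutorial's exact snippet):

```python
import oneflow as flow

x = flow.ones(2, 2, requires_grad=True)

# Option 1: suspend gradient tracking for a whole block.
with flow.no_grad():
    y = x * 2
print(y.requires_grad)  # False

# Option 2: detach a single tensor from the computation graph.
z = (x * 2).detach()
print(z.requires_grad)  # False
```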
8 changes: 4 additions & 4 deletions en/docs/basics/08_nn_graph.md
@@ -109,7 +109,7 @@ class GraphMyLinear(nn.Graph):
The simple example above contains the important steps needed to customize a Graph:

- Inherits `nn.Graph`.
- - Call `super().__init__()` at the begining of `__init__` method to get OneFlow to do the necessary initialization for the Graph.
+ - Call `super().__init__()` at the beginning of `__init__` method to get OneFlow to do the necessary initialization for the Graph.
- In `__init__`, reuse the `nn.Module` object in Eager mode (`self.model = model`)
- Describes the computational process in `build` method.
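
Taken together, those steps amount to roughly the following (a sketch of the tutorial's `GraphMyLinear` pattern; the `nn.Linear` shape here is illustrative):

```python
import oneflow as flow
import oneflow.nn as nn

model = nn.Linear(3, 8)  # an nn.Module built in Eager mode

class GraphMyLinear(nn.Graph):
    def __init__(self):
        super().__init__()  # let OneFlow initialize the Graph
        self.model = model  # reuse the Eager-mode nn.Module

    def build(self, x):
        # Describe the computation; it is traced and run in Graph mode.
        return self.model(x)

graph_my_linear = GraphMyLinear()
out = graph_my_linear(flow.randn(4, 3))
```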

@@ -323,7 +323,7 @@ If you use `print` **after** the Graph object is called, in addition to the stru
**The second** way is that by calling the [debug](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.Graph.debug.html) method of Graph objects, Graph’s debug mode is turned on.

```python
- graph_mobile_net_v2.debug(v_level=1) # The defalut of v_level is 0.
+ graph_mobile_net_v2.debug(v_level=1) # The default of v_level is 0.
```

which can also be written in a simplified way:
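
The shorthand itself is collapsed in this diff; presumably it just passes the verbosity level positionally (an assumption, since the collapsed lines are not shown):

```python
graph_mobile_net_v2.debug(1)  # assumed equivalent of debug(v_level=1)
```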
@@ -337,7 +337,7 @@ OneFlow prints debug information when it compiles the computation graph. If the
```text
(GRAPH:GraphMobileNetV2_0:GraphMobileNetV2) end building graph.
(GRAPH:GraphMobileNetV2_0:GraphMobileNetV2) start compiling plan and init graph runtime.
- (GRAPH:GraphMobileNetV2_0:GraphMobileNetV2) end compiling plan and init graph rumtime.
+ (GRAPH:GraphMobileNetV2_0:GraphMobileNetV2) end compiling plan and init graph runtime.
```

The advantage of using `debug` is that the debug information is composed and printed at the same time, which makes it easy to find the problem if there is any error in the graph building process.
@@ -358,7 +358,7 @@ In addition, in order for developers to have a clearer understanding of the type
| MODULE | Corresponding to `nn.Module` , MODULE can be under the Graph tag, and there is also a hierarchical relationship between multiple modules. | `(MODULE:model:MobileNetV2())`, and `MobileNetV2` reuses the Module class name in Eager mode for users. |
| PARAMETER | Shows the clearer information of weight and bias. In addition, when building the graph, the data content of the tensor is less important, so it is more important for building network to only display the meta information of the tensor. | `(PARAMETER:model.features.0.1.weight:tensor(..., device='cuda:0', size=(32,), dtype=oneflow.float32, requires_grad=True))` |
| BUFFER | Statistical characteristics and other content generated during training, such as running_mean and running_var. | `(BUFFER:model.features.0.1.running_mean:tensor(..., device='cuda:0', size=(32,), dtype=oneflow.float32))` |
- | INPUT & OUPTUT | Tensor information representing input and output. | `(INPUT:_model_input.0.0_2:tensor(..., device='cuda:0', is_lazy='True', size=(16, 3, 32, 32), dtype=oneflow.float32))` |
+ | INPUT & OUTPUT | Tensor information representing input and output. | `(INPUT:_model_input.0.0_2:tensor(..., device='cuda:0', is_lazy='True', size=(16, 3, 32, 32), dtype=oneflow.float32))` |

In addition to the methods described above, getting the parameters of the gradient during the training process, accessing to the learning rate and other functions are also under development and will come up soon.
