Reformat docstrings in espnet/asr #914
Conversation
Masao-Someki commented on Jun 22, 2019 (edited)
- Reformat docstrings of the following files:
  - [x] ./espnet/asr/asr_mix_utils.py
  - [x] ./espnet/asr/asr_utils.py
  - [x] ./espnet/asr/chainer_backend/asr.py
  - [ ] ./espnet/asr/pytorch_backend/asr.py
  - [ ] ./espnet/asr/pytorch_backend/asr_mix.py
- Fixed typo
Thank you so much! I have some requests and comments, e.g.:

Okay!

I couldn't figure out the type of ...
* fixed pytorch_backend
* fixed some typos in `./asr_mix_utils.py`, `asr_utils.py`, `./chainer_backend/asr.py`
* fixed typos
espnet/espnet/asr/asr_utils.py, line 475 in 6c9d976

Maybe float.

Oh... I don't know why I missed that sentence.
I'll add details to the data types in args and results tomorrow.
Codecov Report

@@           Coverage Diff            @@
##           v.0.5.0     #914   +/-   ##
========================================
  Coverage    49.32%   49.32%
========================================
  Files          100      100
  Lines        10724    10724
========================================
  Hits          5290     5290
  Misses        5434     5434
========================================

Continue to review full report at Codecov.
It is better to unify the type format, e.g. ...
* update data_type
I couldn't figure out the following, so please correct me or tell me where to look...
device (int or dict): The destination device info to send variables. In the case of cpu or single gpu, `device=-1 or 0`, respectively.
    In the case of multi-gpu, `device={"main":0, "sub_1": 1, ...}`.
device (torch.device): The destination device to send tensor.
trainer (chainer.training.Trainer): Chainer's trainer instance.
accum_grad (int): The number of gradient accumulation. If set to 2, the network parameters will be updated once in twice, i.e. the actual batchsize will be doubled.

Could you remove the following highlighted warning?
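The `accum_grad` behavior described in that docstring can be illustrated with a short, hypothetical Python sketch (this is not espnet's actual training loop; the comments stand in for `loss.backward()` and `optimizer.step()`):

```python
def run_updates(num_batches, accum_grad):
    """Count optimizer updates when gradients are accumulated.

    With accum_grad=2, parameters are updated once every two
    forward/backward passes, so the effective batch size doubles.
    """
    updates = 0
    for i in range(1, num_batches + 1):
        # loss.backward() would accumulate gradients here
        if i % accum_grad == 0:
            updates += 1  # optimizer.step(); optimizer.zero_grad()
    return updates

print(run_updates(8, 1))  # 8 updates: one per batch
print(run_updates(8, 2))  # 4 updates: effective batch size doubled
```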
I fixed the highlighted warnings.
fixed typo
If ready to merge, please let me know.

It's ready. Could you review it?

I will check today or tomorrow.
espnet/asr/asr_mix_utils.py (Outdated)

    """Make batch set from json dictionary.

    Args:
        data (dict[str, list[Any]): Dictionary loaded from data.json.
typo: dict[str, list[Any]]
espnet/asr/asr_mix_utils.py (Outdated)

        batch_size (int): Batch size.
        max_length_in (int): Maximum length of input to decide adaptive batch size.
        max_length_out (int): Maximum length of output to decide adaptive batch size.
        num_batches (int): # Number of batches to use (for debug).
typo: stray `#`?
espnet/asr/asr_mix_utils.py (Outdated)

        min_batch_size (int): Mininum batch size (for multi-gpu).

    Returns:
        list[tuple[str, dict[str, list[dict[str, Any]]]]: List of batches.
typo: list[tuple[str, dict[str, list[dict[str, Any]]]]]
list (1) of tuple (2) of dict (3) of list (4) of dict (5): you need 5 `]`. LOL
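A quick way to verify the bracket count in such a type expression is a tiny balance checker (an illustrative snippet, not part of the PR):

```python
def brackets_balanced(type_str):
    """Return True if '[' and ']' pair up in a type expression."""
    depth = 0
    for ch in type_str:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
            if depth < 0:  # closed more than opened
                return False
    return depth == 0

# The typo from the review: five '[' but only four ']'.
print(brackets_balanced("list[tuple[str, dict[str, list[dict[str, Any]]]]"))   # False
print(brackets_balanced("list[tuple[str, dict[str, list[dict[str, Any]]]]]"))  # True
```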
I have a different opinion. Because there are PEPs for type annotations, we can use them as clear rules. For example, they should follow https://www.python.org/dev/peps/pep-0484/
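As a hypothetical sketch of that convention, PEP 484 style types can be written both in the annotations and in a Google-style docstring. The function body here is only a placeholder to make the example runnable, not espnet's real `make_batchset`:

```python
from typing import Any, Dict, List, Tuple


def make_batchset(
    data: Dict[str, List[Any]], batch_size: int
) -> List[Tuple[str, Dict[str, List[Any]]]]:
    """Make a batch set from a json dictionary.

    Args:
        data (Dict[str, List[Any]]): Dictionary loaded from data.json.
        batch_size (int): Batch size.

    Returns:
        List[Tuple[str, Dict[str, List[Any]]]]: List of named batches.
    """
    items = sorted(data.items())
    return [
        ("batch%d" % (i // batch_size), dict(items[i:i + batch_size]))
        for i in range(0, len(items), batch_size)
    ]


print(make_batchset({"utt1": [1], "utt2": [2], "utt3": [3]}, 2))
```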
espnet/asr/asr_utils.py (Outdated)

    :param eta float {0.01,0.3,1.0}
    """
    Args:
        model (Torch model): model.
`model (torch.nn.Module): model.`?
@@ -169,8 +196,12 @@ def _restore_snapshot(model, snapshot, load_fn=chainer.serializers.load_npz):

 def adadelta_eps_decay(eps_decay):
-    """Extension to perform adadelta eps decay"""
+    """Extension to perform adadelta eps decay.
Args:
eps_decay (float): decay rate of eps
Thank you for your comments, Shigeki.
* Fixed typos

This is a commit for typos. I tried a few times, so I rebased some commits.
Thank you so much.
Could you resolve the conflict?
espnet/asr/asr_mix_utils.py (Outdated)

    """Visualize attention weights matrix.

    Args:
        att_w(Tensor): attention weight matrix
-att_w(Tensor): attention weight matrix
+att_w(Tensor): Attention weight matrix.
espnet/asr/asr_mix_utils.py (Outdated)

    Args:
        js (dict[str, Any]): Groundtruth utterance dict.
        nbest_hyps_sd (list[dict[str, Any]]): List of hypothesis for multi_speakers: nutts x nspkrs.
-nbest_hyps_sd (list[dict[str, Any]]): List of hypothesis for multi_speakers: nutts x nspkrs.
+nbest_hyps_sd (list[dict[str, Any]]): List of hypothesis for multi_speakers (# Utts x # Spkrs).
espnet/asr/asr_utils.py (Outdated)

    Args:
        eps_decay (float): decay rate of eps
-eps_decay (float): decay rate of eps
+eps_decay (float): Decay rate of eps.
espnet/asr/asr_utils.py (Outdated)

    :param eta float {0.01,0.3,1.0}
    """
    Args:
        model (torch.nn.model): model.
-model (torch.nn.model): model.
+model (torch.nn.model): Model.
espnet/asr/asr_utils.py (Outdated)

    """
    Args:
        model (torch.nn.model): model.
        iteration (int): number of iteration.
-iteration (int): number of iteration.
+epoch (int): Number of epochs.
        device (torch.device): The device to send to.

    Returns:
        tuple(torch.Tensor, torch.Tensor, torch.Tensor)
-tuple(torch.Tensor, torch.Tensor, torch.Tensor)
+tuple(torch.Tensor, torch.Tensor, torch.Tensor): Transformed batch.
espnet/asr/asr_mix_utils.py (Outdated)

@@ -12,12 +12,12 @@
 import matplotlib
 import numpy as np

-'''
+"""
 reuse modules in asr_utils:
-reuse modules in asr_utils:
+Utility functions for ASR mix recipes. Reuse following modules in asr_utils:
espnet/asr/asr_mix_utils.py (Outdated)

@@ -12,12 +12,12 @@
 import matplotlib
 import numpy as np

-'''
+"""
 reuse modules in asr_utils:
 CompareValueTrigger, restore_snapshot, _restore_snapshot, adadelta_eps_decay,
-CompareValueTrigger, restore_snapshot, _restore_snapshot, adadelta_eps_decay,
+CompareValueTrigger, restore_snapshot, adadelta_eps_decay,
espnet/asr/asr_mix_utils.py (Outdated)

@@ -12,12 +12,12 @@
 import matplotlib
 import numpy as np

-'''
+"""
 reuse modules in asr_utils:
 CompareValueTrigger, restore_snapshot, _restore_snapshot, adadelta_eps_decay,
 _adadelta_eps_decay, torch_snapshot, _torch_snapshot_object, AttributeDict,
-_adadelta_eps_decay, torch_snapshot, _torch_snapshot_object, AttributeDict,
+chainer_load, torch_snapshot, torch_save, torch_load, torch_resume
espnet/asr/asr_mix_utils.py (Outdated)

@@ -12,12 +12,12 @@
 import matplotlib
 import numpy as np

-'''
+"""
 reuse modules in asr_utils:
 CompareValueTrigger, restore_snapshot, _restore_snapshot, adadelta_eps_decay,
 _adadelta_eps_decay, torch_snapshot, _torch_snapshot_object, AttributeDict,
 get_model_conf, chainer_load, torch_save, torch_load, torch_resume
-get_model_conf, chainer_load, torch_save, torch_load, torch_resume
+AttributeDict, get_model_conf
* update docstring for review.
I wonder about L86 in asr_utils.py in this branch (espnet/espnet/asr/asr_utils.py, line 86 in 5a966c3).
The parameter is set as an integer value, so I think it should be `iaxis=1` for MT. Should I set "1" as a string in the docstring?
* update kan-bayashi's review
* resolve conflicts
espnet/espnet/asr/asr_utils.py, line 132 in 5a966c3

It seems iaxis should be int.
I think all the docstrings are updated, with no typos.

Sorry for the late reply.
I fixed the conflicts.
Co-Authored-By: Tomoki Hayashi <hayashi.tomoki@g.sp.m.is.nagoya-u.ac.jp>
Thanks a lot!