
[Discussion] Create show_result.sh #874

Merged
merged 10 commits on Jun 21, 2019

Conversation

@kamo-naoyuki (Contributor) commented Jun 18, 2019

#786

This file is an enhancement of utils/get_sys_info.sh.

- Usage:
cd egs/an4/asr1/
utils/show_result.sh
<!-- Generated by ./show_result.sh -->
# RESULTS
## Environments
- date: `Tue Jun 18 18:22:28 JST 2019`
- system information: `Linux  <hostname> 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux`
- python version: `3.7.2 (default, Dec 29 2018, 06:19:36)  [GCC 7.3.0]`
- espnet version: `espnet 0.3.1`
- chainer version: `chainer 5.0.0`
- pytorch version: `pytorch 1.0.0`
- Git hash: `88c81722113bb83a20128c38eceeb951c2d7964e`
  - Commit date: `Sat May 25 06:55:17 2019 -0400`

## train_nodev_chainer_blstmp_e4_subsample1_2_2_1_1_unit320_proj320_d1_unit300_location_aconvc10_aconvf100_mtlalpha0.5_adadelta_sampprob0.0_bs30_mli800_mlo150
### CER

|dataset|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|
|decode_test_beam20_emodel.acc.best_p0.0_len0.0-0.0_ctcw0.5|130|2565|81.2|7.4|11.3|4.2|23.0|74.6|
|decode_train_dev_beam20_emodel.acc.best_p0.0_len0.0-0.0_ctcw0.5|100|1915|76.1|9.0|14.9|2.1|26.0|82.0|

## train_nodev_pytorch_blstmp_e4_subsample1_2_2_1_1_unit320_proj320_d1_unit300_location_aconvc10_aconvf100_mtlalpha0.5_adadelta_sampprob0.0_bs30_mli800_mlo150
### CER

|dataset|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|
|decode_test_beam20_emodel.acc.best_p0.0_len0.0-0.0_ctcw0.5|130|2565|91.2|3.6|5.2|0.7|9.5|56.2|
|decode_train_dev_beam20_emodel.acc.best_p0.0_len0.0-0.0_ctcw0.5|100|1915|84.4|5.4|10.1|1.5|17.1|70.0|

## train_nodev_pytorch_train_mtlalpha1.0
### CER

|dataset|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|
|decode_test_decode_ctcweight1.0_lm_word100|130|2565|92.9|3.4|3.7|1.1|8.2|52.3|
|decode_train_dev_decode_ctcweight1.0_lm_word100|100|1915|86.1|6.7|7.2|1.8|15.7|69.0|
@kamo-naoyuki (Contributor, Author) commented Jun 18, 2019

I worry about `uname -a`: it may not be a good idea to reveal the hostname, for security reasons.
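One hedged way to address this (a sketch, not necessarily what the final script does) is to mask the hostname in the `uname -a` output, consistent with the `<hostname>` placeholder visible in the example report above:

```shell
#!/usr/bin/env bash
# Sketch: print `uname -a` with the hostname masked.
# Assumes `hostname` prints the same nodename that `uname -a` embeds.
masked_uname() {
    uname -a | sed "s/$(hostname)/<hostname>/g"
}
masked_uname
```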

@sw005320 (Contributor) commented Jun 18, 2019

Oh, this is a very beautiful approach!
You can replace my ugly utils/get_sys_info.sh with this.
Could you also modify CONTRIBUTING.md accordingly?

Also, I agree with you that we should avoid including the hostname.
Could you implement that?

@ShigekiKarita (Contributor) commented Jun 19, 2019

Awesome. I have always ended up writing scripts like this when preparing papers.

@kamo-naoyuki (Contributor, Author) commented Jun 19, 2019

I think the training time and the machine info, especially which GPU device was used, are also important.
`uname -a` reports not the training machine but the host on which this script is executed.

It would be possible to get them from the expdir if asr_train.py saved such information, e.g.

exp/train/info/machine_info
exp/train/info/execution_time
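As a sketch of that idea (a hypothetical helper, not espnet's actual API), asr_train.py could dump these files at startup:

```python
import json
import os
import platform
import time


def save_train_info(expdir):
    """Hypothetical helper: record machine info and start time under expdir/info.

    This is a sketch of the proposal above, not what asr_train.py actually does.
    """
    info_dir = os.path.join(expdir, "info")
    os.makedirs(info_dir, exist_ok=True)
    with open(os.path.join(info_dir, "machine_info"), "w") as f:
        json.dump(
            {
                "uname": " ".join(platform.uname()),
                "python": platform.python_version(),
            },
            f,
            indent=2,
        )
    with open(os.path.join(info_dir, "execution_time"), "w") as f:
        # Unix epoch seconds at the start of training
        f.write("%f\n" % time.time())
```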

(Since sacred generates such files automatically, I proposed using it at first.
However, the opacity that comes with such a feature-rich system is not acceptable compared with sacred's benefits.)

Currently exp/train/log is available, though I don't know which module generates it, so the training time can already be shown; still, I think it would be good to redesign this log file.
Do you have any other useful information during training?

@ShigekiKarita (Contributor) commented Jun 19, 2019

Regarding log files, maybe we should discuss this in a new issue or in #820.

@sw005320 (Contributor) commented Jun 19, 2019

> Do you have any other useful information during training?

- Pointer to the cmvn file (or we could link it under the experiment directory), but maybe not here?
- CUDA and cuDNN versions
- Number of GPUs
- Maximum GPU memory (I think this is difficult).
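Several of these could be collected through PyTorch when it is available. A hedged sketch (the dictionary keys are made up for illustration, and peak memory here reflects only the current process):

```python
def gpu_environment_info():
    """Collect CUDA/cuDNN/GPU details if PyTorch with CUDA is available.

    Sketch only: returns a plain dict; the keys are illustrative, not an
    espnet API.
    """
    try:
        import torch
    except ImportError:
        return {"ngpu": 0}

    info = {"ngpu": torch.cuda.device_count() if torch.cuda.is_available() else 0}
    if torch.cuda.is_available():
        info["cuda_version"] = torch.version.cuda
        info["cudnn_version"] = torch.backends.cudnn.version()
        # Peak memory is hard to track across a whole training run; this
        # only covers allocations by this process on the current device.
        info["max_memory_allocated"] = torch.cuda.max_memory_allocated()
    return info
```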
@codecov commented Jun 20, 2019

Codecov Report

Merging #874 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #874   +/-   ##
=======================================
  Coverage   50.19%   50.19%           
=======================================
  Files          88       88           
  Lines        9780     9780           
=======================================
  Hits         4909     4909           
  Misses       4871     4871


@kamo-naoyuki (Contributor, Author) commented Jun 20, 2019

I've decided to wrap up this PR and will improve this tool in another thread.
Maybe it would be good to call show_result.sh at the end of run.sh and write its output to exp/RESULT.md.

@sw005320 (Contributor) commented Jun 20, 2019

> Maybe it would be good to call show_result.sh at the end of run.sh and write its output to exp/RESULT.md.

Nice idea.
My only concern is that this will write a lot of duplicated results and some unwanted results. Of course, people can edit the file themselves, though.

@kamo-naoyuki (Contributor, Author) commented Jun 21, 2019

> My only concern is that this will write a lot of duplicated results and some unwanted results. Of course, people can edit the file themselves, though.

I had the same thought.
Wait a moment; I'll add support for just this in this PR.

@kamo-naoyuki (Contributor, Author) commented Jun 21, 2019

By the way, I think Markdown is useful for viewing on GitHub, but on a local machine it is a bit troublesome.
People would probably have to find a lightweight Markdown viewer if we put this tool in run.sh, but I don't know a better choice for now. An editor such as Atom, or a browser Markdown extension (https://addons.mozilla.org/en-US/firefox/addon/markdown-viewer-chrome/), looks useful.

@kamo-naoyuki (Contributor, Author) commented Jun 21, 2019

O.K. I changed it to use `$(find ${exp} -mindepth 0 -maxdepth 1 -type d)` instead of `$(ls ${exp}/*)` (note that find's `-mindepth`/`-maxdepth` options take a single dash).
Now both usages are supported:

show_result.sh                    # Show all results
show_result.sh exp/train_foobar   # Show results only for exp/train_foobar
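The directory-selection logic described here can be sketched roughly as follows (a simplified sketch; the actual show_result.sh does more than list directories):

```shell
#!/usr/bin/env bash
# Sketch of show_result.sh's experiment-directory selection (simplified).
# With no argument it scans everything under exp/; with an argument it
# restricts the search to that directory.
list_exp_dirs() {
    # find takes single-dash options: -mindepth, -maxdepth.
    # -mindepth 0 includes the root itself; -maxdepth 1 adds its children.
    find "${1:-exp}" -mindepth 0 -maxdepth 1 -type d
}
```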
@sw005320 (Contributor) commented Jun 21, 2019

> By the way, I think Markdown is useful for viewing on GitHub, but on a local machine it is a bit troublesome.

I'm using the Emacs Markdown extension, but it is not a perfect solution, and I end up checking the output by pasting it somewhere.

@sw005320 sw005320 merged commit 2a47298 into espnet:master Jun 21, 2019

7 checks passed

- ci/circleci: test-centos7: Your tests passed on CircleCI!
- ci/circleci: test-debian9: Your tests passed on CircleCI!
- ci/circleci: test-ubuntu16: Your tests passed on CircleCI!
- ci/circleci: test-ubuntu18: Your tests passed on CircleCI!
- codecov/patch: Coverage not affected when comparing eef24a5...16efba0
- codecov/project: 50.19% remains the same compared to eef24a5
- continuous-integration/travis-ci/pr: The Travis CI build passed

@kamo-naoyuki kamo-naoyuki deleted the kamo-naoyuki:patch-8 branch Jun 21, 2019
