KSC recipe #5171

Merged · 21 commits · May 19, 2023
1 change: 1 addition & 0 deletions egs2/README.md
@@ -73,6 +73,7 @@ See: https://espnet.github.io/espnet/espnet2_tutorial.html#recipes-using-espnet2
| jv_openslr35 | Javanese | ASR | JAV | http://www.openslr.org/35 | |
| jvs | JVS (Japanese versatile speech) corpus | TTS | JPN | https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus | |
| ksc | Kazakh speech corpus | ASR | KAZ | https://www.openslr.org/102/ | |
| ksponspeech | KsponSpeech (Korean spontaneous speech) corpus | ASR | KOR | https://aihub.or.kr/aidata/105 | |
| kss | Korean single speaker corpus | TTS | KOR | https://www.kaggle.com/bryanpark/korean-single-speaker-speech-dataset | |
| l3das22 | L3DAS22: Machine Learning for 3D Audio Signal Processing - ICASSP 2022 | SE | ENG | https://www.l3das.com/icassp2022/ | |
| laborotv | LaboroTVSpeech (A large-scale Japanese speech corpus on TV recordings) | ASR | JPN | https://laboro.ai/column/eg-laboro-tv-corpus-jp | |
1 change: 1 addition & 0 deletions egs2/TEMPLATE/asr1/db.sh
@@ -72,6 +72,7 @@ JSSS=downloads
JSUT=downloads
JTUBESPEECH=downloads
JVS=downloads
KSC=downloads
KSS=
QASR_TTS=downloads
SNIPS= # smart-light-en-closed-field data path
34 changes: 34 additions & 0 deletions egs2/ksc/asr1/RESULTS.md
@@ -0,0 +1,34 @@
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon May 15 16:32:55 CST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 1.13.1`
- Git hash: `3949a7db023d591e91627efb997eda353b54005d`
- Commit date: `Thu May 11 17:54:50 2023 +0800`

## HuggingFace model link
- https://huggingface.co/espnet/khassan_KSC_transformer
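
A minimal inference sketch using this model (this assumes the `espnet_model_zoo` package is installed so that `from_pretrained` can fetch the model; `sample.flac` is a placeholder for any 16 kHz mono recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download the pretrained KSC transformer from the Hugging Face Hub and build
# a joint CTC/attention decoder with the settings from conf/decode.yaml.
speech2text = Speech2Text.from_pretrained(
    "espnet/khassan_KSC_transformer",
    beam_size=10,
    ctc_weight=0.1,
)

# "sample.flac" is a hypothetical input file (16 kHz, mono).
speech, rate = soundfile.read("sample.flac")

# Decoding returns an n-best list; each entry is (text, tokens, token_ids, hypothesis).
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```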

## exp/asr_train_raw_bpe2000_sp/decode_asr_model_valid.acc.ave
### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|test|3334|35884|90.6|8.6|0.8|1.1|10.5|55.1|
|dev|3283|35275|89.0|10.0|1.0|1.2|12.1|59.0|

### CER

|dataset|Snt|Chr|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|test|3334|259552|97.9|1.2|0.9|0.8|2.9|55.1|
|dev|3283|253600|97.4|1.4|1.2|1.0|3.5|59.0|

### TER

|dataset|Snt|Tok|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|test|3334|71707|91.9|5.6|2.5|1.1|9.2|55.1|
|dev|3283|69428|90.5|6.7|2.8|1.3|10.8|59.0|
1 change: 1 addition & 0 deletions egs2/ksc/asr1/asr.sh
110 changes: 110 additions & 0 deletions egs2/ksc/asr1/cmd.sh
@@ -0,0 +1,110 @@
# ====== About run.pl, queue.pl, slurm.pl, and ssh.pl ======
# Usage: <cmd>.pl [options] JOB=1:<nj> <log> <command...>
# e.g.
# run.pl --mem 4G JOB=1:10 echo.JOB.log echo JOB
#
# Options:
# --time <time>: Limit the maximum time to execute.
# --mem <mem>: Limit the maximum memory usage.
# --max-jobs-run <njob>: Limit the number of parallel jobs. This is ignored for non-array jobs.
# --num-threads <nthreads>: Specify the number of CPU cores.
# --gpu <ngpu>: Specify the number of GPU devices.
# --config: Change the configuration file from default.
#
# "JOB=1:10" is used for "array jobs" and it can control the number of parallel jobs.
# The string to the left of "=", i.e. "JOB", is replaced by <N> (the N-th job) in the command and the log file name,
# e.g. "echo JOB" becomes "echo 3" for the 3rd job and "echo 8" for the 8th job, respectively.
# Note that the range must start from a positive number, so you can't use "JOB=0:10", for example.
#
# run.pl, queue.pl, slurm.pl, and ssh.pl have a unified interface that does not depend on the backend.
# Each option is mapped to a backend-specific option, as configured in
# "conf/queue.conf" and "conf/slurm.conf" by default.
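# For example, with the "sge" backend, "--mem 4G" is mapped through the rule
# "option mem=* -l mem_free=$0,ram_free=$0" in conf/queue.conf, producing
# "-l mem_free=4G,ram_free=4G" on the generated qsub command line.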
# If jobs fail, your configuration might be wrong for your environment.
#
#
# The official documentation for run.pl, queue.pl, slurm.pl, and ssh.pl:
# "Parallelization in Kaldi": http://kaldi-asr.org/doc/queue.html
# =========================================================


# Select the backend used by run.sh from "local", "stdout", "sge", "pbs", "slurm", or "ssh"
cmd_backend='local'

# Local machine, without any Job scheduling system
if [ "${cmd_backend}" = local ]; then

# Used for all other jobs (e.g. data preparation and feature extraction)
export train_cmd="run.pl"
# Used for "*_train.py": "--gpu" is appended optionally by run.sh
export cuda_cmd="run.pl"
# Used for "*_recog.py"
export decode_cmd="run.pl"

# Local machine logging to stdout and log file, without any Job scheduling system
elif [ "${cmd_backend}" = stdout ]; then

# Used for all other jobs (e.g. data preparation and feature extraction)
export train_cmd="stdout.pl"
# Used for "*_train.py": "--gpu" is appended optionally by run.sh
export cuda_cmd="stdout.pl"
# Used for "*_recog.py"
export decode_cmd="stdout.pl"


# "qsub" (Sun Grid Engine, or derivation of it)
elif [ "${cmd_backend}" = sge ]; then
# The default setting is written in conf/queue.conf.
# You must change "-q g.q" to match the queue in your environment.
# To list the queue names, type "qhost -q".
# Note that to use "--gpu *", you have to set up "complex_value" for the system scheduler.

export train_cmd="queue.pl"
export cuda_cmd="queue.pl"
export decode_cmd="queue.pl"


# "qsub" (Torque/PBS.)
elif [ "${cmd_backend}" = pbs ]; then
# The default setting is written in conf/pbs.conf.

export train_cmd="pbs.pl"
export cuda_cmd="pbs.pl"
export decode_cmd="pbs.pl"


# "sbatch" (Slurm)
elif [ "${cmd_backend}" = slurm ]; then
# The default setting is written in conf/slurm.conf.
# You must change "-p cpu" and "-p gpu" to match the partitions in your environment.
# To list the partition names, type "sinfo".
# You can use "--gpu *" by default for slurm, and it is interpreted as "--gres gpu:*".
# The devices are allocated exclusively using "${CUDA_VISIBLE_DEVICES}".

export train_cmd="slurm.pl"
export cuda_cmd="slurm.pl"
export decode_cmd="slurm.pl"

elif [ "${cmd_backend}" = ssh ]; then
# You have to create ".queue/machines" to specify the hosts that execute jobs.
# e.g. .queue/machines
# host1
# host2
# host3
# Assuming you can log in to them without a password, i.e., you have to set up SSH keys.

export train_cmd="ssh.pl"
export cuda_cmd="ssh.pl"
export decode_cmd="ssh.pl"

# This is an example of specifying several unique options in the JHU CLSP cluster setup.
# Users can modify/add their own command options according to their cluster environments.
elif [ "${cmd_backend}" = jhu ]; then

export train_cmd="queue.pl --mem 2G"
export cuda_cmd="queue-freegpu.pl --mem 2G --gpu 1 --config conf/queue.conf"
export decode_cmd="queue.pl --mem 4G"

else
echo "$0: Error: Unknown cmd_backend=${cmd_backend}" 1>&2
return 1
fi
2 changes: 2 additions & 0 deletions egs2/ksc/asr1/conf/decode.yaml
@@ -0,0 +1,2 @@
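# Joint CTC/attention one-pass decoding: each hypothesis is scored as
# ctc_weight * CTC score + (1 - ctc_weight) * attention-decoder score.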
beam_size: 10
ctc_weight: 0.1
2 changes: 2 additions & 0 deletions egs2/ksc/asr1/conf/fbank.conf
@@ -0,0 +1,2 @@
--sample-frequency=16000
--num-mel-bins=80
11 changes: 11 additions & 0 deletions egs2/ksc/asr1/conf/pbs.conf
@@ -0,0 +1,11 @@
# Default configuration
command qsub -V -v PATH -S /bin/bash
option name=* -N $0
option mem=* -l mem=$0
option mem=0 # Do not add anything to qsub_opts
option num_threads=* -l ncpus=$0
option num_threads=1 # Do not add anything to qsub_opts
option num_nodes=* -l nodes=$0:ppn=1
default gpu=0
option gpu=0
option gpu=* -l ngpus=$0
1 change: 1 addition & 0 deletions egs2/ksc/asr1/conf/pitch.conf
@@ -0,0 +1 @@
--sample-frequency=16000
12 changes: 12 additions & 0 deletions egs2/ksc/asr1/conf/queue.conf
@@ -0,0 +1,12 @@
# Default configuration
command qsub -v PATH -cwd -S /bin/bash -j y -l arch=*64*
option name=* -N $0
option mem=* -l mem_free=$0,ram_free=$0
option mem=0 # Do not add anything to qsub_opts
option num_threads=* -pe smp $0
option num_threads=1 # Do not add anything to qsub_opts
option max_jobs_run=* -tc $0
option num_nodes=* -pe mpi $0 # You must set this PE as allocation_rule=1
default gpu=0
option gpu=0
option gpu=* -l gpu=$0 -q g.q
14 changes: 14 additions & 0 deletions egs2/ksc/asr1/conf/slurm.conf
@@ -0,0 +1,14 @@
# Default configuration
command sbatch --export=PATH
option name=* --job-name $0
option time=* --time $0
option mem=* --mem-per-cpu $0
option mem=0
option num_threads=* --cpus-per-task $0
option num_threads=1 --cpus-per-task 1
option num_nodes=* --nodes $0
default gpu=0
option gpu=0 -p cpu
option gpu=* -p gpu --gres=gpu:$0 -c $0 # Recommend allocating at least as many CPUs as GPUs
# note: the --max-jobs-run option is supported as a special case
# by slurm.pl and you don't have to handle it in the config file.
62 changes: 62 additions & 0 deletions egs2/ksc/asr1/conf/train.yaml
@@ -0,0 +1,62 @@
batch_type: folded
batch_size: 128
accum_grad: 1
max_epoch: 100
patience: 5
init: xavier_uniform
best_model_criterion:
-   - valid
    - acc
    - max
keep_nbest_models: 10
use_amp: true

encoder: transformer
encoder_conf:
    output_size: 256
    attention_heads: 4
    linear_units: 2048
    num_blocks: 12
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.0
    input_layer: conv2d
    normalize_before: true

decoder: transformer
decoder_conf:
    attention_heads: 4
    linear_units: 2048
    num_blocks: 6
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    self_attention_dropout_rate: 0.0
    src_attention_dropout_rate: 0.0

model_conf:
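    # Hybrid CTC/attention objective: loss = ctc_weight * CTC + (1 - ctc_weight) * attention;
    # lsm_weight applies label smoothing to the attention (cross-entropy) loss.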
    ctc_weight: 0.3
    lsm_weight: 0.1
    length_normalized_loss: false

optim: adam
optim_conf:
    lr: 1.0e-4
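# Noam-style warmup: the learning rate rises linearly for warmup_steps updates,
# then decays proportionally to the inverse square root of the step number.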
scheduler: warmuplr
scheduler_conf:
    warmup_steps: 30000

specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.
- 0.05
num_time_mask: 10
1 change: 1 addition & 0 deletions egs2/ksc/asr1/db.sh
50 changes: 50 additions & 0 deletions egs2/ksc/asr1/local/data.sh
@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Copyright 2023 ISSAI (author: Yerbolat Khassanov)
# Apache 2.0

set -e
set -u
set -o pipefail

log() {
local fname=${BASH_SOURCE[1]##*/}
echo -e "$(date '+%Y-%m-%dT%H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
}
help_message=$(cat << EOF
Usage: $0

Options:
--remove_archive (bool): true or false
With remove_archive=True, the archives will be removed after being successfully downloaded and un-tarred.
EOF
)
SECONDS=0

log "$0 $*"
. ./utils/parse_options.sh

. ./db.sh
. ./path.sh
. ./cmd.sh
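# db.sh defines the corpus locations (e.g. KSC=downloads), path.sh sets up the
# environment, and cmd.sh selects the job-scheduling backend described above.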


if [ -z "${KSC}" ]; then
log "Error: \$KSC is not set in db.sh."
exit 2
fi

log "Download data to ${KSC}"
if [ ! -d "${KSC}" ]; then
mkdir -p "${KSC}"
fi

# resolve KSC to an absolute path
KSC=$(cd "${KSC}"; pwd)

echo local/download_data.sh
local/download_data.sh "${KSC}"

echo local/prepare_data.sh
local/prepare_data.sh "${KSC}"

log "Successfully finished. [elapsed=${SECONDS}s]"
19 changes: 19 additions & 0 deletions egs2/ksc/asr1/local/download_data.sh
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

# Copyright 2023 ISSAI (author: Yerbolat Khassanov)
# Apache 2.0

KSC=$1
cd "${KSC}"

# Kazakh speech corpus (KSC):
if [ ! -e ISSAI_KSC_335RS_v1.1_flac ]; then
echo "$0: downloading KSC data (it won't re-download if it was already downloaded.)"
wget --continue https://www.openslr.org/resources/102/ISSAI_KSC_335RS_v1.1_flac.tar.gz || exit 1
tar xf "ISSAI_KSC_335RS_v1.1_flac.tar.gz"
else
echo "$0: not downloading or un-tarring ISSAI_KSC_335RS_v1.1_flac because it already exists."
fi


exit 0
Empty file added egs2/ksc/asr1/local/path.sh
36 changes: 36 additions & 0 deletions egs2/ksc/asr1/local/prepare_data.sh
@@ -0,0 +1,36 @@
#!/usr/bin/env bash
# Copyright 2023 ISSAI (author: Yerbolat Khassanov)
# Apache 2.0

# To be run from one directory above this script.

. ./path.sh

KSC=$1

if [ -d "data" ]; then
echo "data directory exists. skipping data preparation!"
exit 0
fi

# Prepare: test, train, dev
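# Each split below produces a Kaldi-style data directory. The lines written
# have the form (hypothetical utterance IDs):
#   text:     <uttID> <transcription from Transcriptions/<uttID>.txt>
#   wav.scp:  <uttID> <path to Audios_flac/<uttID>.flac>
#   utt2spk:  <uttID> <uttID>   (each utterance is treated as its own speaker)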
for filename in "${KSC}"/ISSAI_KSC_335RS_v1.1_flac/Meta/*; do
dir=data/$(basename "$filename" .csv)
rm -rf $dir && mkdir -p $dir
echo $dir

{
read -r # skip the CSV header line
while IFS=" " read -r uttID others
do
echo "$uttID $(cat $KSC/ISSAI_KSC_335RS_v1.1_flac/Transcriptions/${uttID}.txt)" >> ${dir}/text
echo "$uttID $KSC/ISSAI_KSC_335RS_v1.1_flac/Audios_flac/${uttID}.flac" >> ${dir}/wav.scp
echo "$uttID $uttID" >> ${dir}/utt2spk
done
} < $filename

utils/utt2spk_to_spk2utt.pl $dir/utt2spk > $dir/spk2utt
# Fix and check that data dirs are okay!
utils/fix_data_dir.sh $dir
utils/validate_data_dir.sh --no-feats $dir || exit 1
done
1 change: 1 addition & 0 deletions egs2/ksc/asr1/path.sh
1 change: 1 addition & 0 deletions egs2/ksc/asr1/pyscripts