Refactored code and small changes #21
Conversation
- Restructured the codebase as mentioned in the issue ARBML#18
- Unexpected changes:
  - Missing packages to run `demo.ipynb`:
    - jupyter
    - inflect
    - matplotlib
    - gdown
  - Changed hardcoded paths in FastSpeech2 to be passed to the module.
  - Removed `inference.py` as it doesn't correlate to anything actually being used in the library.
pyproject.toml (outdated)
[tool.poetry.dependencies]
python = "^3.8"
rich = "^12.2.0"
I do not think we are using rich in klaam here, are we?
I just included `rich` because it makes the output nicer :p There isn't any harm in that. I can remove it.
mishkal = "^0.4.1"
codernitydb3 = "^0.6.0"
pypinyin = "^0.46.0"
transformers = "^4.18.0"
I am not very familiar with poetry and the pyproject.toml syntax. However, if '^' means this version or higher, then I do not think this is recommended, because backward compatibility is not guaranteed in such highly dynamic packages. I ran into these issues before; one of them was with the jiwer package :(.
You are correct, `^` means this version or higher; however, when it installs, it will only install that exact version, not the latest version possible (to my knowledge). However, this tag is helpful when you have a new package that you want to install but it requires a higher version of an already installed package, so poetry would install the new package and update the old package to a new version.

For example, say I have installed numpy and got this version: `numpy = "^0.1.0"`. Then I wanted to install `pandas`, which requires `numpy >= 0.5.0`, so `poetry` will update numpy to be `numpy = "^0.5.0"` and then install `pandas`.
You can restrict the versioning of packages in two ways:

- When installing the package, explicitly state the version of the package. For example, `poetry add numpy=0.1.0` will install numpy at `0.1.0`, then append/update `pyproject.toml` to be `numpy = "0.1.0"`.
- Edit `pyproject.toml` to remove `^` from the packages, then reinstall the packages.

Both methods will install a specific version of `numpy`, but for any new library that requires a different version, `poetry` will scream at you to tell you that your project has `numpy = "0.1.0"` while the newer packages require a different version of `numpy`.
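As a side note (not part of the diff): in Poetry, `^` actually allows updates up to, but not including, the next major version, e.g. `^4.18.0` means `>=4.18.0,<5.0.0`. Below is a minimal `pyproject.toml` sketch contrasting the two constraint styles discussed above, reusing package names from this PR:

```toml
[tool.poetry.dependencies]
python = "^3.8"

# Caret constraint: the solver may later pick any compatible release
# in the range >=4.18.0,<5.0.0 when new packages are added.
transformers = "^4.18.0"

# Exact pin: only this version satisfies the constraint, so adding a
# package that needs a different pandas version fails loudly instead
# of silently upgrading.
pandas = "1.3.5"
```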
I think we should stick with your approach @sudomaze . Thanks for the details.
pytest = "^7.1.1"
pylint = "^2.13.5"
autopep8 = "^1.6.0"
gdown = "^4.4.0"
I think we need this to download libraries from google drive. Should it be in the requirements section instead of dev_requirements?
I agree with you. I will move it to `dependencies` rather than `dev-dependencies`.
pyproject.toml (outdated)
pylint = "^2.13.5"
autopep8 = "^1.6.0"
gdown = "^4.4.0"
pandas = "1.3.5"
I think `pandas` and `tqdm` should be in dependencies instead of dev-dependencies.
I agree with you. I will move them to `dependencies` rather than `dev-dependencies`.
pyproject.toml (outdated)
readme = 'README.md'

[tool.poetry.dependencies]
python = "^3.8"
@zaidalyafeai, I think most of the code runs on Google Colab. Google Colab currently runs on Python 3.7, right?
I couldn't run the project using python `3.7.13` as some packages require newer versions. Google Colab does use python `3.7.13`, so maybe we do need to resolve some conflicts with packages to be able to use python `3.7.13`.
If Python 3.8 is covered by most LTS Linux OSes (I am assuming users will have a Linux LTS on their machines), then this should be fine. Debian 11 supports Python 3.9, and Ubuntu 20.04 supports Python 3.8. I think it should be fine to stick with Python 3.8 and onwards. Google Colab will update soon.
.envs/environment.yml (outdated)
- ffmpeg
- cudatoolkit
- pytorch
- torchaudio
I think soundfile package is missing?
I think `soundfile` is a dependency of another package (maybe `torchaudio`) that is installed by default. I have managed to run `notebooks/demo.ipynb` without installing `soundfile` explicitly.

To give more context, `.env/dev_environment.yml` contains the same packages as `.env/environment.yml` but the CPU versions, so we can run CI on GitHub servers.
You are right about soundfile [https://pytorch.org/audio/stable/backend.html]
.envs/environment.yml (outdated)
- python=3.8
- poetry
# klaam dependencies
- librosa
I am really confused about which should be considered the dev environment vs. the environment, and about the choice of packages for each.
To give more context, `.env/dev_environment.yml` contains the same packages as `.env/environment.yml` but the CPU versions, so we can run CI on GitHub servers.

What exists in `environment.yml` are the core packages that are heavy to install, so we use `conda` to install them while setting up the environment.
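To make that split concrete, here is a hypothetical minimal pair (package names are taken from the diff above, but the real `.envs/*.yml` files may differ):

```yaml
# .envs/environment.yml: GPU environment for actual development (sketch)
name: klaam
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.8
  - poetry
  # heavy klaam dependencies installed via conda
  - ffmpeg
  - librosa
  - pytorch
  - torchaudio
  - cudatoolkit

# .envs/dev_environment.yml would be identical except that `cudatoolkit`
# is swapped for `cpuonly`, giving a CPU-only build that CI runners can use.
```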
notebooks/.gitignore (outdated)
@@ -0,0 +1,3 @@
*
!*.ipynb
!.gitignore
Do we really need to have a `.gitignore` for each subdirectory?
The main `.gitignore` in the root directory is used to ignore everything globally (i.e. what is in the `klaam` directory, the `scripts` directory, and other directories directly related to the logic code, like the `data` directory). `notebooks` is an external module that isn't a logic component but a different type of scripts, used for demoing. Here is my logic for having multiple `.gitignore` files:

- `.gitignore` = global things to be ignored and what is in the logic code (i.e. `klaam`).
- `notebooks/.gitignore` = ignores what is not `ipynb`.
- `ckpts/.gitignore` = ignores everything besides the `.gitignore` file. We don't want GitHub to remove this folder when we push the code to GitHub servers (sketched below).
- `output/.gitignore` = same thing as `ckpts/.gitignore`.
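For completeness, the "keep only the .gitignore" pattern described for `ckpts/` and `output/` would presumably be just:

```gitignore
# Ignore everything in this folder except this file, so Git keeps the folder itself
*
!.gitignore
```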
#!/bin/bash
# Usage: ./install.sh # klaam
# Usage: ./install.sh --dev # klaam_dev
Can you please let me know what this bash code is doing? Is it setting up the conda environment, whether it is dev or not?
This is just a simple script that you run to do the following:

- Build the environment using `conda`
- Install packages using `poetry`
- Install left-over packages that couldn't be installed using `poetry` using `pip`

`./install.sh` installs the actual environment of `klaam` that uses the GPU, whereas `./install.sh --dev` installs the CPU version of the environment, called `klaam_dev`, to be able to test the build and run it on GitHub servers through GitHub Actions (CI).
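For readers following along, a rough sketch of the flow described above (this is an illustration, not the actual `install.sh`; the environment file names and the `--dev` handling are assumptions):

```bash
#!/bin/bash
# Usage: ./install.sh        # klaam (GPU environment)
# Usage: ./install.sh --dev  # klaam_dev (CPU environment, used in CI)
set -e

ENV_FILE=".envs/environment.yml"
ENV_NAME="klaam"
if [ "$1" = "--dev" ]; then
    ENV_FILE=".envs/dev_environment.yml"
    ENV_NAME="klaam_dev"
fi

# 1. Build the environment (heavy packages such as pytorch/ffmpeg) with conda
conda env create -f "$ENV_FILE"

# 2. Install the remaining Python packages with poetry inside that environment
conda run -n "$ENV_NAME" poetry install

# 3. Install any left-over packages that poetry could not handle, using pip
#    (the package name below is a placeholder)
conda run -n "$ENV_NAME" pip install some-leftover-package
```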
I see. Thanks.
data/model_weights/hifigan/LICENSE (outdated)
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
I think this file is added here by mistake?
I have copied this file to where the content now lives. This was part of the `FastSpeech2` directory (https://github.com/ARBML/klaam/tree/main/FastSpeech2/hifigan).

As we restructured the setup of the project: `FastSpeech2` contained a lot of hard-coded relative paths to the `FastSpeech2` project, which isn't good practice and won't allow the current setup to work unless we do some dirty hacks, which could cause more problems in later development of the project. Therefore, I needed to move the `configs` and `weights` files outside of the logic code. To follow the license terms, I copied the `LICENSE` file to where I moved the `configs` and `weights` files.

This can be removed if you think it isn't necessary.
Before investing more in anything related to FastSpeech, I think we should remove it. @zaidalyafeai please comment on this.
klaam/__init__.py (outdated)
@@ -0,0 +1 @@
from klaam.run import *
I really disagree with the wildcard imports, as they result in a lot of namespace conflicts. I think `from klaam.run import SpeechRecognition, SpeechClassification, TextToSpeech` should nail it?
I agree with you. I will update this to import the selected modules explicitly.
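For illustration, the explicit variant of `klaam/__init__.py` could look like this (class names are the ones mentioned above; the actual module layout may differ):

```python
# klaam/__init__.py
# Explicit imports avoid the namespace pollution of `from klaam.run import *`.
from klaam.run import SpeechClassification, SpeechRecognition, TextToSpeech

# Limit what `from klaam import *` re-exports to the public classes only.
__all__ = ["SpeechClassification", "SpeechRecognition", "TextToSpeech"]
```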
push:
  branches:
    - main
    - dev/*
@sudomaze, do you think we should have a stable branch that deploys to PyPI directly on merge, or is main enough? I think we should also maintain another testing branch for test.pypi. What do you think?
The idea that I had in mind was to publish to `pypi` through GitHub Releases triggers, so we will have a GitHub Action that publishes only when something is released. This will limit any mistakes of publishing bad versions, and it disconnects development code from stable releases.

We can add a nightly version of the package which pulls from a stable branch (i.e. a `nightly-release` branch).
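A rough sketch of such a release-triggered workflow (not the one in this PR; the secret name and the Poetry invocation are assumptions):

```yaml
# .github/workflows/publish.yml (sketch)
name: Publish to PyPI

on:
  release:
    types: [published]  # runs only when a GitHub Release is published

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.8"
      - name: Install Poetry
        run: pip install poetry
      - name: Build and publish
        run: poetry publish --build
        env:
          POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_TOKEN }}
```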
I see! So only one main branch is enough.
Thanks @sudomaze for this great PR. I did my best commenting on the points I did not understand or disagreed with. Thanks again.
@MagedSaeed thanks for the thorough review! I will make the required changes when I have some time in the upcoming days. I have updated
- Made the suggested changes by @MagedSaeed:
  - Removed `rich` in `pyproject.toml`
  - Moved `gdown` and `pandas` from `dev-dependencies` to `dependencies` in `pyproject.toml`
  - Re-built the environment and made the required changes to be able to use `python=3.7.13`. The issue I had previously when setting up the environment was due to the `pandas` installation. It seems that when adding `python="^3.7"` in `pyproject.toml` and then installing a new package, `poetry` will try to ensure that the package being installed works on all versions of `python` that are `>=3.7`. However, in newer versions of `pandas`, some modules changed so drastically that it breaks when `python` is `<3.8` vs. `>=3.8`. This also solved the problem of `numba` not being installable using `poetry`. For more information, check [this issue](https://togithub.com/python-poetry/poetry/issues/2454#issuecomment-632868431). [^1]
- Made additional changes to improve the setup:
  - As mentioned in [^1], fixing this issue allowed us to move all of the packages that were included in `.envs/environment.yml` (e.g. `pytorch`, `torchaudio`, etc.) to `pyproject.toml`, as we no longer have the package installation issue.
  - Moved the `mypy.ini` and `pytest.ini` configuration to `pyproject.toml` to centralize the project's configuration.
  - Changed to `black` instead of `autopep8`. For more information, check [this article](https://www.kevinpeters.net/auto-formatters-for-python).
  - Fixed hard-coded paths in `cfgs/FastSpeech2/config/Arabic/preprocess.yaml` by passing a variable called `root_path`, which is the relative/absolute path of the root directory, to the `TextToSpeech` class. (TODO: need to test the setup on Google Colab)
  - Added `pre-commit` to the build setup, which allows checking the format, auto-formatting, and other useful functionality before making a commit.
- Publishing attempt:
  - I have tried to publish the package to `PyPI` to test whether the package is modularized enough that I can `pip install` it. Because of the size limitation (`100MB` max), I needed to remove the weights and config file in `klaam/external/FastSpeech2/hifigan`.
  - Managed to publish the package! Package [page](https://pypi.org/project/klaam/). This is just a test, so I will be removing this package once we have set up the CI for it, as we discussed before.
The latest commit's message:
Formatted the code using
Managed to make a CI to publish to
Thanks @sudomaze for the updates. I just updated to your PR, tried to run
Just a couple of things before merging to main. I noticed that it really takes some time to resolve dependencies. Is this usual with poetry? I also found out that if the operation is interrupted, I need to remove
I tried to run the project on Python 3.7 as you mentioned, but this fails due to a conflict in
Once again, thanks very much for this great effort.
Yes, because we have a lot of dependencies and some of them, like
I would assume by cache as
If you pull my latest changes, you should be able to install
Sorry for being a bit late @sudomaze. Got quite busy these days. I tried to run the training scripts but found some issues regarding the model and preprocessor imports, like:
I think I need to import these classes in
Can you share or make some scripts that you think should work with the old setup? I am basing the success of my implementation on the
The training scripts in the readme file.
On the main branch codebase, I have tried the following.

run_classifier.py

Run script:

python run_classifier.py \
    --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
    --output_dir=dialects-classificationv3 \
    --cache_dir=cache \
    --freeze_feature_extractor \
    --num_train_epochs="5" \
    --per_device_train_batch_size="16" \
    --preprocessing_num_workers="1" \
    --learning_rate="3e-5" \
    --warmup_steps="20" \
    --evaluation_strategy="steps" \
    --save_steps="1000" \
    --eval_steps="1000" \
    --save_total_limit="1" \
    --logging_steps="1000" \
    --do_eval \
    --do_train

Error message:

Traceback (most recent call last):
  File "run_classifier.py", line 440, in <module>
    main()
  File "run_classifier.py", line 256, in main
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/hf_argparser.py", line 71, in __init__
    self._add_dataclass_arguments(dtype)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/hf_argparser.py", line 166, in _add_dataclass_arguments
    self._parse_dataclass_field(parser, field)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/hf_argparser.py", line 137, in _parse_dataclass_field
    parser.add_argument(field_name, **kwargs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/argparse.py", line 1373, in add_argument
    return self._add_action(action)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/argparse.py", line 1736, in _add_action
    self._optionals._add_action(action)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/argparse.py", line 1577, in _add_action
    action = super(_ArgumentGroup, self)._add_action(action)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/argparse.py", line 1387, in _add_action
    self._check_conflict(action)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/argparse.py", line 1526, in _check_conflict
    conflict_handler(action, confl_optionals)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/argparse.py", line 1535, in _handle_conflict_error
    raise ArgumentError(action, message % conflict_string)
argparse.ArgumentError: argument --gradient_checkpointing: conflicting option string: --gradient_checkpointing

run_mgb3.py

Run script:

python run_mgb3.py \
    --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
    --output_dir=dialects-classificationv3 \
    --cache_dir=data_dir \
    --freeze_feature_extractor \
    --num_train_epochs="50" \
    --per_device_train_batch_size="32" \
    --preprocessing_num_workers="1" \
    --learning_rate="3e-5" \
    --warmup_steps="20" \
    --evaluation_strategy="steps" \
    --save_steps="100" \
    --eval_steps="100" \
    --save_total_limit="1" \
    --logging_steps="100" \
    --do_eval \
    --do_train

Error message:

...
Downloading and preparing dataset egyption_speech_corpus/clean to data_dir/egyption_speech_corpus/clean/2.1.0/1d7beb58f0291b97069c539ff2a7e0d49e3fb3993a6d88415956acfd01e22911...
Traceback (most recent call last):
  File "run_mgb3.py", line 523, in <module>
    main()
  File "run_mgb3.py", line 308, in main
    train_dataset = datasets.load_dataset("egy_speech_corpus", split='train', cache_dir=model_args.cache_dir)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/datasets/load.py", line 1696, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/datasets/builder.py", line 606, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/datasets/builder.py", line 701, in _download_and_prepare
    ) from None
OSError: Cannot find data file. Original error: [Errno 2] No such file or directory: '/content/mgb3/adapt/Alaa/text_noverlap'

run_common_voice.py

Run script:

python run_common_voice.py \
    --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
    --dataset_config_name="ar" \
    --output_dir=dialects-classificationv3 \
    --cache_dir=data_dir \
    --overwrite_output_dir \
    --num_train_epochs="1" \
    --per_device_train_batch_size="32" \
    --per_device_eval_batch_size="32" \
    --evaluation_strategy="steps" \
    --learning_rate="3e-4" \
    --warmup_steps="500" \
    --fp16 \
    --freeze_feature_extractor \
    --save_steps="10" \
    --eval_steps="10" \
    --save_total_limit="1" \
    --logging_steps="10" \
    --group_by_length \
    --feat_proj_dropout="0.0" \
    --layerdrop="0.1" \
    --gradient_checkpointing \
    --do_train --do_eval \
    --max_train_samples 100 --max_val_samples 100

Error message:

...
Traceback (most recent call last):
  File "run_common_voice.py", line 511, in <module>
    main()
  File "run_common_voice.py", line 479, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/trainer.py", line 1422, in train
    tr_loss_step = self.training_step(model, inputs)
  File "run_common_voice.py", line 230, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/trainer.py", line 2043, in compute_loss
    outputs = model(**inputs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1721, in forward
    return_dict=return_dict,
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1347, in forward
    extract_features = self.feature_extractor(input_values)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 515, in forward
    hidden_states = conv_layer(hidden_states)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 390, in forward
    hidden_states = self.layer_norm(hidden_states)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    input, self.normalized_shape, self.weight, self.bias, self.eps)
  File "/home/alotaima/miniconda3/envs/klaam/lib/python3.7/site-packages/torch/nn/functional.py", line 2486, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: CUDA out of memory. Tried to allocate 1.38 GiB (GPU 0; 10.75 GiB total capacity; 5.85 GiB already allocated; 406.06 MiB free; 8.04 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

My thoughts on the errors:
Have you tested the code locally? Or does the current codebase only work on
No, only on Colab.
Don't you think it is a problem? This requires more refactoring 😫
@MagedSaeed it should be the following
I think this will also happen on Colab if a later version of transformers is installed. I remember I stumbled into this error before. If I recall correctly, this is because this flag was introduced in early versions of the package; in later versions, they removed it, I guess.
@zaidalyafeai @MagedSaeed If you prefer, we can set up a call to discuss project plans and what to do. What do you think?
I'm traveling. If Maged can meet, go ahead.
It turns out that this issue has been resolved in #19. I think you forked an older version of the code before this fix happened. Can you make sure to include these latest changes, please?
@MagedSaeed this resolves one of multiple problems caused by hardcoded paths that make the library only work for
@sudomaze do not worry about the hardcoded paths. I will try to figure it out somehow. At least, please make sure you pulled the latest changes to fix the
@MagedSaeed Per this commit, ca6935a, it seems that this PR was pulled from the latest changes; however, that commit didn't update
@sudomaze I checked from my side, and things seem to be fine for me. Thanks for the effort @sudomaze, big kudos. I am accepting the PR and applying the merge. @zaidalyafeai I am getting errors when running the training scripts as I do not have the datasets on my machine. Can you please make sure things are running as expected with the datasets you have? Also, we need to make sure that we are managing paths properly.
@MagedSaeed thanks for the great feedback! I will update #18 to include the leftovers from this PR like fixing hard-coded paths, the removal of FastSpeech2, utilizing huggingface, etc. |