
omegaconf.errors.ConfigAttributeError: Key 'checkpoint_activations' not in 'HubertConfig' #4057

Closed
EmreOzkose opened this issue Dec 6, 2021 · 6 comments

@EmreOzkose

🐛 Bug

Hi,

When I tried to load a HuBERT model, I got this error:

Python 3.8.12 (default, Oct 12 2021, 13:49:34) 
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import fairseq
>>> ckpt_path = "/path/to/fairseq/pretrained_models/hubert_xtralarge_ll60k_finetune_ls960_modified.pt"
>>> models, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
2021-12-06 10:55:12 | INFO | fairseq.tasks.hubert_pretraining | current directory is /path/to/fairseq/scripts
2021-12-06 10:55:12 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['ltr'], 'label_dir': None, 'label_rate': -1, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 300000, 'min_sample_size': None, 'single_target': True, 'random_crop': False, 'pad_audio': False}
2021-12-06 10:55:12 | INFO | fairseq.tasks.hubert_pretraining | current directory is /path/to/fairseq/scripts
2021-12-06 10:55:12 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': '/checkpoint/abdo/old_checkpoint02/datasets/librispeech/960h/raw_repeated', 'fine_tuning': False, 'labels': ['lyr9.km500'], 'label_dir': '/path/to/fairseq/scripts', 'label_rate': 50, 'sample_rate': 16000, 'normalize': True, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
2021-12-06 10:55:12 | INFO | fairseq.models.hubert.hubert | HubertModel Config: {'_name': 'hubert', 'label_rate': 50, 'extractor_mode': layer_norm, 'encoder_layers': 48, 'encoder_embed_dim': 1280, 'encoder_ffn_embed_dim': 5120, 'encoder_attention_heads': 16, 'activation_fn': gelu, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.1, 'dropout_input': 0.0, 'dropout_features': 0.0, 'final_dim': 1024, 'untie_final_proj': True, 'layer_norm_first': True, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.0, 'mask_length': 10, 'mask_prob': 0.5, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 64, 'mask_channel_prob': 0.25, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': True}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/path/to/fairseq/fairseq_latest/fairseq/checkpoint_utils.py", line 462, in load_model_ensemble_and_task
    model = task.build_model(cfg.model)
  File "/path/to/fairseq/fairseq_latest/fairseq/tasks/fairseq_task.py", line 335, in build_model
    model = models.build_model(cfg, self)
  File "/path/to/fairseq/fairseq_latest/fairseq/models/__init__.py", line 105, in build_model
    return model.build_model(cfg, task)
  File "/path/to/fairseq/fairseq_latest/fairseq/models/hubert/hubert_asr.py", line 146, in build_model
    w2v_encoder = HubertEncoder(cfg, task.target_dictionary)
  File "/path/to/fairseq/fairseq_latest/fairseq/models/hubert/hubert_asr.py", line 272, in __init__
    model = task.build_model(w2v_args.model)
  File "/path/to/fairseq/fairseq_latest/fairseq/tasks/fairseq_task.py", line 335, in build_model
    model = models.build_model(cfg, self)
  File "/path/to/fairseq/fairseq_latest/fairseq/models/__init__.py", line 105, in build_model
    return model.build_model(cfg, task)
  File "/path/to/fairseq/fairseq_latest/fairseq/models/hubert/hubert.py", line 302, in build_model
    model = HubertModel(cfg, task.cfg, task.dictionaries)
  File "/path/to/fairseq/fairseq_latest/fairseq/models/hubert/hubert.py", line 265, in __init__
    self.encoder = TransformerEncoder(cfg)
  File "/path/to/fairseq/fairseq_latest/fairseq/models/wav2vec/wav2vec2.py", line 858, in __init__
    if args.checkpoint_activations:
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 305, in __getattr__
    self._format_and_raise(key=key, value=None, cause=e)
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
    format_and_raise(
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
    _raise(ex, cause)
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
    raise ex  # set end OC_CAUSE=1 for full backtrace
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 303, in __getattr__
    return self._get_impl(key=key, default_value=DEFAULT_VALUE_MARKER)
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 361, in _get_impl
    node = self._get_node(key=key)
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 383, in _get_node
    self._validate_get(key)
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 135, in _validate_get
    self._format_and_raise(
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
    format_and_raise(
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/_utils.py", line 694, in format_and_raise
    _raise(ex, cause)
  File "/path/to/miniconda3/envs/fairseq/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
    raise ex  # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ConfigAttributeError: Key 'checkpoint_activations' not in 'HubertConfig'
	full_key: w2v_args.checkpoint_activations
	reference_type=Optional[HubertConfig]
	object_type=HubertConfig

To Reproduce

I am following the instructions here.

Environment

  • fairseq Version (e.g., 1.0 or main): '1.0.0a0+0dfd6b6'
  • PyTorch Version: '1.10.0+cu102'
  • OS (e.g., Linux): Ubuntu
  • How you installed fairseq (pip, source): pip install --editable ./
  • Python version: 3.8.12
  • CUDA/cuDNN version: CUDA Version: 11.1
  • GPU models and configuration: Tesla P100
@dan-wells

This PR fixed it for me: #4043

@EmreOzkose (Author)

It works, thanks a lot :) @dan-wells

@qingyundou

Hi, I am still having this issue (same code and error) after pulling the latest commits. Could you elaborate a bit more on the solution? I have read #4043 but did not figure out what to do.

Environment

Same as above, except for:

  • PyTorch Version (e.g., 1.0): 1.10.0
  • Python version: 3.7.11
  • CUDA/cuDNN version: 11.0
  • GPU models and configuration: GeForce RTX 2080

@EmreOzkose (Author)

Forget about pulling the latest commits :). Just change this file:

https://github.com/pytorch/fairseq/pull/4043/files

@EmreOzkose (Author)

Actually, just adding these 3 lines here, around line 206, is enough:

    checkpoint_activations: bool = field(
        default=False,
        metadata={"help": "recompute activations and save memory for extra compute"},
    )
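
For context, here is a rough sketch of where that field ends up. The surrounding class skeleton (the fairseq/models/hubert/hubert.py location, the FairseqDataclass base, and the placeholder fields) is my reading of the traceback above, not a quote of PR #4043:

    # Sketch only, not the actual fairseq source: where the new field sits
    # inside the HubertConfig dataclass (assumed: fairseq/models/hubert/hubert.py).
    from dataclasses import dataclass, field

    from fairseq.dataclass import FairseqDataclass

    @dataclass
    class HubertConfig(FairseqDataclass):
        # ... all existing fields (label_rate, encoder_layers, ...) stay unchanged ...

        # New field: gives the `args.checkpoint_activations` lookup in
        # TransformerEncoder a default instead of raising ConfigAttributeError.
        checkpoint_activations: bool = field(
            default=False,
            metadata={"help": "recompute activations and save memory for extra compute"},
        )

After that change, the load_model_ensemble_and_task call from the first post should get past the checkpoint_activations lookup.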

@qingyundou

That fixed the problem, thanks a lot @EmreOzkose!
